
Will you update to iOS 15 and let Apple’s operating system search your personal devices for CSAM?

  • Yes, I will update to iOS 15
    Votes: 117 (68.0%)
  • No, I will stay on iOS 14.7 or earlier
    Votes: 55 (32.0%)

Total voters: 172
You’re missing the point entirely.

I don't think I am. I understand that what you're saying is happening (or could happen) is a bad thing, but I'm saying at least it's not underhanded - there's still transparency. People are still informed and have the freedom to NOT participate (no one NEEDS to use iCloud for photos or the other content in question). I thought the real deep-down concern people had regarding abuse would be that they'd be "surprised" by it somehow - like it was happening without their knowledge.
 
Turning off the photos toggle in iCloud settings should be enough to stop it.

I understand that. I was just clarifying that when we say it's not optional, it's assumed and obvious that we're talking about people using iCloud photos. To say it's optional because you don't have to use iCloud for photos is sort of like saying "Drug tests at your workplace are optional. Just quit!" LOL!
 
I understand that. I was just clarifying that when we say it's not optional, it's assumed and obvious that we're talking about people using iCloud photos. To say it's optional because you don't have to use iCloud for photos is sort of like saying "Drug tests at your workplace are optional. Just quit!" LOL!

I said iCloud photo upload is optional and you responded it’s not optional when using iCloud. You’re just trying to adapt your argument to avoid being wrong so I’m over this conversation.
 
  • Wow
Reactions: usagora
I said iCloud photo upload is optional and you responded it’s not optional when using iCloud. You’re just trying to adapt your argument to avoid being wrong so I’m over this conversation.

Um, what? That's what I've been talking about the whole time. It's assumed we're talking about iCloud for photos since this is all about... you know, PHOTOS being scanned. smh... there's this thing called "context" in conversations. The on-device scanning is part and parcel of using iCloud for photos, so people who currently use iCloud for photos do NOT have the option to "opt out of" or "opt into" the scan. Saying it's "optional" because you can just turn off the photo upload is sort of pointless, because then they don't get to use the service at all - thus proving it's NOT optional - it's mandatory in order to use the service. That's not me trying to avoid being wrong; that's just facts 🤷‍♂️
 
  • Like
Reactions: TimFL1
So riddle me this... Apple has been doing facial and object recognition on your devices for quite a while now... since iOS 10, have they not? They told us we could search by people, animals, plants, landscapes, etc. How do we know what other types of recognition they could have been doing behind the scenes without exposing it to us? Obviously they've been working to identify body parts, because soon they'll be able to warn kids if an image may contain one (this feature is not to be confused with the CSAM stuff). So theoretically, Apple could have been keeping track of how many explicit photos or gun photos you may have all along since iOS 10. The "scanning" of your photos has been around for quite a while, so why the sudden uproar now? It feels a little too late in my opinion (and maybe we should have stayed on iOS 9).

Not to mention, this type of scanning for object recognition (using AI) is not the same type of scanning used for CSAM (hashes). I really couldn't care less if Apple knows that one of my photos hashes out to "D9CCE882EE690AF3A78C77". That is keeping privacy in mind, as the hash does not identify what was in the photo, nor can it be reversed to regenerate the photo. (Of course, if it was a match in the database, then yes, they can see the photo to verify whether it was a positive match or not.)
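To make that concrete, here's a toy sketch in Swift of the lookup idea (SHA-256 stands in for Apple's perceptual NeuralHash, and the hash value, names, and photo bytes below are all made up for illustration):

```swift
import Foundation
import CryptoKit

// Known-bad hashes as hex strings. This entry is a placeholder,
// not a value from any real database.
let knownHashes: Set<String> = ["d9cce882ee690af3a78c77"]

// Hash the raw photo bytes. A real perceptual hash (NeuralHash) survives
// resizing/re-encoding; SHA-256 is used here only to keep the demo simple.
func hexHash(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Membership check: the only thing learned about the photo is yes/no.
func matchesKnownDatabase(_ imageData: Data) -> Bool {
    knownHashes.contains(hexHash(of: imageData))
}

let photo = Data([0x89, 0x50, 0x4E, 0x47]) // stand-in bytes for a photo file
print(matchesKnownDatabase(photo)) // false
```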

Google has been doing CSAM scanning for years now. However, not only do they check the hashes against a database of known material, they also use AI to help detect new material that hasn't been identified yet. This is where it crosses the line on privacy, and I think Apple knows that.
 
So when they talk about scanning photos before they're uploaded to iCloud, does that refer only to new photos uploaded once the update hits? Or all the photos you currently have in iCloud? Trying to see if I need to delete all photos containing my family, sensitive work documents, etc., or if it's not even worth it if they're going to be scanned anyway.
 
So riddle me this... Apple has been doing facial and object recognition on your devices for quite a while now... since iOS 10, have they not? They told us we could search by people, animals, plants, landscapes, etc. How do we know what other types of recognition they could have been doing behind the scenes without exposing it to us? Obviously they've been working to identify body parts, because soon they'll be able to warn kids if an image may contain one (this feature is not to be confused with the CSAM stuff). So theoretically, Apple could have been keeping track of how many explicit photos or gun photos you may have all along since iOS 10. The "scanning" of your photos has been around for quite a while, so why the sudden uproar now? It feels a little too late in my opinion (and maybe we should have stayed on iOS 9).

Not to mention, this type of scanning for object recognition (using AI) is not the same type of scanning used for CSAM (hashes). I really couldn't care less if Apple knows that one of my photos hashes out to "D9CCE882EE690AF3A78C77". That is keeping privacy in mind, as the hash does not identify what was in the photo, nor can it be reversed to regenerate the photo. (Of course, if it was a match in the database, then yes, they can see the photo to verify whether it was a positive match or not.)

Google has been doing CSAM scanning for years now. However, not only do they check the hashes against a database of known material, they also use AI to help detect new material that hasn't been identified yet. This is where it crosses the line on privacy, and I think Apple knows that.
We don’t know, but the “scanning” was promised to be on-device only, with nothing sent to Apple. CSAM detection very explicitly says that matched pictures have their hashes uploaded to Apple alongside the photo, and if enough matched pictures are found, it all gets flagged and reviewed by a human. From there, it goes to the authorities as your Apple ID is locked down. So now, if you do collect CSAM material (plus whatever other material is disallowed in the future), instead of detectives and police working a case and needing a warrant to search your devices, your device waves a big red flag and rats you out. That’s a dangerous change of pace, even if it is done for good.
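A rough Swift sketch of that threshold-then-review step (the type names are invented, and the threshold of 30 is the figure Apple cited after the announcement, so treat it as an assumption):

```swift
// Below the threshold, isolated matches stay opaque to the server; only
// once an account crosses it does anything get flagged for human review.
// Types and numbers here are illustrative, not Apple's implementation.
struct SafetyVoucher {
    let matchesDatabase: Bool
}

let reviewThreshold = 30 // assumed; Apple publicly mentioned roughly 30 matches

func escalatesToHumanReview(_ vouchers: [SafetyVoucher]) -> Bool {
    vouchers.filter(\.matchesDatabase).count >= reviewThreshold
}

let uploads = Array(repeating: SafetyVoucher(matchesDatabase: false), count: 500)
print(escalatesToHumanReview(uploads)) // false: no matches, nothing is flagged
```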

Oh, how I wanted to stay on iOS 9. The addition of AI features in iOS 10, along with the bubbly interface, just disgusted me. iOS 9 had a professional look to it and ran rather well on my devices. That is, until my device actually deactivated itself and forced me to update it (that series of events led to me making my signature here what it is).

When it comes to Google, it’s hard for that company to surprise me now. Usually, it’s when I find out that some privacy-invading method of collecting data ISN’T being used by them. There’s a reason I haven’t touched their software in 6 years. What creeped me out was that Google Maps had a full record, not only of places I’d been, but the routes I took to get there, despite me not using navigation and having cellular data turned off while in the car. No wonder battery life was crap on my old Android phone: it had the GPS on full-time in the background, unbeknownst to GPS notification apps. At least back then, one could disable Google Play Services and shut it all down, but no longer. I actually went a year or two with a flip phone after that because I lost all trust in smartphones. I experimented with iOS on an iPod Touch but still managed to be creeped out by the fact that it had pinpoint accuracy on my location using only nearby Wi-Fi networks (which I wasn’t even connected to).
 
So when they talk about scanning photos before they're uploaded to iCloud, does that refer only to new photos uploaded once the update hits? Or all the photos you currently have in iCloud? Trying to see if I need to delete all photos containing my family, sensitive work documents, etc., or if it's not even worth it if they're going to be scanned anyway.
That’s a very good question. Plus, once they find out that most people with stuff to hide don’t use the cloud, how long until it changes to scanning and reporting directly from the device, without iCloud?
 
  • Like
Reactions: user_xyz
So when they talk about scanning photos before they're uploaded to iCloud, does that refer only to new photos uploaded once the update hits? Or all the photos you currently have in iCloud? Trying to see if I need to delete all photos containing my family, sensitive work documents, etc., or if it's not even worth it if they're going to be scanned anyway.
“Scanning” isn’t the best word for this topic. To put it simply, they are basically converting your photo to a unique number and then checking to see if it’s in a database of known numbers. They aren’t actively looking at your photos or gathering information from them.
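As a quick illustration of the “unique number” idea, here’s a toy Swift demo (again with SHA-256 as a stand-in; Apple’s NeuralHash is perceptual, so unlike this demo it tolerates small edits to the image):

```swift
import Foundation
import CryptoKit

// Two nearly identical inputs yield unrelated digests, which is why the
// number by itself says nothing about what the photo contains.
let original = Data("pretend these are photo bytes".utf8)
var tweaked = original
tweaked[0] ^= 1 // flip a single bit

func hex(_ data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

print(hex(original))
print(hex(tweaked)) // a completely different digest
```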

Here’s Apple’s technical summary on this feature. It doesn’t answer your question, but may be of interest.
 
  • Like
Reactions: usagora
Just like someone else in the comments, I’m also on the iOS 15 beta, and my watch is on the beta too, so I can’t even downgrade anymore. Plus, my entire library has been in the cloud for years, which I believe has been scanned to death at this point, so whatever.
 
I’m on the beta so I’ll say that puts me in the “yes” automatically.

Do I wonder what this could be used to do in the future? Sure I do. At the same time though, it would be an absolute reputation killer for Apple if this goes wrong. They are where they are partly because of that reputation. They don’t want to blow it.

I am the father of a daughter, so I actually like the sound of the parental notification thing. She’s not at the age to have a phone yet though, so that’s not a feature I’d be utilizing anytime soon.
I hope it does destroy Apple’s reputation, as it IS a reputation killer.

I do support this parental control, as kids obviously need to be taught how to use these modern communication devices, but I do not support getting my photos scanned. I do not have any illegit pictures, but the scan just says: Dear user, you might possibly be lying about the contents of your picture library, so we are checking it to be sure. No, Apple, you are not!
 
  • Like
Reactions: 09872738 and dk001
The amount of hyperbole, nonsense, misinformation and non-issues in this thread with regard to CSAM is startling IMO.

I don’t know if this is down to Apple not explaining what they are doing with this feature (I do believe they are at least partly to blame), or people can’t be bothered reading up and just instantly get on their high horses, jumping on social media and saying Apple are the devil and we’re not upgrading and they are crap and they deserve their reputation being tarnished forever blah blah blah. Crazy, just crazy!

I have read this entire thread, and I don’t think I have read a genuine case or example where people have actually raised a 100% valid point (I may absolutely be wrong, so please point me in the right direction if I’ve missed it). What is there to be scared of? No one is seeing your pictures. No one is seeing your documents. And in the massively unlikely event of a false-positive being raised and then being sent to the relevant authorities, when a human being does actually see the picture they will realise there is nothing to be concerned about. What is the problem?

I might be missing something here, I just don’t see what all the exclaiming is about. Nobody seemed to bat an eyelid when Google introduced it on their systems years ago. And I know that they don’t have the same reputation for privacy that Apple strive for and are proponents of, but it’s like some people here think Apple are going to be going through, personally, every single one of your pictures and documents and scrutinising them.
 
So now I can get in trouble when some random joker sends me a picture over SMS? What happens when an account is breached and a bunch of “unapproved” photos get uploaded to iCloud and then synced to your devices?

Have you ever had child abuse images sent to you as a joke or your Apple account breached? It is technically possible, no doubt, but what are the actual chances?
 
So now I can get in trouble when some random joker sends me a picture over SMS? What happens when an account is breached and a bunch of “unapproved” photos get uploaded to iCloud and then synced to your devices?

Not going to happen. You cannot and will not be blamed for people sending you messages no matter what the content. It’s what you then do with the content that you are responsible for.
 
Um, what? That's what I've been talking about the whole time. It's assumed we're talking about iCloud for photos since this is all about... you know, PHOTOS being scanned. smh... there's this thing called "context" in conversations. The on-device scanning is part and parcel of using iCloud for photos, so people who currently use iCloud for photos do NOT have the option to "opt out of" or "opt into" the scan. Saying it's "optional" because you can just turn off the photo upload is sort of pointless, because then they don't get to use the service at all - thus proving it's NOT optional - it's mandatory in order to use the service. That's not me trying to avoid being wrong; that's just facts 🤷‍♂️
Using the service is optional. Using the service and not having your photos scanned is not optional.
 
Short answer: yes. I can say more, but I’m out on mobile and am generally getting on with my life. When I get back to a full-size keyboard I might add something.
 
Have you ever had child abuse images sent to you as a joke or your Apple account breached? It is technically possible, no doubt, but what are the actual chances?
Not yet, but in October of 2020, I was getting 30+ unsolicited text messages a week of political crap with no way to stop it. With iOS 15 combining received pictures with your photo library, if I understood the keynote correctly, it’s possible that received photos end up automatically being saved to iCloud as part of the Shared with You feature. Granted, that should be traceable back to the sender, but just being involved can turn someone’s life upside down.

If someone gains access to my email, they’d be one step closer to getting my Apple ID. Or like I said earlier, all it takes is for someone to snatch my phone, wave it in front of my face and run off with it, and now they have access to my account because Face ID unlocked the phone for them (if I had Face ID enabled).
 
If I were that concerned about it, I would probably switch or take more thorough measures. I haven’t looked it up, but I assume iCloud photos could still be scanned while in the cloud if I don’t update? I would be too suspicious about the whole thing. However, who is to say Google won’t do the same? I might throw away modern tech altogether. Or I’ll just keep my phone and update to iOS 15.
 
The amount of hyperbole, nonsense, misinformation and non-issues in this thread with regard to CSAM is startling IMO.

I don’t know if this is down to Apple not explaining what they are doing with this feature (I do believe they are at least partly to blame), or people can’t be bothered reading up and just instantly get on their high horses, jumping on social media and saying Apple are the devil and we’re not upgrading and they are crap and they deserve their reputation being tarnished forever blah blah blah. Crazy, just crazy!

I have read this entire thread, and I don’t think I have read a genuine case or example where people have actually raised a 100% valid point (I may absolutely be wrong, so please point me in the right direction if I’ve missed it). What is there to be scared of? No one is seeing your pictures. No one is seeing your documents. And in the massively unlikely event of a false-positive being raised and then being sent to the relevant authorities, when a human being does actually see the picture they will realise there is nothing to be concerned about. What is the problem?

I might be missing something here, I just don’t see what all the exclaiming is about. Nobody seemed to bat an eyelid when Google introduced it on their systems years ago. And I know that they don’t have the same reputation for privacy that Apple strive for and are proponents of, but it’s like some people here think Apple are going to be going through, personally, every single one of your pictures and documents and scrutinising them.
The secret blacklist and the fact that our devices now actively snitch on us are what scare me. Which images are disallowed? We’re not allowed to know. The system is designed so that the phone or iPad has no way to know which images were flagged, and with the noise feature of the system (your device randomly generates synthetic vouchers that look like matches, so the server can’t tell how many real matches you have), Apple’s servers are going to be constantly trying to decrypt any positive matches you may have.
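For reference, Apple’s summary calls that noise “synthetic vouchers”. A heavily simplified Swift sketch of the idea (the rate and voucher shape are my own assumptions, purely for illustration):

```swift
// Synthetic vouchers are dummy entries emitted at random so the server sees
// an ambiguous match count below the threshold. Nothing here is Apple's code.
struct Voucher {
    let looksLikeMatch: Bool
    let isSynthetic: Bool
}

func makeVoucher(realMatch: Bool, syntheticRate: Double = 0.005) -> Voucher {
    if realMatch { return Voucher(looksLikeMatch: true, isSynthetic: false) }
    let noise = Double.random(in: 0..<1) < syntheticRate
    return Voucher(looksLikeMatch: noise, isSynthetic: noise)
}

let vouchers = (0..<1_000).map { _ in makeVoucher(realMatch: false) }
print(vouchers.filter(\.looksLikeMatch).count) // a handful of fake "matches"
```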

Take a screenshot of a politically charged ad or Facebook post? That may be on the blacklist. What about a picture of the US flag? That might be bad, too.

I started to trust Apple because I could verify their security claims, like actually testing Safari’s fingerprinting protections against other browsers and seeing Safari come out way ahead. But now that they’ve put in a system that basically states they don’t trust me, I’m no longer trusting them to hold my photo library. Just as well anyway, as this will save me $120/year in dropped iCloud costs.

(On a side note, who wants to bet that the photo cable sync options on Mac also disappear in the near future?)
 
... And in the massively unlikely event of a false-positive being raised and then being sent to the relevant authorities, when a human being does actually see the picture they will realise there is nothing to be concerned about. What is the problem?
Apple is anything but a "relevant authority". It is a company that makes software and hardware. Apple is not in a position to judge whether one of my private pictures is OK or not. I am the relevant authority for my 100% legit pictures. No one gets to judge them but me.
If I publish them somewhere, then any "relevant authority" may judge them, but not on my private device.

... I generally agree with you. If Apple wants to gawp at pictures of me, fine, but it crosses a line at the point where they start checking EVERYONE by default because there might be something illicit.

Authorities (not Apple) need to find other ways to fight these images and the people behind them.
 
If I were that concerned about it, I would probably switch or take more thorough measures. I haven’t looked it up, but I assume iCloud photos could still be scanned while in the cloud if I don’t update? I would be too suspicious about the whole thing. However, who is to say Google won’t do the same? I might throw away modern tech altogether. Or I’ll just keep my phone and update to iOS 15.
I’m 100% against going back to modern Android. I’ll ditch technology before that happens. I’m certainly also removing my entire library from iCloud, and won’t update to iOS 15 for as long as I possibly can. I also foresee the day when an update comes along so that the phones will scan and report images without iCloud. It wouldn’t be the first time a controversial feature was rolled out in two stages.

When contact tracing was first announced, many people justified it by saying that it was only an API and other apps had to be built to use it, entirely missing the sentence that eventually it would be a fully functional system on its own. And when folks like me pointed that out, we were dismissed by people apologetic toward Apple. Well, iOS 13.7 came and went and enabled full functionality without an app, so now it only takes an update to switch that on without the user knowing, whereas before one had to manually download a government app to make it work. Buying the iPhone 12 is what forced me onto iOS 14, as my iPhone X made it nearly impossible to talk to people with the garbage mic software on it.
 
Not yet, but in October of 2020, I was getting 30+ unsolicited text messages a week of political crap with no way to stop it. With iOS 15 combining received pictures with your photo library, if I understood the keynote correctly, it’s possible that received photos end up automatically being saved to iCloud as part of the Shared with You feature. Granted, that should be traceable back to the sender, but just being involved can turn someone’s life upside down.

If someone gains access to my email, they’d be one step closer to getting my Apple ID. Or like I said earlier, all it takes is for someone to snatch my phone, wave it in front of my face and run off with it, and now they have access to my account because Face ID unlocked the phone for them (if I had Face ID enabled).

I have 2FA on the email account attached to my Apple ID, and the moment I realise my phone has been stolen I’ll simply do what I’d do if my bank card got stolen. In the case of my phone, even if they wave it in front of my face and run off with it, they’d still need to wave it again/enter a password to access my bank account and the like, but by the time they even try anything else I’ll have most likely already logged into iCloud elsewhere and wiped the phone.
 
  • Like
Reactions: Nightfury326