
Will you leave the Apple ecosystem because of CSAM?


1. It's not a scandal. They announced it publicly, and have now gone into significant detail on the implementation. The majority of the "backlash" (such as it is) is down to people not understanding how it works, and to some pretty shoddy media reporting.
2. To answer your question, no. Obviously not.
The issue really isn't about what is on Apple's own servers, which they have every right to scan. Even setting aside the hashes a rogue state may decide to push into the database, there is still the issue of the iMessage feature on children's accounts. How is that going to work? Someone takes a picture of the beach with two sandcastles and all of a sudden it gets flagged as sending sexual content? The scanning will be battery intensive. And if it is happening in the cloud, that is an even bigger privacy problem than matching known hashes.

People are in uproar about the wrong thing. The nanny feature is the scary part, because it seems to use AI to do it, and as far as I know it runs on the device. While I'm an adult, does this mean every single picture will now be scanned for training? We have already seen laughable results along the lines of my beach example: the English police reportedly had plenty of false positives from exactly that. Sand. Deserts.
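For what it's worth, the Messages feature as described would amount to an on-device classifier plus a confidence threshold, roughly like the sketch below. The classifier and the threshold value are stand-ins of my own; Apple has not published its model, so this only illustrates where beach-sand false positives would have to be caught.

```python
# Illustration only: an on-device explicit-image check as a classifier score
# plus a confidence threshold. classify_explicit() is a stand-in for an
# unpublished model, and 0.9 is an arbitrary threshold, not a known value.

def classify_explicit(image_bytes: bytes) -> float:
    """Hypothetical model returning P(image is sexually explicit), 0.0-1.0."""
    raise NotImplementedError("stand-in for an unpublished on-device model")

def should_blur_and_warn(image_bytes: bytes, threshold: float = 0.9) -> bool:
    """Blur the image and warn the child account only above the threshold.

    A sandy beach photo scoring well below the bar would pass untouched; the
    debate is about how often such a model scores innocuous photos above it.
    """
    return classify_explicit(image_bytes) >= threshold
```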
 
The issue really isn't about what is on Apple's own servers, which they have every right to scan. Even setting aside the hashes a rogue state may decide to push into the database, there is still the issue of the iMessage feature on children's accounts. How is that going to work? Someone takes a picture of the beach with two sandcastles and all of a sudden it gets flagged as sending sexual content? The scanning will be battery intensive. And if it is happening in the cloud, that is an even bigger privacy problem than matching known hashes.

People are in uproar about the wrong thing. The nanny feature is the scary part, because it seems to use AI to do it, and as far as I know it runs on the device. While I'm an adult, does this mean every single picture will now be scanned for training? We have already seen laughable results along the lines of my beach example: the English police reportedly had plenty of false positives from exactly that. Sand. Deserts.
From what I’ve heard, it checks hashes of photos on your iPhone against known CSAM images from a government database. It doesn’t have any kind of AI to check for nudity or children.

Now Apple has the capability to scan images on your phone against this database, but what else they’re scanning for, only Apple knows. There is no system or notification informing you of what was scanned on your phone, or when, so technically Apple could be scanning all your images and messages and comparing them to who knows what. They have the capability, at least. You won’t know if they do anything. That’s the creepy factor of this.

Edited this due to the terrible voice-to-text on the iPhone.
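To make the hash-comparison part concrete, the core of what's being described is roughly the sketch below. It uses plain SHA-256 and a placeholder hash set as stand-ins; Apple's actual system uses a perceptual NeuralHash plus cryptographic blinding and a match threshold, none of which is reproduced here.

```python
# Rough sketch of matching photos against a list of known hashes.
# SHA-256 is a stand-in: it only matches byte-identical files, whereas a
# perceptual hash like NeuralHash also matches resized/re-encoded copies.
# The hash set below is a placeholder, not real data.

import hashlib
from pathlib import Path

KNOWN_HASHES: set[str] = {
    # entries would be supplied to the device in blinded form
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def photo_hash(path: Path) -> str:
    """Hex digest of a photo file (stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def count_database_matches(photo_dir: Path) -> int:
    """How many photos in a folder match the known-hash list."""
    return sum(photo_hash(p) in KNOWN_HASHES for p in photo_dir.glob("*.jpg"))
```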
 
Microsoft is NOT doing this on the device. It is only doing it in the cloud. It is fundamentally different. Heck, we don't even really know that Google is doing anything of this sort directly on the device...but even if they are, then we are at a standstill, and I will choose the alternative.

It's not fundamentally different. They are both computers. If something is going to be scanned, where it's done doesn't matter since the result will be the same providing they use the same algorithms.

Look at anti-virus software. They have been on-device since the mid-eighties and continue to be. Windows and macOS even have this software built-in to the system.

They contain hash databases of virus signatures which are downloaded from a third party, they can report back which files were matched, and they can even delete or move files.
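That anti-virus comparison maps onto a small amount of code: a signature list downloaded from the vendor, an on-device scan, a report of which files matched, and optional quarantine. Everything below (the one-digest-per-line file format, the paths, the function names) is an illustrative assumption, not any real scanner's implementation.

```python
# Sketch of the on-device anti-virus workflow described above.

import hashlib
import shutil
from pathlib import Path
from typing import Optional

def load_signatures(signature_file: Path) -> set[str]:
    """Hash database downloaded from a third party, one hex digest per line."""
    return {line.strip() for line in signature_file.read_text().splitlines() if line.strip()}

def scan(root: Path, signatures: set[str], quarantine_dir: Optional[Path] = None) -> list[Path]:
    """Return matched files; optionally move them out of the way."""
    matched = []
    for f in root.rglob("*"):
        if not f.is_file():
            continue
        if hashlib.sha256(f.read_bytes()).hexdigest() in signatures:
            matched.append(f)          # this list is what gets "reported back"
            if quarantine_dir is not None:
                quarantine_dir.mkdir(parents=True, exist_ok=True)
                shutil.move(str(f), quarantine_dir / f.name)
    return matched
```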
 
The only legitimate concern I have is for the parents that have photos of their own kiddos on their phone. Sure, they won't be "known CSAM," at least in most cases, but what would stop something like this from being a false positive?
 
There is no scandal.

Backlash? Yes. Confusion? Without question, and this thread amongst all the other identical ones highlights that very fact. But there is no scandal.
It’s definitely somewhat of a scandal. Confusion aside, Craig contradicted himself once or twice, and they’ve alluded to things that either can’t be true or aren’t quite possible. I can’t pinpoint specifics because I spent a few hours reading through all of this and honestly can’t remember which exact thing I’m referencing.

I do agree that it isn’t quite full scandal level, but I’m a bit frustrated with the people who are annoyed by, or who trivialize, those speaking out strongly against it. There are serious problems with this, and it isn’t your typical “this or that” social media topic. There’s a lot of nuance and technical detail, and this is a huge step. We have to handle this with care, and nobody gets anything out of going “you guys are dumb for making a big deal out of this.”
 
I'm taking a wait and see approach. I know I can be totally happy on Android (I was for years until the iPhone 12), but that doesn't really give any better privacy. So...we'll see what happens, what options the market provides, etc.

Honestly, I think this will end up in court sooner rather than later, because Apple is running a utility on your phone that many people don't want, but aren't given a choice. If they could freely install other OS's or versions that gave them more freedom, then there wouldn't be a case, but Apple is now going to push this on new and existing iPhone users, utilizing the hardware bought and paid for by the end user, without any regard for their desire to control what happens on their own phone. Time will tell how well that goes, but it could be a good catalyst for updating our existing laws to better apply in our digital world (most case law on this stuff is archaic and not written with a knowledge of the world we currently live in - including laws about search and seizure based on the 4th Amendment, among others).
 
The only legitimate concern I have is for the parents that have photos of their own kiddos on their phone. Sure, they won't be "known CSAM," at least in most cases, but what would stop something like this from being a false positive?
Apple could easily avoid such worries by blindly trusting photos coming from the iPhone's own camera, letting them into iCloud without the CSAM check. Apple allegedly only wants to find copies of the known CSAM pictures from NCMEC, and photos that were just created with the camera cannot be such pictures. Not only would this alleviate many privacy concerns (pictures you take yourself would never be checked), it would also reduce the load on the iPhone and minimize false positives. I wonder why Apple does not do that. Almost as if they are interested in more than just the database pictures after all.
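A sketch of that proposal is below, under the big assumption that a photo's provenance ("taken on this device's camera") can be read reliably; that flag is hypothetical, real metadata can be edited, and the 30-match threshold is the reported figure used here purely for illustration.

```python
# Sketch of the "trust camera originals" proposal above. The provenance flag
# is hypothetical (assumed trustworthy), and the threshold of 30 matches is
# the publicly reported figure, used only for illustration.

from dataclasses import dataclass

@dataclass
class Photo:
    identifier: str
    taken_on_this_device: bool    # hypothetical, assumed-trustworthy flag
    matches_known_hash: bool      # outcome of the hash check, if it runs

def needs_check(photo: Photo) -> bool:
    """Under the proposal, photos from the device's own camera are skipped."""
    return not photo.taken_on_this_device

def account_flagged(photos: list[Photo], threshold: int = 30) -> bool:
    """Flag only when enough *checked* photos match the database."""
    matches = sum(1 for p in photos if needs_check(p) and p.matches_known_hash)
    return matches >= threshold
```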
 
Pray tell, which super private phones are you going to get? The ones with an OS made by a company which literally exists to capture and sell your data?


Yeah we do 😂


just delete your child porn and you’ll be fine.

This is hilarious. I love how this is the thing that’s getting everyone worked up, as if you haven’t been giving away your browsing habits, online purchase decisions, search history, location data, dietary choices, and all other kinds of information to companies, whether you knew it or not, for ten-plus years. Now Apple wants to make sure you’re not storing kiddie porn on iCloud by comparing hashes of photos on your phone with hashes of known child porn under very specific circumstances, and it’s “I AM DONE WITH APPLE.”

If only you all would approach actual societal problems with this same level of determination and conviction.
You clearly haven’t looked fully into the subject. It isn’t the CSAM part that’s a problem. But keep trivializing for internet points.
 
It's not fundamentally different. They are both computers. If something is going to be scanned, where it's done doesn't matter since the result will be the same providing they use the same algorithms.

Look at anti-virus software. They have been on-device since the mid-eighties and continue to be. Windows and macOS even have this software built-in to the system.

They contain hash databases of virus signatures which are downloaded from a third party, they can report back which files were matched, and they can even delete or move files.
It is absolutely fundamentally different. First, the antivirus programs are out in the open and voluntary: I can choose to uninstall and/or disable them. Second, Microsoft is not scanning my device; they are scanning the cloud. That is wildly different. What Apple has done is provide a proof of concept for any government that wants to surveil devices in real time. And because it is a private company whose Terms of Service we have all agreed to, it isn't even currently against the 4th Amendment.
 
So you're saying a person is going through my personal files and looking at them? Hmmm, I can't find that anywhere in the articles and documentation.
I'll try one more time, but if we still can't reach common understanding (I'm not saying agreement, but understanding), we're going to have to call it quits.

Problem the first: It is not Apple's responsibility to police my use of my private property. And make no mistake: If I've purchased it from them, I regard it as my property. Regardless of any perceived moral imperative: It is not Apple's responsibility to police my use of my iThings any more than it's Stellantis' responsibility to police my use of my Jeep, Benchmade's responsibility to police my use of the pocket knife I bought from them, or Panasonic's responsibility to police my use of the camera I purchased from them.

Problem the second: Nobody has the right to impose upon me what they believe to be their social imperatives. Not ever. I am responsible for my behavior. I expect to be held accountable for my behavior. I am not responsible for your behavior, nor you for mine. If Apple believes it has some imperative to combat child sexual abuse they're more than welcome to have at it, but I will not tolerate them imposing that imperative on me.

Problem the third (this is the biggie): Apple is adding to iThings technology that enables, some might say invites, abuse. History has shown us that, if technology can be abused, it will be abused. This is an incontrovertible fact.

As to your instant question: No, "a person" won't necessarily be "going through [your] personal files," but an "AI" will. And if that AI "thinks" it sees a problem, a person might be examining your files. (Only photos/videos for now.) All-in-all: A distinction without a difference in my view.
 
I'll try one more time, but if we still can't reach common understanding (I'm not saying agreement, but understanding), we're going to have to call it quits.

Problem the first: It is not Apple's responsibility to police my use of my private property. And make no mistake: If I've purchased it from them, I regard it as my property. Regardless of any perceived moral imperative: It is not Apple's responsibility to police my use of my iThings any more than it's Stellantis' responsibility to police my use of my Jeep, Benchmade's responsibility to police my use of the pocket knife I bought from them, or Panasonic's responsibility to police my use of the camera I purchased from them.

Problem the second: Nobody has the right to impose upon me what they believe to be their social imperatives. Not ever. I am responsible for my behavior. I expect to be held accountable for my behavior. I am not responsible for your behavior, nor you for mine. If Apple believes it has some imperative to combat child sexual abuse they're more than welcome to have at it, but I will not tolerate them imposing that imperative on me.

Problem the third (this is the biggie): Apple is adding to iThings technology that enables, some might say invites, abuse. History has shown us that, if technology can be abused, it will be abused. This is an incontrovertible fact.

As to your instant question: No, "a person" won't necessarily be "going through [your] personal files," but an "AI" will. And if that AI "thinks" it sees a problem, a person might be examining your files. (Only photos/videos for now.) All-in-all: A distinction without a difference in my view.
100 million photos, 3 false positives. That's impressive. Furthermore, what are the odds those 3 photos came from the same account? Then, what are the odds you'll have 30 false positives all in the same account? 1 trillion to 1.

So no, nothing about my photo library will ever touch the retinas of any Apple employee (not that I would really care if they did).

You can speculate about abuse all day, but that doesn't make it a true statement that it definitely will be abused.
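Here is a back-of-the-envelope version of that false-positive arithmetic, assuming the false positives are independent and using the 3-in-100-million per-image rate quoted above. The 10,000-photo library size is an illustrative assumption of mine, not Apple's figure, and the full binomial tail is approximated by its dominant term.

```python
# Back-of-the-envelope check of the argument above. Assumes independent false
# positives at 3 per 100 million photos (the rate cited in the post) and a
# hypothetical library of 10,000 photos in one account. P(>= 30 matches) is
# approximated by its dominant k = 30 term, computed in log space.

from math import lgamma, log

per_image_rate = 3 / 100_000_000   # false-positive rate per photo
library_size = 10_000              # illustrative account size
threshold = 30                     # reported human-review threshold

def log10_binom_pmf(k: int, n: int, p: float) -> float:
    """log10 of the binomial probability of exactly k hits among n photos."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return (log_comb + k * log(p) + (n - k) * log(1 - p)) / log(10)

print(log10_binom_pmf(threshold, library_size, per_image_rate))
# ~ -138, i.e. a probability on the order of 10^-138: far beyond "1 trillion
# to 1" for 30 independent false positives landing in the same account.
```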
 
Apple could easily avoid such worries by blindly trusting photos coming from the iPhone's own camera, letting them into iCloud without the CSAM check. Apple allegedly only wants to find copies of the known CSAM pictures from NCMEC, and photos that were just created with the camera cannot be such pictures. Not only would this alleviate many privacy concerns (pictures you take yourself would never be checked), it would also reduce the load on the iPhone and minimize false positives. I wonder why Apple does not do that. Almost as if they are interested in more than just the database pictures after all.
Even IF you took all the nudes you wanted, they would never match the database, let alone 30 of them matching.
 
I'll try one more time, but if we still can't reach common understanding (I'm not saying agreement, but understanding), we're going to have to call it quits.

Problem the first: It is not Apple's responsibility to police my use of my private property. And make no mistake: If I've purchased it from them, I regard it as my property. Regardless of any perceived moral imperative: It is not Apple's responsibility to police my use of my iThings any more than it's Stellantis' responsibility to police my use of my Jeep, Benchmade's responsibility to police my use of the pocket knife I bought from them, or Panasonic's responsibility to police my use of the camera I purchased from them.

Problem the second: Nobody has the right to impose upon me what they believe to be their social imperatives. Not ever. I am responsible for my behavior. I expect to be held accountable for my behavior. I am not responsible for your behavior, nor you for mine. If Apple believes it has some imperative to combat child sexual abuse they're more than welcome to have at it, but I will not tolerate them imposing that imperative on me.

Problem the third (this is the biggie): Apple is adding to iThings technology that enables, some might say invites, abuse. History has shown us that, if technology can be abused, it will be abused. This is an incontrovertible fact.

As to your instant question: No, "a person" won't necessarily be "going through [your] personal files," but an "AI" will. And if that AI "thinks" it sees a problem, a person might be examining your files. (Only photos/videos for now.) All-in-all: A distinction without a difference in my view.
I unliked this just so I could like it again. Then I unliked that and chose the heart instead. :) I agree, needless to say.
 
100 million photos, 3 false positives. That's impressive. Furthermore, what are the odds those 3 photos came from the same account? Then, what are the odds you'll have 30 false positives all in the same account? 1 trillion to 1.

So no, nothing about my photo library will ever touch the retinas of any Apple employee (not that I would really care if they did).

You can speculate about abuse all day, but that doesn't make it a true statement that it definitely will be abused.
And, of course, we are just supposed to believe Apple on this, and even if we did, it was done without a material understanding of how it changed the social contract between Apple and me (I purchased a computer--not a computer with free spyware level scanning attached).
 
And, of course, we are just supposed to believe Apple on this, and even if we did, it was done without a material understanding of how it changed the social contract between Apple and me (I purchased a computer--not a computer with free spyware level scanning attached).
You’d rather have the spyware in the cloud then?
 
Problem the first: It is not Apple's responsibility to police my use of my private property. And make no mistake: If I've purchased it from them, I regard it as my property. Regardless of any perceived moral imperative: It is not Apple's responsibility to police my use of my iThings any more than it's Stellantis' responsibility to police my use of my Jeep, Benchmade's responsibility to police my use of the pocket knife I bought from them, or Panasonic's responsibility to police my use of the camera I purchased from them.
Tbf, you lost the argument there. Yes, you are paying for the hardware... but the OS is their property. Once you hit agree on the startup page.. they have the right to do whatever they want (technically). We all choose to ignore that fact.
 
It's not fundamentally different. They are both computers. If something is going to be scanned, where it's done doesn't matter since the result will be the same providing they use the same algorithms.
What? Just so I’m not confused here: you’re saying a corporation scanning servers owned by them for material like CSAM is the same thing as remotely scanning your device for the same material?

I just quickly read your reply and didn’t look too closely at the post you replied to, so maybe I misunderstood. My issue isn’t with them specifically scanning for CSAM but with them having the capability to stealthily scan my device for anything. Just because they’re scanning for CSAM now doesn’t mean they don’t have the capability already built in to scan for other things. That’s the scary part.

As I said before, I’m not going to stop using Apple products, because that would be kind of silly, but it drives home that nothing on your iPhone is private. I hope this enables them to catch some bad people, but I just worry what else it’ll be used for.
 
Tbf, you lost the argument there. Yes, you are paying for the hardware... but the OS is their property. Once you hit agree on the startup page.. they have the right to do whatever they want (technically). We all choose to ignore that fact.
I’m sure they have the legal right to do what they’re doing; Apple probably pays their lawyers millions of dollars a year so they’re not doing anything illegal. Whether it’s ethically right for a company to enable a backdoor into their devices after constantly boasting about their privacy features is another story. This just reinforces my belief that Apple is like any other corporation and their number one priority is making a profit. I’m sure multiple governments have been pressuring Apple for this, so they felt it was possibly going to cause them legal problems.
 
Tbf, you lost the argument there. Yes, you are paying for the hardware... but the OS is their property. Once you hit agree on the startup page.. they have the right to do whatever they want (technically). We all choose to ignore that fact.
Right, but they have to accept the consequences of that change. We aren't arguing they don't have the legal and technical ability. We are arguing that it changed the way we perceive the relationship between an Apple device and a customer.
 
Absolutely! I have Windows machines. I don't use OneDrive. They can't scan my files in the cloud since the files aren't there. Same with Apple.
Exactly. I don’t know how anyone can compare a cloud service scanning documents you put on its servers with your documents being remotely scanned on your own device. One is you putting your documents on someone else’s hardware; the other is your documents on your own hardware.
 