
Will you leave the Apple ecosystem because of CSAM?



Did anyone see this? It looks like there are two checks done BEFORE human review. The first check is done on device; the second is done with a different perceptual hash to make sure it’s definitely CSAM, and only then does it go to human review (after 30 images, of course).

So it would be very tricky to trick the system with random images, even if you actually knew one of the CSAM hashes and generated a non-CSAM image with the same hash.
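A minimal sketch of that point (hypothetical hash values and my own function name, not Apple's actual code): even a perfect collision against the first hash gets thrown out when the second, independent hash disagrees.

```python
# Toy illustration: an attacker who forges a collision against ONE
# perceptual hash still has to collide a second, independent hash.
# Values and names here are hypothetical stand-ins, not Apple's code.

def is_confirmed_match(img_hash_a: int, img_hash_b: int,
                       db_hash_a: int, db_hash_b: int) -> bool:
    """Flag only if BOTH independent hashes agree with the database."""
    return img_hash_a == db_hash_a and img_hash_b == db_hash_b

# An attacker engineers an image that collides under hash A...
forged_a, forged_b = 0xDEADBEEF, 0x12345678   # hypothetical hash values
db_a, db_b = 0xDEADBEEF, 0x9ABCDEF0           # hypothetical database entry

# ...but the independent hash B disagrees, so the forgery is discarded.
print(is_confirmed_match(forged_a, forged_b, db_a, db_b))  # False
```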

 
Yeah, I need to see confirmation before I buy into this 100%
Yeah, I can’t find anything about Asuhariet Ygvar before this post, so it isn’t like Bruce Schneier saying this.

This 9to5Mac article goes into more detail and, like you, is waiting for the security experts, who should start weighing in on this code soon.

 

Did anyone see this? It looks like there are two checks done BEFORE human review. The first check is done on device; the second is done with a different perceptual hash to make sure it’s definitely CSAM, and only then does it go to human review (after 30 images, of course).

So it would be very tricky to trick the system with random images, even if you actually knew one of the CSAM hashes and generated a non-CSAM image with the same hash.

https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf

The problem with your attempts to “prove” that all is unicorns and rainbows in the Apple world is that you pick a snippet and hold it up as the SBF (Silver Bullet of Proof).

When you step back and look at the whole pie, Apple has done some great work; however, there are some points that are ripe for abuse, especially by government. Based on governments’ past practice, that makes abuse not an ”if” but a “when.”

Let’s say this goes live with iOS 15 or 15.x. Will we be in any danger or under government surveillance from it? Most likely not. Given time, though, we will be.

Client-side checking has existed for quite a while in many forms. Ask yourself why no other device company has instituted this model.

At the end of the day, you are either concerned or not.
 
@dk001: Indeed.

As noted elsewhere in this or related threads: Apple can make all the pious proclamations it wants, but, ultimately, it is in business to make money. More specifically: to make money for its investors. Once Apple embeds this capability in its devices, it's only a matter of time before the government in one of its major markets says "You will include <this> in your on-device scanning capability and you will report hits to us, or you will be banned from the country." Apple may howl and gesture, but it will ultimately fold like a cheap suit. The loss of a major market will not be a hill upon which Apple chooses to die. For if it did, as soon as its stock took the hit (and it would take a hit), the investors would be screaming bloody murder. Tim Cook and the rest would be out on their posteriors faster than you can say "Hey, Siri."

Once Country A has succeeded in strong-arming Apple into capitulation, the remainder will fall like dominoes.

You can always have a Huawei device related to the CCP.
You mean as opposed to an Apple device that is made in a country run by the CCP?
 
CSAM NeuralHash found all the way back in iOS 14.3.

Some fascinating comments in that reddit thread. (N.B.: While I have some past experience in machine vision and machine learning [I once held a methods patent in a machine learning method and once worked in machine vision software design], this is far from my area of expertise, so I can't testify as to their veracity.)

TBH even if we can only generate blurry images it's more than enough to spam Apple with endless false positives, making the whole thing useless.
It's fun already: today, someone matched the hash to a Beagle.
So, if the reddit thread is to be believed, here's what we now know:
  1. Apple claimed they were going to add this as of iOS 15 when, in reality, they've started adding it already
  2. Apple has claimed the hashing mechanism fails so rarely as to be a near statistical impossibility, yet there may be evidence to suggest it's easily spoofed
I'm withholding judgement on all this until I see something a bit more credible than posts from Random Reddit Posters, but, at first blush, this does not appear to bode well for Apple's credibility.
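For anyone curious how a perceptual hash gets spoofed at all, here's a toy average-hash (aHash) sketch. It is vastly simpler than NeuralHash, and the filenames are hypothetical, but it shows the core idea: the image is crushed down to 64 bits, so visually different images can land on (or within a few bits of) the same value.

```python
# Toy average-hash (aHash): a far simpler cousin of NeuralHash, shown
# only to illustrate why perceptual hashes tolerate near-collisions.
# Requires Pillow: pip install Pillow

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Grayscale, shrink to size x size, threshold each pixel at the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical filenames. With only 64 bits and tolerant matching
# (e.g., "distance <= 10 counts as a match"), false positives are
# plausible in a way they are not for cryptographic hashes.
h1 = average_hash("photo.jpg")
h2 = average_hash("unrelated.jpg")
print(hamming(h1, h2))
```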
 
Alt+Tab, oh how I've missed you. Back on Windows 7 at the moment, readying for a move to a telemetry-disabled version of Windows 10. iCloud Drive has been emptied, iCloud Photos is next (syncing has been disabled since the announcement), and I'll soon be free of cloud services.

Now, about that pesky phone...
 
Twenty years ago, governments told us there were so many terrorists that they had to sniff and control the worldwide web...
Today Apple tells us it must scan our devices because there are so many child abusers among iPhone users...
Sounds familiar, and it's totally sick... (my opinion)
Tim Cook is getting old.
 
Can we say “Steve Jobs would never do this!”? I feel like for once, this saying may actually be applicable. I can’t imagine Steve Jobs using his patented RDF to sell his customers on this feature.
 
iCloud Drive has been emptied, iCloud Photos is next (syncing has been disabled since the announcement), and I'll soon be free of cloud services.
I'm down to four images left on iCloud. Haven't decided whether to save them or just delete them. Then iCloud photo sharing will be disabled.

Now, about that pesky phone...
The closest thing I've found to my previously-beloved iPhone SE (2020) is a Pixel 5a. Not water-resistant (in this day and age?) and it doesn't have wireless charging, though :(

Haven't looked into replacement tablets yet. I've already resuscitated my wife's old Lenovo tablet. I may just use that until I identify something suitable. (I just realized something else I'll get back, moving back to Android: An actual working, fully-featured SSH client.)

I'll shortly be on my way to purchase a Garmin watch to replace my Apple Watch.

(Btw: I can't help but notice the "I trust Apple" contingent has been curiously silent so far today.)
 
Tim Cook has never been anything but a toll collector, riding the momentum and catering to the fake social crusade du jour. He's been telling people who don't toe the woke orthodox line that they shouldn't buy Apple products since the very beginning of his undeserved tenure.
 
Tablets I have: Android and Windows, as well as the iPad. The only watch I could stand wearing is the Apple Watch, but it's rather freeing to be without it.

The phone is the tricky bit for me. I sold off the only good Android phone I had, so my next option is a Motorola Photon Q slider phone running a Google-free LineageOS. It's only good for communications, as it's too underpowered for music or web browsing. That Punkt phone referenced earlier might be my go-to.
 
CSAM NeuralHash found all the way back in iOS 14.3.

I wonder if they'll find it also present in earlier versions of macOS, instead of just in Monterey. At this point, I assume they will.

My original plan was to replace my current iPhone SE (1st gen) after the new phones come out this fall, and then replace my MacBook Pro (2015) when the next version of the Mac Mini comes out, perhaps next year.

After the CSAM announcement, my initial reaction was to buy an iPhone SE (2nd) and Mac Mini now, before the iOS and macOS versions which contain the scanning are released, and just never upgrade the software. That would give me time to test if I can move to a Linux machine, and then in the future decide on what non-Apple phone to switch to (fortunately I never used iCloud for anything other than Find My Phone).

Now that the code has been found in current OS versions, I might still get that new SE (my current SE's Touch ID button is getting wonky), but I'll forget that new Mac Mini. Fortunately my MacBook Pro is still on Mojave, since I liked iTunes and didn't upgrade.

My iPod Classic still works. I might just get a flip phone and be done with it.
 
And this is my line. I don't care what the reason is, I don't want to be treated (by default) like the most disgusting people on earth, and I've done nothing to warrant it.
So you're okay with being treated like the scum of the earth when uploading your files to iCloud then? As they've outlined, no scanning is taking place until a file is actively being uploaded to THEIR server. At that point, they have every right to check it for CSAM. If you don't want that to happen, then you cannot use their service. It has also been in the Terms and Conditions for a while now and I'm sure you've agreed to it.
 
Apple has claimed the hashing mechanism fails so rarely as to be a near statistical impossibility, yet there may be evidence to suggest it's easily spoofed
There are two flaws in this line of thinking.

1. You would need a real CSAM hash in order to create a non-CSAM photo with the same hash, and obtaining the source material to derive that hash would itself be highly illegal.
2. There are two checks being done. The first compares the NeuralHash computed on your device against the on-device CSAM hash database; if there's a match, a "safety voucher" is created and sent to Apple holding the key to unlock the photo. The photo is then passed through another check on Apple's own servers, via a perceptual hashing process completely different from the first pass, which compares the visual makeup of the unencrypted photo to double-check the match and weed out any non-CSAM false positives. All of this happens before human eyes look at the photos. Only once 30 CSAM images have been matched does it go up for human review. This is all in their documentation.
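Here is a hedged sketch of that flow as described above. The function names and structure are my own assumptions, both hash checks are stubbed, and it only models the order of operations and the 30-match threshold, not Apple's actual code.

```python
# Sketch of the two-check flow described in the post above.
# Names are mine, not Apple's; the hash checks are stubs.

from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 30  # the threshold stated in Apple's documentation

@dataclass
class Account:
    confirmed_matches: int = 0
    flagged_for_review: bool = False

def on_device_neuralhash_match(photo: bytes) -> bool:
    """Check 1: NeuralHash vs. the on-device CSAM hash database (stub)."""
    raise NotImplementedError

def server_side_perceptual_match(photo: bytes) -> bool:
    """Check 2: an independent server-side perceptual hash (stub)."""
    raise NotImplementedError

def handle_upload(account: Account, photo: bytes) -> None:
    if not on_device_neuralhash_match(photo):
        return  # no match: no safety voucher is created at all
    # A voucher exists; the server re-checks with a DIFFERENT hash to
    # weed out false positives from the first pass.
    if not server_side_perceptual_match(photo):
        return  # first-pass match rejected as a false positive
    account.confirmed_matches += 1
    # Human review only happens once the threshold is crossed.
    if account.confirmed_matches >= HUMAN_REVIEW_THRESHOLD:
        account.flagged_for_review = True
```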
 
So here is an interesting read (sorry... cross-posted... but this is IMPORTANT)

Many people have stated that Apple is required to do this, as a service provider.

Someone also linked to the actual law: 18 U.S.C. § 2258A.

Here's an interesting part of it, § 2258A(f):

(f) Protection of Privacy.-Nothing in this section shall be construed to require a provider to-

(1) monitor any user, subscriber, or customer of that provider;

(2) monitor the content of any communication of any person described in paragraph (1); or

(3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).

Now... read that again carefully. NOTHING in this section shall be construed to *require* a provider to...
... MONITOR ANY USER, SUBSCRIBER, OR CUSTOMER
... MONITOR THE CONTENT OF ANY COMMUNICATION...
... AFFIRMATIVELY SEARCH, SCREEN OR SCAN FOR FACTS OR CIRCUMSTANCES.


That being said, this is a CHOICE by Apple... and NOT A REQUIREMENT. In fact, the law specifically says that they are NOT REQUIRED to scan, monitor or search for CSAM. Just to report it if it is discovered.
 
I wish Apple would just end-to-end encrypt everything related to iCloud. Then it wouldn't matter what's on their servers, because there would be no way for them to know its contents.
 
End the CSAM scanning and do this, and I'll return the Pixel that's on its way. But they won't.
 
Scrapping on-device NeuralHash AND adding E2EE iCloud would both be requirements. Scanning and reporting on-device before encryption nullifies many of the advantages of E2E encryption.
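To make that concrete, here's a minimal illustrative sketch (Fernet stands in for real E2EE, the scan is a stub, and none of this is Apple's actual protocol) of what a server receives with and without a pre-encryption scan.

```python
# Illustrative only: Fernet symmetric encryption as a stand-in for E2EE,
# with the key living only on the client.
# Requires: pip install cryptography

from cryptography.fernet import Fernet

client_key = Fernet.generate_key()   # never leaves the client's device
fernet = Fernet(client_key)

photo = b"raw photo bytes"           # hypothetical plaintext

def on_device_scan(data: bytes) -> bool:
    """Stand-in for any pre-encryption content matching (stubbed)."""
    return False

# With plain E2EE, ciphertext is ALL the server ever receives:
ciphertext = fernet.encrypt(photo)

# Bolt on scanning BEFORE encryption, and the server also receives a
# signal derived from the plaintext, which is the objection above.
upload = {"blob": ciphertext, "match_report": on_device_scan(photo)}
print(upload["match_report"])        # plaintext-derived info, despite E2EE
```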
 
So you're okay with being treated like the scum of the earth when uploading your files to iCloud then? As they've outlined, no scanning is taking place until a file is actively being uploaded to THEIR server. At that point, they have every right to check it for CSAM. If you don't want that to happen, then you cannot use their service. It has also been in the Terms and Conditions for a while now and I'm sure you've agreed to it.
As repeated multiple times, many of us EXPECT non-E2E encrypted cloud content to be scanned. Despite having my entire photo library in iCloud at the moment, I have 7 TB of other data that won't ever touch a cloud server. My photo library consists primarily of landscapes, cars, and buildings. That much I'm willing to share in return for respect for my data.

We draw the line at having any kind of scan-and-report functionality on-device for illegal content. Diagnostic features are irrelevant to the conversation, as those are optional (as was the Siri recordings debacle: if people actually read the screens during out-of-box setup and after updates, they said outright that audio recordings are listened to by humans if the feature is enabled, and it can easily be left off). CSAM scanning/hashing/reporting is not optional without a loss of significant functionality, whether that be multi-device sync/backup or the use of the device altogether, and the fear is that it may not be optional at all if you want to use an Apple device even without iCloud (again, see how contact tracing went from "just an API" to iOS 13.7 adding full standalone functionality). I would rather find alternatives while I have time, before I'm forced into this, especially if those alternatives add a great deal of functionality.
 
By the way, AI photo scanning and categorizing has been done locally on your device for years now, and that was fine. How do you know Apple hasn't been using any of the data it gathers about your photos? You don't. The only thing we can do is trust them.

The fact that they're being very open about this process is telling. It tells me they've thought it through carefully and don't intend to use it maliciously.

But then again, I carry around a tiny beacon everywhere I go that knows my location at all times, so if they wanted to abuse my privacy, either they've been doing it all along and I haven't noticed, or they could start at any time. Either way, it's nothing they haven't done or couldn't have done in the past.
 
LMAOOOO, every other day they have to release some new information to counter all the new information popping up. Now all of a sudden they say there's a secondary hashing system that works in the cloud but uses a completely different technology than the NeuralHash they touted in the initial press release. There was no mention of this at all in the first press release or any subsequent "white paper" or "peer review paper."

And when someone demonstrates how easy it is to mount a preimage attack, Apple has to come out and say "don't worry guys, we totally expected this! 1 in 1 trillion! 1 in 1 trillion!"

This entire thing is a joke. There's zero transparency about anything. Apple must reverse their decision to implement this garbage.

Yeah, I need to see confirmation before I buy into this 100%

It's time to buy into it 100%, because Apple confirmed to The Verge and other journalists this morning that the findings are true. Unsurprisingly, they're doing what every corporation on the planet does, which is claim they're "not worried" about anything. They then proceeded to provide a new piece of information they hadn't bothered to mention AT ALL until now: there is a secondary cloud-side scan taking place, using a completely different technology, about which they won't share any details with the public (remember when people ITT said "Apple are being transparent about it, so that makes it better than [insert company here]"?).

Did they purposefully omit the existence of the secondary cloud scan to the public? Did they scramble to say they'll do a secondary scan after random devs exposed how weak the NeuralHash is? Why did none of the "peer review papers" they published mention this secondary scan? Can any of those peer review papers be trusted at all given this new information? Did Apple even give those peer reviewers all of the information necessary about the system?

"In a call with reporters regarding the new findings, Apple said its CSAM-scanning system had been built with collisions in mind, given the known limitations of perceptual hashing algorithms. In particular, the company emphasized a secondary server-side hashing algorithm, separate from NeuralHash, the specifics of which are not public."
 
LMAOOOO, every other day they have to release some new information to counter all the new information popping up. Now all of a sudden they say there's a secondary hashing system that works in the cloud but uses a completely different technology than the NeuralHash they touted in the initial press release. There was no mention of this at all in the first press release or any subsequent "white paper" or "peer review paper."
Are you talking about this article they posted last week, with a last-modified date of 8/13? https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf
 