
I7guy

macrumors Nehalem
Nov 30, 2013
34,353
24,097
Gotta be in it to win it
As one of the opponents of this whole idea: it wasn't just about getting a backdoor, it was about using your device and its capabilities to spy on you. The backdoor was just the start of what we worried would come next. Change the hash database and it could look for anything of yours.
Sure, in a world of endless possibilities I agree.
 

Mercury7

macrumors 6502a
Oct 31, 2007
738
556
Pretty sure most of us did not have issues with Apple scanning iCloud…. The issue was scanning our devices… no matter how secure or well-intentioned, it was just a bridge too far…. I’m still perfectly fine with them scanning anything I upload to the cloud. I think people still confuse what was at issue
 
  • Love
Reactions: bobcomer

Analog Kid

macrumors G3
Mar 4, 2003
9,061
11,859
Apple was most definitely going for semantics:
"The embedding network represents images as real-valued vectors and ensures that perceptually and
semantically similar images have close descriptors in the sense of angular distance or cosine similarity.
Perceptually and semantically different images have descriptors farther apart, which results in larger
angular distances."


The difference is not that clear-cut. The system extracts features from the image, and based on these features a neural network produces an image descriptor. If this descriptor is sufficiently similar to the image descriptor of a known CSAM image, the image will be flagged. Now yes, I understand that this type of system relies on existing images and is not capable of finding entirely new types of CSAM. But NCMEC was to provide its five million image hashes; that is a lot of images for a single subject, and if you then go for similarity matching rather than exact matching, you have, for all intents and purposes, a CSAM classifier.
Training matters.

Semantic classifiers are given many examples of cats, trained to semantically identify cats in general and to distinguish them from other things, and then are asked to infer from a new image whether it is a cat. Typically such a classifier is trained on multiple classes (cat, dog, house, car) and at inference returns a confidence against each class. This is probably how "safe search" image filters are implemented: trained to find stuff that looks like a general definition of porn.

That is not how the Apple NeuralHash is trained. It is trained to detect a specific image under perceptually invariant transforms, not a class of related but distinct images. It is not detecting CSAM, it is detecting specific instances of CSAM. It is trained to distinguish those instances from examples that are not them, to prevent it from, to take an oversimplified example, declaring that any image with a lot of flesh tone is CSAM.
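To make the distinction concrete, here's a rough sketch of instance matching versus classification. This is nothing like Apple's actual code: the descriptor handling, the 0.95 similarity threshold, and the function names are all made up for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # The "angular distance or cosine similarity" measure from Apple's description.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_known_image(descriptor: np.ndarray,
                        known_descriptors: list[np.ndarray],
                        threshold: float = 0.95) -> bool:
    # Instance matching: flag only if the descriptor sits very close to one
    # *specific* known image. A brand-new image that merely shows similar
    # subject matter has no particular reason to land near any known descriptor.
    return any(cosine_similarity(descriptor, k) >= threshold
               for k in known_descriptors)

# A semantic classifier answers a different question entirely:
# "what class does this image belong to?", regardless of whether the exact
# image has ever been seen before, e.g. probs = {"cat": 0.91, "dog": 0.06, ...}
```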


They are not perfectly reversible, that is true. But you can recreate a recognizable approximation of the original image. See: https://towardsdatascience.com/black-box-attacks-on-perceptual-image-hashes-with-gans-cc1be11f277

This has been shown to work with PhotoDNA, a perceptual image hashing system widely used for CSAM detection. Maybe Apple's system would have been immune to this, but they spent quite some effort to prevent people from even trying, so I have my doubts.

No, you can't.

Here's an input image from that link:
[attached image: 1670537898964.png]


Here's the image "reversed" from the Domino's hash:
[attached image: 1670537951837.png]


That doesn't look like anything I'd be worried about.



A few things: first, your link doesn't use PhotoDNA, it uses the Python imagehash library, which was not designed to resist attacks like the ones you're discussing.

Second, it's using a GAN to create another image that matches that hash, but not necessarily the image that created the hash. Specifically, it's saying "create an image that is a face and matches this hash". Since the first examples are all faces filling the frame on the input, and the GAN is creating a face filling the frame, it's easy to freak out and say "it's reversible!". It's also easy to say "they even got the hair color right!", because the hash being used is the ahash, which is based on the average color of a region.

The example I shared above is what happens when the image is arbitrary, as it would be in your photo library, and you ask the GAN to "create an image that is a face and matches the hash". What you get is nothing like the input. You could do the same with a GAN trained to make pictures of cars that match a hash, or landscapes that match a hash; all would look like something, but nothing like what the original image was. Then imagine the image was 8MP rather than 0.044MP and the variation that would lead to. This is an example of what I meant by "spoofing": generating an image that matches the hash, but looks nothing like the image that created the hash.
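For reference, an average hash (ahash) boils down to something like the sketch below. This is a simplified take along the lines of the Python imagehash library's average_hash, not its exact implementation, and the helper name is mine.

```python
from PIL import Image
import numpy as np

def average_hash(path: str, hash_size: int = 8) -> int:
    # Shrink to an 8x8 grayscale thumbnail, then set a bit for every pixel
    # brighter than the mean. The hash only captures coarse regional
    # brightness, which is why a GAN that reproduces the average tone of
    # each region (hair color, say) can match the hash while looking
    # nothing like the original image.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)
```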
 
Last edited:

laptech

Suspended
Apr 26, 2013
3,666
4,044
Earth
Just you wait: 6 months, a year, 2 years, 3 years from now, the police will crack a child sex ring, and as part of their investigation into the gang the police will report that thousands upon thousands of child sex images were found on Apple's iCloud servers. Parents, relatives and children's charities will be complaining at Apple as to why they didn't do anything, or enough, to prevent the spread of such images on their iCloud servers, and then someone will speak up and say 'well, Apple planned to, but it got canned due to public pressure over users' right to privacy (quoting today's date).'
 

DeepIn2U

macrumors G5
May 30, 2002
12,909
6,910
Toronto, Ontario, Canada
They were ready to roll it out, but backed off when their consumer base set their brand on fire for combing through their private data.

You think they announce stuff like this without having every detail developed and implemented? They even replied with the chance of getting a false positive.

Seriously?

smh.

Doesn’t matter it was NOT rolled out.

Period.

End of discussion despite your hypothesis.

SMH, like get real.
 

DeepIn2U

macrumors G5
May 30, 2002
12,909
6,910
Toronto, Ontario, Canada
Yes. That’s all it takes to lose users trust. They have planted the seed of doubt in my and many other end users minds. I’m not going to forget and just blindly take them at their word. If you believe any corporation has your best interests in mind and you think they’re always telling the truth to users, get with the program. Apple probably realized how badly they eroded trust in their otherwise loyal base. All it takes is one slip up for people to lose trust permanently.
By that analogy, I'm sure there are MANY products tied, by direct or indirect association, to mistrusted actions or backpedaled statements; you'd probably end up questioning everything in your home, including the home itself and the mortgages you've had.

That kind of “mistrust” and action isn’t something I could live by. To me it’s actions over words.

So yes, when this is implemented outside of beta, Apple will show by their actions that they stand by their word and correct mistakes, as they continually have, year after year after year.

For all the pundits on these boards against anything CSAM-related over Apple’s mention of it a year ago: NOBODY has complained about what Google, Microsoft and others have actually done, which affects all of their users. So that mistrust is poorly placed against Apple.

“If a man (or company) is guilty for what goes on in his (their) own mind, then give me the electric chair for all my future crimes” - Prince.

I apply this to myself, to people I meet, to my circle, and to businesses.

Actions speak louder than words. My trust is in actions. My mistrust is in actions as well, not statements.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,061
11,859
As one of the opponents of this whole idea: it wasn't just about getting a backdoor, it was about using your device and its capabilities to spy on you. The backdoor was just the start of what we worried would come next. Change the hash database and it could look for anything of yours.

That first sentence gets to the point of what my reservations were. On the one hand, this seemed like a very narrow and customer-friendly approach to limit government overreach. CSAM is very, very bad. We should find and prosecute offenders. We shouldn't let it be an excuse to root through a user's file system looking for something else, and it should tell the government, Apple and everyone else absolutely nothing about the innocent. Apple pretty much made sure they accomplished that in a way that also enabled end-to-end encryption in the cloud.

In my mind, the "what will autocrats do" argument is just as over the top and irrelevant as saying we can't have encryption because we need to protect children. Authoritarians will either find a way to bend Apple to their will or will throw Apple out and support a bootlick competitor.

My discomfort is less grandiose. Sometimes I break the law by exceeding the posted speed limit. My phone knows I'm in my car because of Bluetooth or CarPlay (thus parked car waypoints), and it knows how fast I'm going. Should my phone notify law enforcement that I'm speeding?

It seems like a much less noble argument, but it's actually much more relevant. We all try to wriggle out of speeding tickets like it's a sport, but speeding is a crime. Traffic accidents affect a lot of people. How many other crimes like this could technology enforce?

If my technology witnesses a crime and is technically able to report it, should it be obliged to do so? Is demanding that my technology not snitch basically asking Apple to conspire with me on a crime?

This is a bit like the trolley problem in autonomous vehicles. Systems will eventually be smart enough to know the car can save the lives of 2 pedestrians by making a decision that puts the driver at risk; should they?

And then what I consider the kicker: once the public knows the car will make that decision, will anyone buy one?
 
Last edited:
  • Love
Reactions: Darth Tulhu

bobcomer

macrumors 601
May 18, 2015
4,949
3,693
That first sentence gets to the point of what my reservations were. On the one hand, this seemed like a very narrow and customer-friendly approach to limit government overreach. CSAM is very, very bad. We should find and prosecute offenders. We shouldn't let it be an excuse to root through a user's file system looking for something else, and it should tell the government, Apple and everyone else absolutely nothing about the innocent. Apple pretty much made sure they accomplished that in a way that also enabled end-to-end encryption in the cloud.

In my mind, the "what will autocrats do" argument is just as over the top and irrelevant as saying we can't have encryption because we need to protect children. Authoritarians will either find a way to bend Apple to their will or will throw Apple out and support a bootlick competitor.

My discomfort is less grandiose. Sometimes I break the law by exceeding the posted speed limit. My phone knows I'm in my car because of Bluetooth or CarPlay (thus parked car waypoints), and it knows how fast I'm going. Should my phone notify law enforcement that I'm speeding?

It seems like a much less noble argument, but it's actually much more relevant. We all try to wriggle out of speeding tickets like it's a sport, but speeding is a crime. Traffic accidents affect a lot of people. How many other crimes like this could technology enforce?

If my technology witnesses a crime and is technically able to report it, should it be obliged to do so? Is demanding that my technology not snitch basically asking Apple to conspire with me on a crime?

This is a bit like the trolley problem in autonomous vehicles. Systems will eventually be smart enough to know the car can save the lives of 2 pedestrians by making a decision that puts the driver at risk; should they?

And then what I consider the kicker: once the public knows the car will make that decision, will anyone buy one?
Interesting thoughts!
 

SnappleRumors

Suspended
Aug 22, 2022
394
515
You got nothing to hide!

If the government can stick its hand down my pants searching for bombs, it can snoop on people's cell phone picture library to make sure they're not diddling kids!!

You have nothing to hide! If you're not a kiddie molester, you have nothing to worry about!!!

The difference is consent, reasonable suspicion, and probable cause.
 

SnappleRumors

Suspended
Aug 22, 2022
394
515
This wasn’t exactly user feedback. The system was never put into place for people to even see how it worked, and almost no one on here who didn’t like it understood how it worked; they were often just quoting headlines and not the technology behind it.

That being said, even though I was in support of the CSAM idea, I also recognize that people need to feel comfortable with it and I need to accept the outcome. Apple likely could have done a much better job rolling this out and helping people better understand how it was designed to work. Not everyone was going to like it, but in the end I feel it was a loss while others will feel like it's a win.

CSAM scanning is completely inconsistent with their recent advanced security deployment.
Apple realized long ago they were going that direction and knew it was impossible to justify carving out a CSAM exception.
 

TheToolGuide

macrumors regular
Aug 11, 2021
118
87
CSAM scanning is completely inconsistent with their recent advanced security deployment.
Apple realized long ago they were going that direction and knew it was impossible to justify carving out a CSAM exception.
If you had read up on the underlying technology of what they were going to do, you would realize your opinion doesn’t apply. This was on-device scanning: it used ID tags compared against known illegal photos, those tags were tallied up before a warning was ever sent out, a threshold had to be met, a review had to be made by a human to verify it wasn’t an error, the claimed false-positive rate was astronomically low, and then law enforcement would be involved once it was verified the tags matched known CSAM.
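As a toy illustration only: the real design relied on private set intersection and threshold secret sharing, none of which is reproduced here, and only the 30-image threshold comes from Apple's published description. The reporting gate amounted to something like this.

```python
MATCH_THRESHOLD = 30  # Apple's stated minimum number of matches

def should_escalate_to_human_review(match_flags: list[bool]) -> bool:
    # Nothing is surfaced to anyone until the tally of matches against the
    # known-CSAM database crosses the threshold; only then does a human
    # reviewer verify the matches before law enforcement is ever involved.
    return sum(match_flags) >= MATCH_THRESHOLD
```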

End-to-end encryption was never broken, since it was using tags on device. While files can be encrypted in the cloud, Apple wasn’t scanning those files. They were using the ID tags (safety vouchers, I think they called them), and if the data was encrypted and needed to be reviewed, there are solutions a company can create and evolve to address security without completely invading privacy.

Unlike many, I took quite a bit of time to read up on the technology. IMO it was quite secure. Thankfully, I encourage discussion and understanding. The media, political pundits, and loudest internet voices all said they didn’t trust or want it, whether they took the time to understand it or not. My opinion is that the majority did not even have a clue how it worked, just that it was a backdoor for conspiracies. However, that is my opinion and I could likely be in the minority.

At the end of the day it doesn’t matter what I think about any of it. I thought it was a good approach to addressing the issue of pedophilia around the world, and it won’t be in place, for various reasons and opinions. Yet no one has proposed a better solution, and the problem is still growing, not shrinking.
 

Darth Tulhu

macrumors 68020
Apr 10, 2019
2,272
3,782
You got nothing to hide!

If the government can stick its hand down my pants searching for bombs, it can snoop on people's cell phone picture library to make sure they're not diddling kids!!

You have nothing to hide! If you're not a kiddie molester, you have nothing to worry about!!!
Except you don't know if the officer patting my young daughter down (because her phone was flagged) is a rapist or a kiddie molester himself.

You cannot see if someone is a racist, a rapist, a murderer, or a pedophile, unless they give it away themselves.

And therein lies the problem.

All societal-control systems are run by humans. And there is currently no way to look inside a human's heart and see their intentions.

So the laws must mitigate this. What is written MATTERS.
 

robbietop

Suspended
Jun 7, 2017
876
1,167
Good Ol' US of A
Except you don't know if the officer patting my young daughter down (because her phone was flagged) is a rapist or a kiddie molester himself.

You cannot see if someone is a racist, a rapist, a murderer, or a pedophile, unless they give it away themselves.

And therein lies the problem.

All societal-control systems are run by humans. And there is currently no way to look inside a human's heart and see their intentions.

So the laws must mitigate this. What is written MATTERS.
The officer patting your daughter down is only molesting her by government orders. She might be carrying a bomb, sir.
You've done nothing wrong, right? You're not hiding anything, right? So why can't the TSA diddle young girls going to the airport?
We must write laws that the government can see everything on your phone just in case your TSA agent and you are diddling your daughter together in the back of the airport Subway.
 

robbietop

Suspended
Jun 7, 2017
876
1,167
Good Ol' US of A
The difference is consent, reasonable suspicion, and probable cause.
I reasonably suspicion that TSA agents are government diddlers in disguise sent by Joe Biden and Jeffrey Epstein. It's all consent because you consented to having a government diddle children at the airport when they knocked the towers down and we all got really angry and forgot civil rights and allowed us all to be molested in line with our shoes off.
The shoes being off is the key to a good government molesting.
 

SnappleRumors

Suspended
Aug 22, 2022
394
515
I reasonably suspicion that TSA agents are government diddlers in disguise sent by Joe Biden and Jeffrey Epstein. It's all consent because you consented to having a government diddle children at the airport when they knocked the towers down and we all got really angry and forgot civil rights and allowed us all to be molested in line with our shoes off.
The shoes being off is the key to a good government molesting.

I’m gonna need to read this over a few times.
 

I7guy

macrumors Nehalem
Nov 30, 2013
34,353
24,097
Gotta be in it to win it
That first sentence gets to the point of what my reservations were. On the one hand, this seemed like a very narrow and customer-friendly approach to limit government overreach. CSAM is very, very bad. We should find and prosecute offenders. We shouldn't let it be an excuse to root through a user's file system looking for something else, and it should tell the government, Apple and everyone else absolutely nothing about the innocent. Apple pretty much made sure they accomplished that in a way that also enabled end-to-end encryption in the cloud.

In my mind, the "what will autocrats do" argument is just as over the top and irrelevant as saying we can't have encryption because we need to protect children. Authoritarians will either find a way to bend Apple to their will or will throw Apple out and support a bootlick competitor.

My discomfort is less grandiose. Sometimes I break the law by exceeding the posted speed limit. My phone knows I'm in my car because of Bluetooth or CarPlay (thus parked car waypoints), and it knows how fast I'm going. Should my phone notify law enforcement that I'm speeding?

It seems like a much less noble argument, but it's actually much more relevant. We all try to wriggle out of speeding tickets like it's a sport, but speeding is a crime. Traffic accidents affect a lot of people. How many other crimes like this could technology enforce?

If my technology witnesses a crime and is technically able to report it, should it be obliged to do so? Is demanding that my technology not snitch basically asking Apple to conspire with me on a crime?

This is a bit like the trolley problem in autonomous vehicles. Systems will eventually be smart enough to know the car can save the lives of 2 pedestrians by making a decision that puts the driver at risk; should they?

And then what I consider the kicker: once the public knows the car will make that decision, will anyone buy one?
CSAM and speeding are not equivalent. You could be caught speeding and the officer could let you off with a warning. Should people who have CSAM materials get let off with a warning? Traffic accidents from speeding are a part of society. The risk of death and injury inherent in everything we do in our daily lives is always present.

Under no circumstances is CSAM an acceptable part of society, and we might have to balance privacy with tech to stop the interwebs from being an enabler of CSAM. (And sure, there are clearly issues with using tech to catch CSAM, as in that unfortunate incident of a parent sending pics of their child to the doctor using Gmail, which was a clear breakdown in the systems and in common sense.)
 
  • Love
Reactions: compwiz1202

Analog Kid

macrumors G3
Mar 4, 2003
9,061
11,859
CSAM and speeding are not equivalent. You could be caught speeding and the officer could let you off with a warning. Should people who have CSAM materials get let off with a warning? Traffic accidents from speeding are a part of society. The risk of death and injury inherent in everything we do in our daily lives is always present.

Under no circumstances is CSAM an acceptable part of society, and we might have to balance privacy with tech to stop the interwebs from being an enabler of CSAM. (And sure, there are clearly issues with using tech to catch CSAM, as in that unfortunate incident of a parent sending pics of their child to the doctor using Gmail, which was a clear breakdown in the systems and in common sense.)

I think you're missing my point. I am explicitly saying the two aren't equivalent which is why it's interesting to think about.

The debate so far has been on hyperbolic arguments. Exploited children on one side, despots on the other. Holding up the one time a bad solution turned out good against the one time a good solution turned out bad.

As I said, I'm not bothered by going after and prosecuting child predators, I think Apple did this in a way that was private and secure, and I don't think it's an easy backdoor for dictators. I don't think the arguments against the CSAM scanning hold up. What I was addressing is why I'm still uncomfortable. I don't think the question is "should your technology be allowed to spy on you doing horrible things", I think the question is "should it be allowed to spy on you at all".

The ethical arguments are nuanced here and that means shifting the perspective from hyperbolic to mundane. If an argument can be made that society benefits from technology enforcing laws, should it? If it does, how does that affect our relationship with and demand for the technology?

I started thinking about speed limits because they do hold such an ambivalent place in our minds. If we can't make a case for enforcing speed limits but can make a case for CSAM then that means there's a threshold somewhere in between. Where is it? How do we define it? How do we hold the line?

I'm not sure we can answer those questions and therefore hesitate to support implementing something like this even if we all agree the specific use is well on the justifiable side of the line.
 
Last edited:

VulchR

macrumors 68040
Jun 8, 2009
3,419
14,315
Scotland
I'm late to this thread, but this is good news. I am glad that Apple finally saw reason. If they had wanted to scan my photos on their servers after receiving a valid search warrant, that would have been fine with me. Just don't install spying software on my property (my iPhone).

Now I can stop my consumer boycott of Apple, and I've gone back to my normal MR signature.
 

VulchR

macrumors 68040
Jun 8, 2009
3,419
14,315
Scotland
I think you're missing my point. I am explicitly saying the two aren't equivalent which is why it's interesting to think about.

The debate so far has been on hyperbolic arguments. Exploited children on one side, despots on the other. Holding up the one time a bad solution turned out good against the one time a good solution turned out bad.

...
The privacy concerns weren't hyperbole IMO. When the debate broke out, Apple foolishly published a roadmap (a technical document) for how to create such a system that could be followed by authoritarian governments. If a system like this can detect photos of abused kids, then clones of the software could detect flags, memes, sections of text, spoken words, pictures from political meetings and protests, etc. All a government has to do is mandate that the spying software be installed on all mobile phones. And it is not like wiretapping, because it uses the user's own phone CPU cycles and power to do the scanning, so tracking would no longer be through random sampling or targeted searches of phones, but universal surveillance. Sure, the immediate threat was small, but as a mentor once told me, 'it's not f(x), but d(x)' (for the non-math folk: it's not how things are, but how things are changing that is important).
 
  • Like
Reactions: turbineseaplane

Analog Kid

macrumors G3
Mar 4, 2003
9,061
11,859
The privacy concerns weren't hyperbole IMO. When the debate broke out, Apple foolishly published a roadmap (a technical document) for how to create such a system that could be followed by authoritarian governments. If a system like this can detect photos of abused kids, then clones of the software could detect flags, memes, sections of text, spoken words, pictures from political meetings and protests, etc. All a government has to do is mandate that the spying software be installed on all mobile phones. And it is not like wiretapping, because it uses the user's own phone CPU cycles and power to do the scanning, so tracking would no longer be through random sampling or targeted searches of phones, but universal surveillance. Sure, the immediate threat was small, but as a mentor once told me, 'it's not f(x), but d(x)' (for the non-math folk: it's not how things are, but how things are changing that is important).

I feel like people aren’t reading what I’m saying or I’m just really bad at saying it. I think I’ve repeated a few times now that I don’t think the concerns are hyperbole, but the arguments being made are. They are hyperbolic because they are trying to use extreme but narrow justifications to support or oppose solutions rather than looking at the bigger picture.

They also tend to misunderstand how Apple’s proposed technology, and authoritarian governments, work.

To save you having to scroll back on why I think the despot argument is weak:

A hash database of known circulating CSAM, provided by at least two child protection agencies operating under different governments. At least 30 matched known CSAM images must be detected before triggering an alert. Matches are confirmed manually before notifying law enforcement.
What’s to stop them from insisting on it if Apple doesn’t deploy it? What’s to stop them from insisting it be implemented in a less narrow, transparent and secure way?

You’re making a slippery slope argument about regimes that wouldn’t hesitate to push someone off a cliff.
So now the question is what happens when one of those regimes you mention makes end-to-end encryption of certain data illegal and uses CSAM as the stalking horse. Apple has tried to address it in a way that undercuts the false pretext and prevents a fishing expedition, but it blew back in their face. Now it's likely that they'll either be forced to play ball, or exit those markets and leave them to hollow companies who won't even make an effort to hold a line.
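To put the safeguards I recapped above in plainer terms, here is a toy sketch. Hash formats, the voucher cryptography and the human review step are all glossed over; only the two-agency intersection and the 30-match threshold reflect Apple's published design.

```python
def build_screening_database(agency_dbs: list[set[bytes]]) -> set[bytes]:
    # Only hashes present in every agency's database are used, so no single
    # government's agency can unilaterally slip in a target image.
    db = agency_dbs[0]
    for other in agency_dbs[1:]:
        db &= other
    return db

def should_alert(matched_hashes: set[bytes], screening_db: set[bytes],
                 threshold: int = 30) -> bool:
    # An alert, followed by manual confirmation, requires at least
    # `threshold` matches against the intersected database.
    return len(matched_hashes & screening_db) >= threshold
```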
 
Last edited:
  • Like
Reactions: VulchR

Playfoot

macrumors 6502
Feb 2, 2009
283
255
Well, this is good news . . . However, it should be remembered the technology exists. And in this day and age, especially post 9/11 when the world surrendered what remaining rights to privacy existed, it is impossible to know if the technology is NOT in use.

Not one to succumb to conspiracy theories, yet this is one of the few times Apple has backtracked. And I find it odd, after all this time, for Apple to "announce" its intent. Pandora's box has been opened.
 
  • Like
Reactions: VulchR

SactoGuy18

macrumors 601
Sep 11, 2006
4,427
1,562
Sacramento, CA USA
It really came down to this: some state actor hacker could modify Apple's CSAM scanning tools to look for subjects of a political nature. Given what's happening in China, no thank you.
 
  • Like
Reactions: DaPhox