
TinaBelcher · macrumors 65816 · Original poster · Jul 23, 2017
It's a natural thing to want to capture the moment of your child's birth, their first bath at home, breastfeeding your child, etc. Okay, so imagine not being able to take those pictures because your child may appear nude in them, out of fear that Apple will be able to scan them and label it as a crime. That upsets me. I remember I was once asked to photograph my sibling bathing my niece for the first time after coming home, and yes, she was nude, but it was an innocent bonding moment between her and her parents bathing her for the first time (the parents were dressed). The fact that Apple could see the pics, call the cops on me and charge me as a "child predator" for something like that… it makes me angry!!
I have lots of physical photos from my childhood where I was nude as a baby/child, and it was pure and innocent.

So, what now? I’d like to hear from all the parents how they feel about this.
 
Why does Apple always mess with or quietly reverse its future policies and approach?
If they notice children in iCloud Photos, they will alert the authorities. But what about them?
[Attached screenshot] This is an image promoting the MacBook Air on their website right now.
 
Apple is postponing CSAM (link), and I think they are doing it because the next iPhone is around the corner, they don't want to hurt their sales, and they know a lot of people are pissed about it.

However, what a lot of people fail to realize (since most comments I see pertain to their iPhones) is that it wasn't just coming to your iPhone; it was also coming to your iPad and Mac.

I am a parent as well, and for now you have nothing to worry about. But if you are like me, the trust factor with Apple now is 50/50.
 
Why does Apple always mess with or quietly reverse its future policies and approach?
If they notice children in iCloud Photos, they will alert the authorities. But what about them?
[Attached screenshot] This is an image promoting the MacBook Air on their website right now.
I don't understand what you're trying to show us. They're not looking for pictures of kids; they're looking for very illegal child porn that already exists out in the wild. You would know if you had it, because it's not something that's easy to accidentally stumble upon (thank goodness).
 
  • Like
Reactions: I7guy
Why does Apple always mess with or quietly reverse its future policies and approach?
If they notice children in iCloud Photos, they will alert the authorities. But what about them?
[Attached screenshot] This is an image promoting the MacBook Air on their website right now.
Please stop spamming the forum with duplicate posts. The above has nothing to do with the OP's question or with CSAM (granted, not everything is known anyway). Do you really believe that photographs of children constitute CSAM?
 
  • Like
Reactions: Homerpalooza
Please stop spamming the forum with duplicate posts. The above has nothing to do with the OP's question or with CSAM (granted, not everything is known anyway). Do you really believe that photographs of children constitute CSAM?
Look at his avatar. Clearly he's trying to spread misinformation because he hates Apple.
 
  • Like
Reactions: Homerpalooza
The post was not hate spam, just an observation: why would Apple depict these children? There is no beach, no family setting, and very lousy photo editing. I was wondering if anyone else on this planet thought the same.

I am typing this on my MacBook Air 2010, which is not supposed to work at all.

The reason I was on the Apple site is that my work offered me a free MacBook Air and an Affinity license. I think I will decline the offer and stick to what I own, and never log back on here again.
Sayonara!
 
The algorithm as-is (before the rollout was delayed) is susceptible to poisoning attacks, since it doesn't match images pixel-perfect but by perceptual features. Therefore, someone could manufacture an image that triggers the warning yet has nothing to do with child porn.

Dunno what will happen after the revision, but there's going to be a way to reduce the chance of poisoning attacks, or they'll work out another training approach entirely.
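To make the "feature-based, not pixel-perfect" point concrete, here is a minimal sketch of a perceptual average hash. This is not Apple's NeuralHash; it's only a toy showing why a slightly perturbed image keeps the same perceptual hash while an exact (cryptographic) hash changes completely.

```python
# Minimal sketch of a perceptual "average hash" (aHash), NOT Apple's NeuralHash.
# It illustrates the point above: matching is feature-based, so small pixel
# changes leave the hash unchanged, unlike a bit-exact (cryptographic) hash.
import hashlib

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def exact_hash(pixels):
    """Bit-exact hash for comparison: any pixel change alters it completely."""
    data = bytes(p for row in pixels for p in row)
    return hashlib.sha256(data).hexdigest()

# A toy 4x4 "image" and a slightly perturbed copy (one pixel changed by +3).
original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [18, 28, 208, 218]]
perturbed = [row[:] for row in original]
perturbed[0][0] += 3

print(average_hash(original) == average_hash(perturbed))  # True: perceptual hash survives the tweak
print(exact_hash(original) == exact_hash(perturbed))      # False: exact hash does not
```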
 
The algorithm as-is (before the rollout was delayed) is susceptible to poisoning attacks, since it doesn't match images pixel-perfect but by perceptual features. Therefore, someone could manufacture an image that triggers the warning yet has nothing to do with child porn.

Dunno what will happen after the revision, but there's going to be a way to reduce the chance of poisoning attacks, or they'll work out another training approach entirely.
That would be pointless, because there's a second server-side hash that is completely different, and it's meant to verify whether the image is an actual match or just a false positive. That happens before human review.
 
  • Like
  • Angry
Reactions: sdz and hans1972
That would be pointless, because there's a second server-side hash that is completely different, and it's meant to verify whether the image is an actual match or just a false positive. That happens before human review.
What? Client-side and server-side have two completely different sets of hashes "meant to verify that the image is an actual match or just a false positive"? Or is it more that the client-side hash is compared against a server-side hash to help determine whether the image is a genuine CSAM match or a false positive? Even so, it doesn't prevent poisoning attacks.
 
  • Like
Reactions: sdz and dk001
What? Client-side and server-side have two completely different sets of hashes "meant to verify that the image is an actual match or just a false positive"? Or is it more that the client-side hash is compared against a server-side hash to help determine whether the image is a genuine CSAM match or a false positive? Even so, it doesn't prevent poisoning attacks.
2 different sets of hashes. Even if someone were to force a collision (they will), it won't pass the second server-side check unless it's actually a real CSAM image. That's partly why this system is so secure. Also, the device itself has no idea whether there's a match or not. It uploads an encrypted voucher to Apple, and only there is it decoded and marked as a match or not. Even then, they need 30 of these vouchers before they can run them through the second server-side check. If those somehow get through, it's up to human review to verify and notify the proper organization for further analysis.

I'd love to know how someone can trick this entire system with an image that looks nothing like a CSAM image.

Edit: From pages 12 and 13: https://www.apple.com/child-safety/...del_Review_of_Apple_Child_Safety_Features.pdf

"Match voucher decryption by iCloud Photos servers Apple’s CSAM detection is a hybrid on-device/server pipeline. While the first phase of the NeuralHash matching process runs on device, its output – a set of safety vouchers – can only be interpreted by the second phase running on Apple's iCloud Photos servers, and only if a given account exceeds the threshold of matches. The local device does not know which images, if any, positively matched the encrypted CSAM database. iCloud Photos servers periodically perform the mathematical protocol to discover whether a given account exceeded the match threshold. With the initial match threshold chosen as described above, iCloud Photos servers learn nothing about any of the user's photos unless that user's iCloud Photos account exceeded the match threshold. This meets our data access restriction requirement.

To make sure Apple's servers do not have a count of matching images for users below the match threshold, the on-device matching process will, with a certain probability, replace a real safety voucher that's being generated with a synthetic voucher that only contains noise. This probability is calibrated to ensure the total number of synthetic vouchers is proportional to the match threshold. Crucially, these synthetic vouchers are a property of each account, not of the system as a whole. For accounts below the match threshold, only the user's device knows which vouchers are synthetic; Apple's servers do not and cannot determine this number, and therefore cannot count the number of true positive matches.

The code running on the device will never let Apple servers know the number of synthetic vouchers directly; this claim is subject to code inspection by security researchers like all other iOS device-side security claims. Only once an account exceeds the match threshold of true matches against the perceptual CSAM hash database can Apple servers decrypt the contents of the corresponding safety vouchers and obtain the exact number of true matches (always in excess of the match threshold) – and the visual derivatives that correspond to those vouchers. In other words, even though the creation of synthetic vouchers is a statistical protection mechanism, it is not a traditional noise-based approach: under this protocol, it is impossible for servers to distinguish synthetic vouchers from real ones unless the number of true positive (non-synthetic) matches cryptographically exceeds the match threshold.

Once the match threshold is exceeded, Apple servers can decrypt only the voucher contents that correspond to known CSAM images. The servers learn no information about any voucher that is not a positive match to the CSAM database. For vouchers that are a positive match, the servers do not receive a decryption key for their images, nor can they ask the device for a copy of the images. Instead, they can only access the contents of the positively-matching safety vouchers, which contain a visual derivative of the image, such as a low-resolution version. The claim that the safety vouchers generated on the device contain no other information is subject to code inspection by security researchers like all other iOS device-side security claims."

Also from page 13:
"Once Apple's iCloud Photos servers decrypt a set of positive match vouchers for an account that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation."
 
Last edited:
Why does Apple always mess with or quietly reverse its future policies and approach?
If they notice children in iCloud Photos, they will alert the authorities. But what about them?
[Attached screenshot] This is an image promoting the MacBook Air on their website right now.

The post was not hate spam, just an observation: why would Apple depict these children? There is no beach, no family setting, and very lousy photo editing. I was wondering if anyone else on this planet thought the same.

I am typing this on my MacBook Air 2010, which is not supposed to work at all.

The reason I was on the Apple site is that my work offered me a free MacBook Air and an Affinity license. I think I will decline the offer and stick to what I own, and never log back on here again.
Sayonara!

It sounds like you're intentionally misinterpreting what Apple's CSAM feature was.
Apple's CSAM feature (which only applies to iCloud Photos and iCloud Mail, and no other local or online service Apple offers) only flags KNOWN photos of Child Sexual Abuse Material that have been reported by MULTIPLE child safety agencies to the National Center for Missing & Exploited Children.

Apple's not going to report you to the police because you have pictures of your newborn child and they don't have pants on. Apple's not going to send you to prison because you work for NatGeo and you're editing a video where a child doesn't have pants on. Apple's CSAM feature wouldn't even trigger because, again, it has to be KNOWN photos of child abuse REPORTED to those agencies.
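A tiny sketch of what "known photos only" means in practice: flagging is a lookup against hashes of images already catalogued by those agencies, not an analysis of what a new photo depicts. The hash values below are made-up placeholders.

```python
# Hypothetical placeholder hashes; a real database would hold perceptual hashes
# of already-catalogued abuse images supplied through NCMEC-style agencies.
known_csam_hashes = {"a3f1c29e", "9c0d4471", "77be0a52"}

def would_flag(photo_hash: str) -> bool:
    # A brand-new family photo cannot be flagged: its hash isn't in the set.
    return photo_hash in known_csam_hashes

print(would_flag("e52b90dd"))  # False
```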
 
  • Like
Reactions: Homerpalooza
2 different sets of hashes. Even if someone were to force a collision (they will), it won't pass the second server-side check unless it's actually a real CSAM image. That's partly why this system is so secure. Also, the device itself has no idea whether there's a match or not. It uploads an encrypted voucher to Apple, and only there is it decoded and marked as a match or not. Even then, they need 30 of these vouchers before they can run them through the second server-side check. If those somehow get through, it's up to human review to verify and notify the proper organization for further analysis.
If this system is as secure as it sounds, I wonder why so many organisations, universities and advocacy groups criticise and oppose it. There must be a reason behind it, and I understand that it probably is not in the best interest of the general public.
I'd love to know how someone can trick this entire system with an image that looks nothing like a CSAM image.
To extremely oversimplify it: they use math. Using machine learning against machine learning, for example, to generate targeted images that cause a hash collision and trigger a false positive.
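To give a flavour of what "machine learning against machine learning" can look like, here is a toy collision attack against a made-up differentiable feature hash (the sign of a random linear projection). This is not NeuralHash and not a real attack recipe; it only shows that feature-based hashes can be optimized against, which bit-exact hashes cannot.

```python
# Toy illustration only: a tiny differentiable "feature hash" (NOT NeuralHash)
# and gradient descent that nudges an unrelated image until its hash matches
# a target's, i.e. a forced collision on a perceptual-style hash.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))              # toy "feature extractor"

def feature_hash(x):
    return (W @ x > 0).astype(int)         # 16-bit perceptual-style hash

target = rng.normal(size=64)               # stands in for a database image
attack = rng.normal(size=64)               # unrelated starting "image"
target_bits = feature_hash(target)

# Subgradient descent on a hinge loss that pushes every projection W@x to the
# same side of zero as the target's bits.
signs = np.where(target_bits == 1, 1.0, -1.0)
for _ in range(500):
    margins = signs * (W @ attack)
    grad = -(W.T @ (signs * (margins < 1.0)))
    attack -= 0.05 * grad

print((feature_hash(attack) == target_bits).all())   # should print True: forced collision
print(np.linalg.norm(attack - target) > 1.0)          # ...on a still very different "image"
```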
 
If this system is as secure as it sounds, I wonder why so many organisations, universities and advocacy groups criticise and oppose it. There must be a reason behind it, and I understand that it probably is not in the best interest of the general public.

To extremely oversimplify it: they use math. Using machine learning against machine learning, for example, to generate targeted images that cause a hash collision and trigger a false positive.
I wonder why Apple didn't think about the negative effects of the system they made.

As for your second point, I know you can force a collision, but how is that forced collision going to make it through the second process on the server?
 
If this system is as secure as it sounds, I wonder why so many organisations, universities and advocacy groups criticise and oppose it. There must be a reason behind it, and I understand that it probably is not in the best interest of the general public.

To extremely oversimplify it: they use math. Using machine learning against machine learning, for example, to generate targeted images that cause a hash collision and trigger a false positive.

For two reasons:
1) The person is uneducated on the situation and thinks Apple is now able to just see all the contents of your phone at any time, whenever they want (which is an ignorant sentiment I've seen all over this forum, including this thread).

2) The concern is how this could be used against us in the future, which is a valid critique. I don't think anyone is critical of the feature for what it's being used for at the moment, and many would agree that on paper it's probably not a bad idea. It's certainly more private than what Google does. The issue is that having an on-device system like this opens the potential for abuse. No country is telling Apple that they now have to start using this technology to scan for political opponents, but there's also nothing preventing a country from requiring Apple to do that. That's why people are worried.
 
  • Like
Reactions: Homerpalooza
I wonder why Apple didn't think about the negative effects of the system they made.

As for your second point, I know you can force a collision, but how is that forced collision going to make it through the second process on the server?
I have no idea. I'm not in the field of machine learning and image recognition. Just based on some basic tutorials on machine learning and neural networks, I figure a collision is possible and not particularly hard to produce these days.

As for the second layer on Apple's servers, the human reviewer presumably only gets a lower-resolution version of the original image that is good enough to judge. Because the match is a fuzzy one, collisions can trigger quite a few human reviews as a result. Whether the account owner's fate should hang in the hands of a stranger, I dunno, but that's beyond the scope of this discussion.
 
  • Like
Reactions: dk001
For two reasons:
1) The person is uneducated on the situation and thinks Apple is now able to just see all the contents of your phone at any time, whenever they want (which is an ignorant sentiment I've seen all over this forum, including this thread).

2) The concern is how this could be used against us in the future, which is a valid critique. I don't think anyone is critical of the feature for what it's being used for at the moment, and many would agree that on paper it's probably not a bad idea. It's certainly more private than what Google does. The issue is that having an on-device system like this opens the potential for abuse. No country is telling Apple that they now have to start using this technology to scan for political opponents, but there's also nothing preventing a country from requiring Apple to do that. That's why people are worried.
The second one seems pretty valid. But what I wonder about is the technical reason why Apple's current implementation is dangerous. Several universities have built demo systems as close as possible to what Apple describes, and one (Stanford? I forget) said the end result was so destructive that they halted the research. Sadly there was nothing technical in that article, IIRC, and what I remember is too little to pinpoint the exact article.

As for the first one, it's true that people are often stupid at times, but I'd say that, thanks to fuzzy matching and false positives, there's a nonzero chance that some photos of a non-CSAM nature get checked by strangers inside Apple. Granted, the chance is very low (or the system would be unsuitable for production use), but the risk is there.
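To put a number on "nonzero but very low", here is a back-of-the-envelope binomial calculation. The per-image false-match probability and the library size are assumptions picked for illustration, not Apple's published figures; only the threshold of 30 comes from the thread above.

```python
# Back-of-the-envelope check of "nonzero but very low", with ASSUMED numbers.
from math import comb

p = 1e-6          # ASSUMED per-image false-match probability (hypothetical, not Apple's figure)
N = 20_000        # assumed photo library size
THRESHOLD = 30    # match threshold discussed above

# Binomial tail P(X >= THRESHOLD); terms shrink so fast that summing a short
# range past the threshold is enough.
tail = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(THRESHOLD, THRESHOLD + 50))
print(f"P(>= {THRESHOLD} false matches in {N} photos) ~ {tail:.3e}")  # astronomically small under these assumptions
```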
 
  • Like
Reactions: macOS Lynx
The second one seems pretty valid. But what I wonder about is the technical reason why Apple's current implementation is dangerous. Several universities have built demo systems as close as possible to what Apple describes, and one (Stanford? I forget) said the end result was so destructive that they halted the research. Sadly there was nothing technical in that article, IIRC, and what I remember is too little to pinpoint the exact article.

As for the first one, it's true that people are often stupid at times, but I'd say that, thanks to fuzzy matching and false positives, there's a nonzero chance that some photos of a non-CSAM nature get checked by strangers inside Apple. Granted, the chance is very low (or the system would be unsuitable for production use), but the risk is there.
Saying the results are destructive is putting emotion into their statement.

Assuming the system really does work the way Apple describes it, then for the end user there's no genuine worry. Even with the potential for false positives, Apple has human intervention as part of its process, and at the same time the amount of effort that would go into first identifying and obtaining the hashes of known images is not worth it.

It's the same logic behind why people don't really brute-force passwords anymore: yes, you could, and you'd get some targets, but why use a complicated system when it's infinitely easier to just scam someone with a fake login website? It's the same with spamming someone with known child abuse images or disguising the hash in completely normal photos. Yes, it can be done, but the amount of effort that would go into it? Your average iPhone user is not a target for that. It would be easier to just download a photo from Twitter or Tumblr (unfortunately, the public web is littered with that kind of content, and it's really gross and sad), email it to someone's Gmail account or send it as an SMS, and then just call the police.

The real concern is how this technology could be used in the future. The easiest example would be China requiring a database on iPhones that flags photos of Tank Man in Tiananmen Square. There does exist a prime opportunity to oppress people with this kind of technology (though I would argue that threat already exists; nothing is stopping China from telling Apple to start using face recognition in photos to identify targets, and that data isn't on-device only).
 
  • Like
Reactions: Homerpalooza
I believe that Apple has, again and again, shown itself to be a company that supports human rights, privacy and protection for children. They were among the first to build safety features into their devices for parents to protect their children, and they will continue to advocate against predators who try to take advantage of technology to harm people.

This red herring that "photos of my semi-naked child at the seashore will have big bad Apple coming down on me and calling the police" is beyond ridiculous.

Apple is protecting children from being spammed with pornography. Apple is working with agencies to stop the spread of existing pornography.
 
Saying the results are destructive is putting emotion into their statement.
That's a quote from the original article as I remembered it, not my own view. I'm not part of that research team. Yeah, it sounds emotional, but humans are emotional creatures, so I can't really fault them for that one.
 