Surprised some are pleased it's ON DEVICE, when that carries the same Machiavellian potential as the CSAM system. Indeed, it's likely the same implementation, using the same technology, but rebadged. Now it can be switched on or off, but history shows us that often means nothing.

No, it's not the same technology.

The CSAM Detection System was exceptionally bad at discovering nudity or pornography in general.

It is similar technology to what has been in iPhoto / Photos for 15 years or more.
 
A secure hashing algorithm should produce a totally different hash if even one pixel is modified, which would make circumventing that system trivial. To work around that, Apple’s system (like others) applies some fuzzing to help “undo” modifications. That fuzzing inherently also increases the likelihood of hash collisions, where two different images produce the same hash.
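To make the contrast concrete, here is a minimal sketch, assuming a classic difference hash (dHash) rather than Apple's actual NeuralHash, and using a synthetic 9x8 grayscale array in place of a downscaled photo. Nudging one pixel by one brightness level completely changes a SHA-256 digest but leaves the perceptual hash identical:

```swift
import Foundation
import CryptoKit

// Difference hash (dHash): set a bit wherever a pixel is brighter than its
// right-hand neighbour. 8 rows x 8 comparisons = 64 bits.
func dHash(_ gray: [[UInt8]]) -> UInt64 {
    var hash: UInt64 = 0
    for row in gray {
        for x in 0..<(row.count - 1) {
            hash = (hash << 1) | (row[x] > row[x + 1] ? 1 : 0)
        }
    }
    return hash
}

// Cryptographic hash of the raw pixel bytes, for comparison.
func sha256Hex(_ gray: [[UInt8]]) -> String {
    let digest = SHA256.hash(data: Data(gray.flatMap { $0 }))
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Synthetic 9-wide, 8-tall gradient standing in for an already-downscaled photo.
var image: [[UInt8]] = (0..<8).map { y in (0..<9).map { x in UInt8(x * 28 + y) } }
let dBefore = dHash(image)
let shaBefore = sha256Hex(image)

image[3][4] += 1  // modify a single pixel by one brightness level

print("dHash unchanged:  ", dHash(image) == dBefore)       // true: brightness ordering survives
print("SHA-256 unchanged:", sha256Hex(image) == shaBefore) // false: every bit position scrambled
```

Real systems apply far more aggressive fuzzing (tolerance to scaling, cropping and re-encoding), which is exactly what widens the collision window described above.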

The consequences of a hash collision are, of course, pretty dire in this specific case. Even an account being flagged for review could mean massive headaches if the person was innocent. Apple tried to design around this by requiring manual review and requiring a number of matches before that review would occur, but that doesn’t fix the separate issue of who controls the hash databases.

Apple was going to require a hash to be present in multiple databases, but as we’ve seen recently with Russia and Belarus for example, it’s possible for a state to develop so much influence over its region that it effectively has puppet states…and it’s not like Russia’s government at this time is particularly known for its love of freedom of expression. So what happens, then, when an oppressive government with puppet states tells Apple to add its preferred hashes or face a ban on Apple imports? This leaves the system vulnerable to abuse against innocent people for reasons completely unrelated to CSAM.
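As a toy sketch of those two safeguards, and of why they collapse under exactly this kind of pressure, consider the following. This is not Apple's real protocol (which used private set intersection and threshold secret sharing); the hash values are illustrative, though roughly 30 matches was Apple's announced review threshold:

```swift
// Toy model: a photo hash counts only if it appears in lists from at least two
// nominally independent child-safety organizations, and human review triggers
// only past a threshold of matches.
struct SafeguardedMatcher {
    let databases: [Set<UInt64>]   // hash lists from "independent" organizations
    let reviewThreshold: Int
    var matchCount = 0

    mutating func check(_ photoHash: UInt64) -> Bool {
        let sources = databases.filter { $0.contains(photoHash) }.count
        if sources >= 2 { matchCount += 1 }   // the multiple-database rule
        return matchCount >= reviewThreshold  // only then would humans look
    }
}

// The weakness: if one state controls two of the lists, planting the same hash
// in both trivially satisfies the "independence" check.
let targetHash: UInt64 = 0x0BAD_CAFE  // hypothetical non-CSAM image a regime wants flagged
var matcher = SafeguardedMatcher(
    databases: [Set([targetHash]), Set([targetHash])],  // two colluding "independent" lists
    reviewThreshold: 30
)
```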

That’s what the issue was.
There are a lot of ‘should, going to, possible to, could mean…' which equates to a lot of ifs, maybes and nothing. If you’re going to contradict me, use facts instead of maybes.
 
No, it's not the same technology.

The CSAM Detection System was exceptionally bad at discovering nudity or pornography in general.

It is similar technology to what has been in iPhoto / Photos for 15 years or more.
Hans, are you 100% sure? History shows that privacy has been overridden many times, often in the guise of something that looks very basic, and on occasion 'optional' user switches have been shown not to work whether on or off; hence the fines levied, and that's only for what was found out.

So tell us precisely what technology the 'new' system employs, because it's ON DEVICE SURVEILLANCE, and it's that which makes it the slippery slope.

We know companies adjust their 'privacy' according to governments' requests (see China), and with several BILLION Apple devices out there it is intrusive to even consider ON DEVICE surveillance of any type. Nothing stops them implementing it on their own systems, but our systems should be left alone. We pay for these devices, we pay for the energy to run them and we pay for the performance, and there is a legitimate expectation of absolute privacy on OUR devices. We don't rent the devices from Apple; we OWN them, and that should make them sacrosanct from ON DEVICE surveillance, whichever way you wish to look at it.

And no less than MacRumors recently published:
"According to MacRumors, security researchers Tommy Mysk and Talal Haj Bakry found that Apple’s device analytics data includes an ID called ‘DSID’ – Directory Services Identifier. The analysis found that the DSID identifier is unique to every iCloud account. It is linked to a specific user and includes their name, date of birth, email, and all information stored on ‌iCloud‌."
 
If this has been built into the system, it is not OK. Say Apple integrated this feature into UI components like UIImageView: that would mean that every time an image is displayed using that component, some AI action takes place in the background, analyzing your private data and consuming power.

If that is the case, Apple has built a system of total surveillance into the OS, maybe just not linked to the cloud or the authorities right now.
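Purely as an illustration of that worry, here is a hypothetical sketch of what such a hook could look like. Nothing here is Apple's code: VNClassifyImageRequest is a real on-device Vision classifier, but the subclass and the hidden hook are invented for the sake of argument:

```swift
import UIKit
import Vision

// Hypothetical: an image view that quietly runs a classification pass on every
// image it displays. Each display costs CPU/GPU time and battery, and nothing
// in the UI reveals that it happened.
final class AnalyzingImageView: UIImageView {
    override var image: UIImage? {
        didSet {
            guard let cg = image?.cgImage else { return }
            let request = VNClassifyImageRequest()  // real on-device Vision API
            let handler = VNImageRequestHandler(cgImage: cg, options: [:])
            try? handler.perform([request])
            // A surveillance variant could log or transmit request.results here.
        }
    }
}
```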



 
At some point someone has to be trusted to check for this stuff, or we just have to live with CSAM.
Actually, no: the police could do police work instead of mass-scanning everything.

Plus, maybe you don't realize it, but the scanning is done by a private entity that isn't accountable even to the limited degree the government is. And as for this article: who the f**k is Apple to decide what I do or don't want to see on my phone?
 
At some point someone has to be trusted to check for this stuff, or we just have to live with CSAM.
CSAM is the result of child abuse, which happens mostly within families and other places of trust. That is where we have to work to solve this very real problem.

I fear that politicians who argue in favour of more surveillance in the US, the EU, or the UK will point to Apple's feature to argue that such scans are harmless to users' privacy. That recognising sensitive pictures is something other than recognising CSAM is something they refuse to understand.

We already have spam filters, and we all know how imperfect they are; I don't think these filters will be any better.

Which means that if those filters are ever used to find and report CSAM, many, many, many people will have their private communications and documents read by some law-enforcement officer. I think that is problematic, and Apple is building the infrastructure for that possibility into the iPhone.
 
Then we have an article today emphasising the slippery slope already occurring in China.

"But soon after Mr Xi secured a third term, Apple released a new version of the feature in China, limiting its scope. Now Chinese users of iPhones and other Apple devices are restricted to a 10-minute window when receiving files from people who are not listed as a contact. After 10 minutes, users can only receive files from contacts. Apple did not explain why the update was first introduced in China, but over the years, the tech giant has been criticised for appeasing Beijing."
 
Built-in weenie detector? Wonder how they trained that AI…
That AI is now a failure-to-launch case, living in the basement, claiming it's got a telework job that its mom and dad AIs don't understand. It has no social skills, but a following of other companies' p0rn-screening AIs on LubeRumors.
 
So, this p0rn-screening entity runs "locally" on the phone? And it's aware enough to discriminate between prurient expression, beach photos and art? Will it constantly hector you for decisions until you hate receiving photos or turn it off?

Can you really turn it off? Is it really Opt-In at all? How does its generative adversarial network model get updated? Does it log YES/NO and HASH for future criminal proceedings? Might you accidentally get bombed with whatever p0rn is illegal in your jurisdiction?

How many of you have the wherewithal to effectively sandbox a device, capture netflow, and reverse engineer code to reveal what your walled-garden, black-box devices are up to?

But let's face it, there's no point getting all OCD paranoid about it. We're all in the churn now, and there's nowhere else to go. But let's not delude ourselves into sincerely trusting these "tools." They're not our tools. WE'RE the tools. (Not sure that came out right ;-)

How subtle a form can manipulation take? Fight the Powe... HA! Nah... have a drink. Play a game. Send a snarky text. That's all we're here for, right?
 
Because it's rare for a guy to receive unsolicited nudie pics of women.😉
Perhaps not as rare as you think. And there are more than two combinations of sender and receiver gender. In any case, as long as this new Apple content warning system is something that can be turned off and doesn't report back to Big Brother and the Junior Anti-Sex League, I suppose it's OK.
 
There are a lot of ‘should, going to, possible to, could mean…' which equates to a lot of ifs, maybes and nothing. If you’re going to contradict me, use facts instead of maybes.
I mean, it's impossible to exploit a system that never shipped. The specific concerns that privacy advocates and security researchers raised about this are well documented and easy to find with a quick Google search, but I did my best to summarize them, including giving Apple credit where due for its attempts to address those concerns. My apologies.
 
My niece got nude pictures from boys at school when she was 14 that she didn't request and definitely didn't want. Is it OK if God saves her from those pictures? Or, in this case, Apple? There are some important, real use cases for features like this.
I hope those pictures and the senders got reported to the principal, the boys' parents, and the authorities.
 
Does this mean Apple will be analysing everything we see? That would make this like the child porn detection feature, where the main takeaway was that all photos would be analysed via a third party.
Nah... their crappy on-device AI will do it, which means it will tell you someone sent you porn; then, all excited, you spam the show button and the disappointment hits when it's grandma with a rolling pin in her hand that Siri thought was obscene.
 
Opting into alerts versus opting into the SCREENING is a valid distinction, prompting a valid question: are you capable of discerning the difference? Understand and accept that these processes can be so well hidden that most folks could never determine whether they are running or not.

If it were proven some time later that the service was indeed screening, hashing, logging and forwarding all the time... would that be okay? If it turned out Apple's license reserved the right to turn it on and off whenever they wanted, would that be okay?

If it were determined some time later that the on-device (or cloud) screening was not fixed, but actually parametrized to look for other things, and that Apple could add parameters any time it wanted (or was ordered to by a sealed warrant), would that be okay?

All this screening could already be under way, and you'd never know it. So, the actual question is: Do you care?
 
So customers pay for the devices and pay for the energy to charge them, but Apple can usurp those devices again in the name of protection; and again, it's ON DEVICE, where Apple has no right to act as arbiter.
I think this is correct.
I'm involved in sensitive work, and I would not be allowed to use Apple devices if APPLE could interrogate my device.
Are you allowed to use your Apple device? Seems it would be big news if that were the case. Didn't the Army replace Android with iOS? And the NYPD went all-iPhone?
There have been so many instances where so-called optional functions have proven to operate regardless. How many companies have been fined for misusing customers' data? But in this case it's worse: Apple sells you a device, then expects you to pay for the energy to charge it while it runs a function that must affect performance. This may not be CSAM in name, but the principle is the same: SURVEILLANCE ON DEVICE. NO NO NO, Apple.

Just take a cursory look at how many times Apple has been fined over privacy. It's no good talking the talk if they then don't walk the walk.
How many times in the US? That should speak volumes.
It's the same slippery slope that industry experts condemned Apple for with the CSAM ON DEVICE surveillance, and it is the fact that it's ON DEVICE that makes it that slippery slope.
Still, it was the court of public opinion and not legality that stopped Apple from moving forward with on-device CSAM scanning.
Of course they can check whatever they like on their own servers, but usurping users' purchased Apple devices is not on. We've already seen how Apple acquiesces to China's demands in certain situations. The same is true of US agencies that cry out for a backdoor, and this is the slippery slope to giving them one.
Maybe, but it seems to be legal.
Anyone who suggests Apple has never been in breach of the rules only has to do a cursory check.
This is not a case of throwing the baby out with the bathwater. There's probably not a Fortune 500 corporation in the US that has been totally free and clear of any investigation whatsoever, or of some lawsuit alleging some misdeed. Let's not take one concern and make it the majority. That's a slippery slope also.
They are by no means the worst, but this is the slippery slope... again in the guise of protection, and it flies in the face of Apple's own comments on privacy.
Privacy was never about protecting people from breaking the law.
Don't be fooled by it being an option; we've seen in the past how 'optional' functions operated even when users hadn't switched them on.
Vote with your $$$.
[…]
 