With Apple's CSAM-detection software and on-device scanning coming to the Mac with macOS Monterey, will you still trust your Mac?
I thought this was just iOS devices… I'm guessing that if I don't update to macOS 12 and iOS 15, then my Mac and iDevices will be fine?

Well, that's the thing. The NeuralHash code has already been found in iOS 14.3, supposedly just not activated by Apple, but that might change in something as small as an iOS 14.7.2 update, and there's no way for the end user to know.
You don't have to trust anyone: log out of iCloud and block all outgoing connections to Apple (on your router if possible) and you will be fine.
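For anyone wondering what "block all outgoing connections to Apple" might look like in practice, here is a minimal sketch using the pf packet filter built into macOS (a router ACL would follow the same idea). It assumes the blunt approach of dropping everything destined for 17.0.0.0/8, the address block allocated to Apple; note that this also breaks software updates, iMessage, FaceTime and every other Apple service, so treat it as illustrative only.

    # Additions to /etc/pf.conf (illustrative sketch, not a recommendation)
    table <apple> const { 17.0.0.0/8 }   # Apple's allocated address block
    block drop out quick to <apple>      # drop all outbound traffic to it
    # load:   sudo pfctl -f /etc/pf.conf
    # enable: sudo pfctl -e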
Regarding iCloud: we have to trust Apple's word that NeuralHash is only activated when you are logged into iCloud, but since the scanning happens on-device, it may be running at all times. Even if it isn't, the system is already in place, so it could probably be switched on by a simple software update.
No. I won't be installing Monterey again, or buying a new M-whatever Mac again, for now.
Well, I don't trust Apple. But I don't want to use Windows, so I stopped using iCloud and decided to maintain some kind of trust that my Mac is not spying on every file I have on it. I really try not to be too paranoid, but you never know what's going on in the background. I also started using open-source software more, because it's more transparent. Once the trust is broken, it'll never be the same.
This is mighty scary. I did not know this at all… Hmm. I have kids myself, and anything sexual to do with kids is abhorrent; that goes without saying.
Film Camera. Develop and print your own.
Problem fixed...
The person I feel sorry for is the designer of dolls, you know, the plastic figures that go into the molds before they are dressed.
And the teacher who has a collection of life-like baby robots that are used to teach teenagers what it is really like to look after babies.
And the parents and grandparents who put scans of 'baby in bath' photos from the '50s and '60s up on Facebook.
And the paediatrician writing a book on baby diseases, including nappy rash and all its causes.
And the lawyer writing a book on FGM* with photos.
AI can't be, and never will be, 'just right'. It will either be too sensitive and throw up false positives, or not sensitive enough and miss illegal material.
It's the false positives that the AI will flag that worry me, because once you are flagged, that's it.
* Look it up...

These are all very good points. When you say "once you are flagged, that's it", what do you mean? Is it a visit from the Law or something? Please excuse my ignorance. What a terrible journey Apple are about to embark on…
No, nor will I be installing it. I have an MBP from 2012 that got its last update with Catalina, and my Mac mini is compatible, but I certainly won't be upgrading.
That's why I decided back in the '00s to use Little Snitch on most Macs I have bought since!

Agreed, I've been using it for years on my Macs, since about 2008-ish I reckon. However, even Little Snitch has been hobbled recently, I believe; something to do with it no longer having access to certain parts of the OS? Sorry, I used to be more technical with the OSes, but my days are mostly spent in the Cisco world these days.
My worries are not with CSAM itself. It goes without saying that I, and 99.99% of people, can agree that the spreading of CSAM is wrong and must be fought within the rules of the law, while still protecting the privacy of the innocent!
No, I do actually believe Apple's algorithm is smart enough to distinguish between those sorts of things and actual CSAM. The human-verification step they've added furthermore removes any room for this sort of error. I do firmly believe that.
Can't "still trust" something you don't have, but we have permanently canceled plans to acquire Mac desktops for my wife and I. Instead I'll build us a pair of Linux desktops. I'd hoped to avoid having to do that as I simply don't care to do it any more. But it is what it is.With Apple's CSAM software and Apple bringing on-device scanning to the Mac with MacOS Monterey, will you still trust your Mac?
This is somewhat of a false scenario, as the images have to be matches of known child-abuse photographs already in an existing database.
You do not understand the CSAM hash database. This is not AI-based; it matches against a database of known images.
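To make that distinction concrete, here is a rough sketch of what hash-database matching means, as opposed to an AI classifier looking at picture content. Everything below is invented for illustration: the fingerprint function, the hash values and the threshold are stand-ins, not Apple's actual NeuralHash or matching pipeline.

    import Foundation

    // Sketch of a stand-in for a perceptual hash. A real one is derived from
    // image features so that visually identical copies map to the same value;
    // this placeholder just hashes the raw bytes.
    func fingerprint(of imageData: Data) -> String {
        String(imageData.hashValue)
    }

    // Fingerprints of *known* images, as they would arrive in a database.
    // (Values here are made up for the example.)
    let knownHashes: Set<String> = ["a1b2c3", "d4e5f6"]

    // Hypothetical number of matches required before anything is escalated.
    let threshold = 30

    // Count how many photos in a library match a known fingerprint.
    func matchCount(in photos: [Data]) -> Int {
        photos.filter { knownHashes.contains(fingerprint(of: $0)) }.count
    }

    let library: [Data] = []   // the photos to be checked
    print(matchCount(in: library) >= threshold ? "over threshold" : "under threshold")

The point of the sketch: a doll photo or a scanned 'baby in bath' print can't produce a match, because nothing in the picture is being recognised; only copies of images already fingerprinted in the database can match.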
Also, I generally dislike it when the government gets involved in regulating the tech industry.