Of course it’s still true. Apple cannot see any of the scanning info on your phone. The only time they would ever see any scan results is if you have multiple CSAM images on your device and you attempt to upload those to iCloud. You never had any true privacy on iCloud to begin with, as Apple can access all your files there if they so desire. Read the iCloud legal document on Apple's website.
I understand the technology is complex, but some of you really lack basic reading comprehension skills. Either that, or you're so eager to find evidence of conspiracy or corporate overreach that you interpret everything through that distorted lens.
Ironically, things are now going to be even more private than before, yet many of you are acting like this is a step backwards because, again, you are interpreting things through a distorted lens.
Apple's CSAM detection mechanism is much more than a simple comparison of hash codes (in fact, "hash" isn't even the right word here, as these "hashes" are more like keys which encode specific properties of the image so it can be processed by ML). What it does is scan and correlate image properties to create such a key, and the specific properties in this key are then weighted against keys from the CSAM database. If an (undisclosed) number of parameters are within an (undisclosed) threshold, the system assumes the image content is similar (*not* identical) and the photo does not get its upload voucher.
If the number of images for which the upload voucher is withheld exceeds (another undisclosed) threshold, the account gets flagged.
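To make that two-threshold structure concrete, here's a rough sketch in Python. The key format, the similarity metric and both threshold values are made up for illustration; Apple hasn't disclosed the actual parameters or how the keys are compared internally.

```python
# Hypothetical sketch of the two-threshold flow described above. The key
# representation, the similarity metric and both thresholds are invented
# for illustration; Apple has not published the real parameters.
from typing import List

MATCH_THRESHOLD = 0.9     # assumed per-image similarity cutoff (undisclosed)
ACCOUNT_THRESHOLD = 30    # assumed number of withheld vouchers before flagging (undisclosed)

def similarity(key_a: List[float], key_b: List[float]) -> float:
    """Toy metric: fraction of key parameters that are close to each other."""
    close = sum(1 for a, b in zip(key_a, key_b) if abs(a - b) < 0.05)
    return close / len(key_a)

def matches_database(image_key: List[float], csam_keys: List[List[float]]) -> bool:
    """An image counts as 'similar' (not identical) if enough of its key
    parameters fall within the threshold for any database entry."""
    return any(similarity(image_key, db_key) >= MATCH_THRESHOLD for db_key in csam_keys)

def account_gets_flagged(photo_keys: List[List[float]], csam_keys: List[List[float]]) -> bool:
    """Withhold the upload voucher for every matching photo; flag the account
    once the number of withheld vouchers exceeds the second threshold."""
    withheld = sum(1 for key in photo_keys if matches_database(key, csam_keys))
    return withheld > ACCOUNT_THRESHOLD
```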
Now, since you seem to think this is all A-OK, let's address the issues here.
First of all, there is the reliability of the system. Apple states an error rate of less than 1 in a trillion scans. That sounds very low at first, but it really isn't. First, it's a per-scan figure, and users tend to have more than one photo. Second, it's a calculated figure under the assumption that there are no bugs or security holes. Third, and most importantly, it's a figure based on a statistical analysis, which means that in reality the system may mis-ID much more often (a lot more often, in fact), and because of how image analysis works, it's likely that this will happen with images that share certain properties (for example, a series of photos taken by the same user).
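To put the "per-scan vs. per-user" point in numbers, here's a back-of-the-envelope calculation. It takes the 1-in-a-trillion figure at face value and assumes every photo is an independent trial, which (as argued above) is exactly what you can't rely on in practice; the library size and user count are just assumptions.

```python
# Back-of-the-envelope: what a "1 in a trillion per scan" rate means at scale,
# assuming (unrealistically) that every photo is an independent trial.
p_per_scan = 1e-12            # quoted figure, taken at face value
photos_per_user = 20_000      # assumed size of a typical photo library
icloud_users = 1_000_000_000  # assumed number of iCloud Photos users

# Chance that a single user gets at least one false match across their library.
p_user = 1 - (1 - p_per_scan) ** photos_per_user
print(f"per-user probability: {p_user:.2e}")                            # roughly 2e-08

# Expected number of users hit by at least one false match.
print(f"expected falsely matched users: {icloud_users * p_user:.1f}")   # roughly 20
```

Under those assumptions you'd still expect a couple of dozen wrongly matched users, and that's before bugs, correlated photos and security holes enter the picture.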
Then of course there is the fact that pretty much all of Apple's software has bugs (lots of them), and it would be naive to expect this functionality to be the single exception. That comes on top of the fact that image processing and face recognition have a long and solid track record of failing to reliably identify people and objects in photos (and the track record of ML algorithms correctly identifying CSAM is even worse).
Of course, there's still the second step, which is a manual check by a human Apple employee. However, this doesn't really solve the issue. For one, according to Apple, the human check is performed based on the image keys, i.e. Apple's support staff won't see the photos. It is therefore questionable whether an assessment by Apple's staff will come to a different result than the ML algorithm that made the classification in the first place. And even if that weren't the case, it's pretty naive to trust the company that already ruined the life of a teenager after "screening evidence":
https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
If you really want to trust the same people with "doing the right thing" when you're flagged as a serial pedophile, then be my guest.
Now, you might still wonder what all the commotion is about, since everyone else does the same, right? And it's correct that Google, Microsoft and most other cloud providers perform similar scans on images that people upload to their cloud space. In fact, even Apple has done the same with iCloud. And they all have teams which look at images that get flagged by the system, and that's OK, since it's what you agreed to in the T&Cs.
What's new is that Apple is now moving the detection process from the cloud to the user's device, which comes with its own problems.
For one, once the system is in place there is little to prevent it from scanning all photos (not just the ones that are uploaded to iCloud), or other content. The algorithm doesn't care; it will scan anything it's pointed at and look for anything it's told to look for. Even if you trust Apple to "do the right thing" (which is extremely naive, considering that Apple is happy to rat out its Chinese users to the regime in China as long as the profits stay high), there's still the risk that the mechanism will be abused by third parties. Which isn't really that far-fetched if you have been following the news.
In addition, because the manual check is now performed without looking at the images in question, it's bound to fail at a rate similar to that of the automatic flagging. Which is bad, as anyone who has ever been accused of being a pedophile while innocent can tell you. It's one of the very few things where rumors alone can ruin a person's life.
Lastly, there's the question of "why". Apple claims that this protects user privacy, but that is completely false. First of all, it's a mechanism that accuses the account owner of storing child porn images, an accusation that requires further investigation, and that investigation cannot happen without the identity of the account owner and the evidence (the images in question) being known. Even with the new system, Apple can identify the account owner during the second step of manual review by its staff. So there goes your privacy. Of course, you could argue that if you're accused of a crime (even more so one as horrible as child abuse), then privacy is less important than investigating the crime, and that's OK (I agree with this). However, the same is true for server-side scanning as performed by Google and others, and until now by Apple.
This, however, raises the question of why we even need on-device scanning if the only intention is to check photos that are going to be uploaded to iCloud for CSAM, because the same result can be achieved with the existing server-side scanning. And people are fine with server-side scanning since they know that the information they upload is shared with the cloud provider, which maintains a separation between data on the device and data in the cloud.
And since the very same can already be achieved with existing server-side scanning, the logical conclusion must be that the reasons for introducing on-device scanning go beyond the search for evidence of CSAM. And people are rightfully angry that a phone manufacturer like Apple, which touts its credentials as protector of its users' privacy at every opportunity, is installing a facility which allows the remote search of its users' devices, and does so with no control by the users themselves.
Apple's actions leave a very bad taste and raise a lot of questions. You really have to be willfully ignoring what has been going on over the last few years to think that this is all just a big nothing-burger.