Yes, I do still believe that Apple is a privacy-oriented company. I understand the possible ramifications, and I believe that moving from technological privacy protection (you can't access information because the design of the phone prevents it) to policy-based privacy protection (you can't access information because we say so) is a step backwards.
But Apple took two important steps to mitigate that risk:
(1) They left the process on the phone and not in the cloud. I've heard assertions that the opposite would be preferable: if the process ran on Apple's servers, you could protect yourself by simply not putting content on their servers. That is a very good point. But if the process were on the servers, it would truly become a black box, and it would be far easier for Apple to use the feature in other ways without disclosure.
(2) They baked the comparison data (a database of hashes of known CSAM) right into iOS. This is a good thing. As far as I can tell from reading about it, there is no mechanism for your phone to receive an updated hash list without a full iOS update. That means there is no background process that could be co-opted to push a list of other photos onto the phone for comparison. It also means that watchdog groups and individuals can examine the list that ships in iOS and confirm that it is still only the CSAM list, and not other data.
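To make point (2) concrete, here is a minimal sketch of what "baked into iOS" implies. Every name here is hypothetical and this is not Apple's actual code or API; the point is the shape of the design: the list is an immutable constant compiled into the OS image, the membership check runs locally, and there is no code path that could fetch new hashes at runtime.

```swift
import Foundation

// Hypothetical sketch (not Apple's implementation): the comparison
// database ships as an immutable constant inside the OS image, so it
// can only change when the whole OS is updated. The placeholder bytes
// below stand in for opaque perceptual-hash digests.
let bakedInHashList: Set<Data> = [
    Data([0x11, 0x22, 0x33, 0x44]),
    Data([0xAA, 0xBB, 0xCC, 0xDD]),
]

// On-device membership test. Note what is absent: there is no function
// for downloading, appending, or swapping hashes at runtime; the list
// is fixed at build time and auditable in the shipped OS image.
func matchesKnownDatabase(_ perceptualHash: Data) -> Bool {
    bakedInHashList.contains(perceptualHash)
}

print(matchesKnownDatabase(Data([0x11, 0x22, 0x33, 0x44]))) // true
print(matchesKnownDatabase(Data([0x01, 0x02, 0x03, 0x04]))) // false
```

A static list plus a local check is something outsiders can inspect; a server-side list is not.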
I still see what I believe to be a misunderstanding of the "scanning" being done on the phone by this process, though that may be a nomenclature disagreement. If you're worried about the iPhone evaluating every picture you take, deciding whether it is CSAM, and alerting the authorities, that is not what is happening. And note that your phone has already been analyzing your photos on-device for years: that's what lets you search for, say, "dog" and have Photos present a list of your photos with dogs in them, and what lets Photos suggest other pics containing faces you've already identified elsewhere. That analysis, too, stays on the device. It would be really convenient if I could identify a face on my Mac and have that identification automatically transmitted to my phone (or vice versa), but that doesn't happen.
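For the curious, that kind of on-device analysis is available through public APIs. The sketch below uses Apple's Vision framework to classify an image locally; I'm not claiming Photos uses this exact API internally, only that this same class of on-device analysis is what powers search by content.

```swift
import Foundation
import Vision

// Classify an image entirely on the device, the same general technique
// that lets Photos answer a search for "dog". (Illustrative only;
// Photos' internal pipeline is Apple's own.)
func labels(forImageAt url: URL) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: url)
    try handler.perform([request]) // all computation happens locally
    // Keep only labels the model is reasonably confident about.
    return (request.results ?? [])
        .filter { $0.confidence > 0.5 }
        .map { $0.identifier }
}

// Usage: a "dog" search is then just a lookup over labels that were
// computed and stored on the device, never sent anywhere.
// let dogPhoto = try labels(forImageAt: photoURL).contains("dog")
```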
I feel that, with this proposed feature, Apple has done its best to keep things as privacy-preserving as possible while still providing some benefit to law enforcement that wasn't there before. I understand the point that "you don't get it, this isn't about CSAM, it's a bigger issue," and I appreciate it, but I do believe that context matters. In the same way, if Facebook announced some way in which they were going to improve your privacy, that alone wouldn't make me feel like Facebook had suddenly become a privacy-protecting platform.
(By the way, "I don't care, because I'm not doing anything wrong; Apple can look at it all" is a bad argument that completely misses the point of what people are worried about.)
You might be cynical about why Apple is doing this. You might think it's a publicity ploy to undercut criticism that iPhones are the phones of pedophiles. Or you might think Apple is doing this just for a marketing push: hey, look at us, we're against child porn! Or perhaps you take them at their word that they really ARE concerned, really ARE sensitive to what law enforcement has said about it, and really DO want to help. Whatever the reason, I don't believe Apple has ceased to be a privacy-protecting company. They have been public about this feature, and they have implemented it in a way where they can be monitored and called on the carpet if something seems amiss.
If you believe that they are lying to you, that their description of how this all works is a load of bull and they're tracking you anyway... well, if that's the case, then none of this is new, you wouldn't be swayed by any of these arguments, and you shouldn't be.