The issue really isn't what's on Apple's own servers, which they have every right to scan. And while rogue states could conceivably force extra hashes into the database, the bigger question is the iMessage scanning on children's accounts. How is that supposed to work? Someone takes a picture of the beach with two sandcastles and suddenly it gets flagged as sending sexual content? On-device scanning will be battery intensive, and if it happens in the cloud instead, that is an even bigger privacy problem than matching known hashes (a sketch of that distinction follows after the reply below).

1. It's not a scandal. They announced it publicly, and have now gone into significant detail on the implementation. The majority of the "backlash" (such as it is) comes down to people not understanding how it works, and to some pretty shoddy media reporting.
2. To answer your question, no. Obviously not.
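
To make the distinction above concrete, here is a minimal sketch of known-hash matching, assuming a toy blocklist and an ordinary cryptographic digest. Apple's actual system uses a perceptual hash (NeuralHash) plus a private set intersection protocol, so treat this only as an illustration of why hash matching can flag nothing except images already in the database:

```python
# Illustrative sketch only -- NOT Apple's NeuralHash/PSI pipeline.
# Point: known-hash matching is set membership against hashes of
# specific, previously catalogued images; a new beach photo matches nothing.

import hashlib

# Hypothetical blocklist. Real databases hold perceptual hashes of
# known illegal images, supplied by organizations like NCMEC.
KNOWN_HASHES = {
    hashlib.sha256(b"some previously catalogued image").hexdigest(),
}

def image_hash(image_bytes: bytes) -> str:
    # Stand-in digest. A real perceptual hash survives resizing and
    # re-encoding; a cryptographic hash like this one does not.
    return hashlib.sha256(image_bytes).hexdigest()

def is_flagged(image_bytes: bytes) -> bool:
    # Exact membership test: no classifier, no judgement of content.
    return image_hash(image_bytes) in KNOWN_HASHES

print(is_flagged(b"my brand-new photo of two sandcastles"))  # False
```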
People are in an uproar about the wrong thing. The nanny feature is the scary part, because it appears to use an AI classifier rather than hash matching, and as far as I know it runs on-device. While I'm an adult, does this mean every single picture will now be scanned, and possibly used for training? I've already heard of laughably bad results with my beach scenario above: English police reportedly had many false positives from exactly that kind of image. Sand. Deserts.
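
For a feel of why a content classifier can fail where hash matching cannot, here is a deliberately naive sketch. The skin-tone heuristic is a crude stand-in of my own invention, not whatever model Apple actually ships; real systems are neural networks, but sand/skin confusion is the same class of error the police false positives point at:

```python
# A deliberately naive "explicit content" heuristic -- not any real
# on-device model. It thresholds on the fraction of skin-toned pixels,
# and sand sits squarely in the same warm color range as skin.

def skin_like(r: int, g: int, b: int) -> bool:
    # Classic crude RGB rule for "skin-colored" pixels.
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def naive_skin_fraction(pixels: list[tuple[int, int, int]]) -> float:
    hits = sum(1 for r, g, b in pixels if skin_like(r, g, b))
    return hits / len(pixels)

# A desert photo: nearly every pixel a sandy tan like (210, 180, 140).
desert = [(210, 180, 140)] * 1000
print(naive_skin_fraction(desert))  # 1.0 -> flagged, though it's sand
```

The sandy tan passes every condition of the skin rule, so the whole desert image scores as "skin". A trained classifier is far better than this, but it still makes statistical judgements about novel images, which is exactly what a known-hash lookup never does.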