I highly doubt that. But if even half that many abandoned Apple, that would still be a pretty big chunk of business.
Apple's proposed technology works by continuously monitoring photos saved or shared on the user's iPhone, iPad, or Mac. One system detects if a certain number of objectionable photos is detected in iCloud storage and alerts the authorities. Another notifies a child's parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity.
Because both checks are performed on the user's device, they have the potential to bypass any end-to-end encryption that would otherwise safeguard the user's privacy.
Immediately after Apple's announcement, experts around the world sounded the alarm on how Apple's proposed measures could turn every iPhone into a device that is continuously scanning all photos and messages that pass through it in order to report any objectionable content to law enforcement, setting a precedent where our personal devices become a radical new tool for invasive surveillance, with little oversight to prevent eventual abuse and unreasonable expansion of the scope of surveillance.
The above is from the link you sent me. There's so much wrong with this text that I have to wonder whether even the experts know what's going on.
1. "
Apple's proposed technology works by continuously monitoring photos saved or shared on the user's iPhone, iPad, or Mac." No it's not. The vouchers are only generated and sent to Apple with the photo that is currently being uploaded. Your library isn't constantly scanned.
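To make that concrete, here's a minimal sketch of what "only at upload" means. This is not Apple's code: every name is invented, and SHA-256 stands in for the real perceptual hash (NeuralHash). The point is just that voucher creation is a step inside the upload path, not a background scanner.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    blinded_hash: bytes  # Apple can only "open" this on a database match
    payload: bytes       # encrypted image derivative, for later review

def perceptual_hash(photo: bytes) -> bytes:
    # Stand-in only: a real perceptual hash matches near-duplicate images,
    # which a cryptographic hash like SHA-256 deliberately does not.
    return hashlib.sha256(photo).digest()

def upload_to_icloud(photo: bytes) -> SafetyVoucher:
    # The only place a voucher is ever created: as part of the upload itself.
    voucher = SafetyVoucher(
        blinded_hash=perceptual_hash(photo),
        payload=b"<encrypted image derivative>",
    )
    # ... photo + voucher are transmitted to iCloud together here ...
    return voucher

photos_staying_on_device = [b"vacation.jpg", b"receipt.png"]
# Nothing ever touches these -- no upload, no voucher, no scan.
print(upload_to_icloud(b"photo being uploaded").blinded_hash.hex()[:16])
```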
2. "
Because both checks are performed on the user's device, they have the potential to bypass any end-to-end encryption that would otherwise safeguard the user's privacy." -- First of all, the Messages feature is controlled by the parent and the AI doesn't kick in until the photo has been downloaded to the device and the information stays on the device, so therefore Apple has no idea that anything has happened. Only the parent is notified of questionable content. Second of all, as far as I'm aware, photos in iCloud are not end to end encrypted. Apple holds the master key to everyone's photo library and they can unlock them at any time.
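Here's a rough sketch of that Messages flow as I understand it from Apple's announcement. The function names are mine, not Apple's API, the classifier is stubbed out, and the under-13 cutoff for parent notification is from the announced design. What matters is that every step runs locally and nothing is reported to Apple.

```python
def classify_nudity_on_device(photo: bytes) -> bool:
    return False  # stand-in for the on-device ML model

def on_photo_received(photo: bytes, *, parental_controls_on: bool,
                      child_is_under_13: bool) -> None:
    if not parental_controls_on:
        return  # feature is opt-in, enabled by the parent per child account
    if classify_nudity_on_device(photo):
        print("Photo blurred; child shown a warning before viewing.")
        if child_is_under_13:
            print("Parent notified through Family Sharing.")
    # Note what is absent: no call home to Apple anywhere in this path.

on_photo_received(b"incoming photo", parental_controls_on=True,
                  child_is_under_13=True)
```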
3. "
Immediately after Apple's announcement, experts around the world sounded the alarm on how Apple's proposed measures could turn every iPhone into a device that is continuously scanning all photos and messages that pass through it in order to report any objectionable content to law enforcement, setting a precedent where our personal devices become a radical new tool for invasive surveillance, with little oversight to prevent eventual abuse and unreasonable expansion of the scope of surveillance." -- Again, they're not being continuously scanned. Apple outlined exactly how the software works in their documents. If the experts took the time to read everything, they might learn a thing or 2.
Also, there's more than one check going on. The first check happens on your device and the second happens in the cloud, so in that sense your photos arguably have more privacy than before: the only photos Apple can unlock are the ones that matched the first hashing process, and those are then run through a second, independent perceptual hash to weed out false positives. If the photos still match known CSAM, Apple has human reviewers verify their contents and then contacts the appropriate organization to report it.
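Sketched out, the server side of that pipeline looks something like this. Every name here is hypothetical, the booleans stand in for the real cryptography (private set intersection plus threshold secret sharing), and the threshold figure is the rough number Apple later published.

```python
from dataclasses import dataclass

@dataclass
class Voucher:
    matched_on_device: bool  # stand-in for the blinded stage-1 hash match
    derivative: bytes        # stand-in for the encrypted image derivative

THRESHOLD = 30  # Apple's published figure was on the order of 30 matches

def second_hash_matches(derivative: bytes) -> bool:
    # Stage 2: an independent server-side perceptual hash that weeds out
    # false positives from the on-device hash. Stubbed here.
    return True

def review_account(vouchers: list) -> list:
    matches = [v for v in vouchers if v.matched_on_device]
    if len(matches) < THRESHOLD:
        return []  # below the threshold, Apple can decrypt nothing at all
    # Only matched vouchers become decryptable; the rest stay opaque.
    # Survivors of the second hash then go to human review, and only a
    # confirmed match is reported onward (to NCMEC in the US).
    return [v.derivative for v in matches
            if second_hash_matches(v.derivative)]

print(review_account([Voucher(True, b"img")] * 5))        # [] (under threshold)
print(len(review_account([Voucher(True, b"img")] * 30)))  # 30
```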