
Analog Kid

macrumors G3
Mar 4, 2003
9,364
12,620
YouTuber Louis Rossmann says macOS is scanning for dangerous memes and illegal material without consent.

So it begins again...

By "it begins again", I assume you mean Rossmann's self promoting.

I can already guess that this thread is going to be hundreds of comments long because people view this guy as some sort of folk hero, but can we just start by acknowledging that he makes a very comfortable living sitting in a comfy chair with a boom mike spouting conspiracy theories?

I really wish there was a way to watch and debunk something without that person getting credit for the view.

He promotes a previous video he did, then spends the first minute and a half proving he doesn't understand Apple's abandoned CSAM scanning. Then he reads someone else's content aloud (is it fair use to dictate the entire content of someone else's work on a channel you're making ad revenue from?). Then promotes more of his content about a totally unrelated right-to-repair statement from Deere (?!).

The article he read is scaremongering, and Rossmann is just trying to amplify it for personal gain. We know that macOS does local image scanning for text recognition, background removal, object and keyword detection, etc. He opened an image, macOS hit an online API, and nobody, not the author and not Rossmann, has shown any indication that personal data was transmitted to Apple servers. Yet here's Rossmann screaming that this is a backdoor CSAM scanner and will eventually start checking for "cartoons of the prophet".

And then a bunch of "Google flagged someone, so let's complain about Apple". He's added no new information to the discussion, done no sleuthing or research, just read something someone else said and let his confirmation bias run wild.

This is basically you quoting Rossmann quoting some other guy claiming that Apple is lying about turning off CSAM scanning and invading our privacy, without a lick of actual evidence beyond what could just as easily be a version check. Information-free from beginning to end.

Wake me up when someone finds something other than paranoia made manifest.
 
Last edited:

Sanlitun

macrumors 6502a
Original poster
Sep 19, 2014
560
580
127.0.0.1
By "it begins again", I assume you mean Rossmann's self promoting.

I really wish there was a way to watch and debunk something without that person getting credit for the view.
Yeah, fair enough, and it may not be credible. We have to trust Apple that no personal info is being sent, and either you do or you don't. More transparency is needed. It's being depicted as if the API access has just started, but perhaps it has been going on for some time and is trivial.

Here is the source that he is citing:

 
Last edited:
  • Like
Reactions: pdoherty

cthompson94

macrumors 6502a
Jan 10, 2022
812
1,164
SoCal
Yeah, fair enough, and it may not be credible. We have to trust Apple that no personal info is being sent, and either you do or you don't. More transparency is needed. It's being depicted as if the API access has just started, but perhaps it has been going on for some time and is trivial.

Here is the source that he is citing:

So I read that article, and honestly the person who created it didn't write a very good article. He just says that a program he used, Little Snitch, found "mediaanalysisd" running even without iCloud and all the other opt-in stuff turned on.

This sparked my curiosity, because "mediaanalysisd" is kind of vague. When you Google "mediaanalysisd", a Hacker News thread comes up where people explain a few things:

"mediaanalysisd has been around since forever, a process most associated with Photos’ facial recognition but later expanded to other things like text and subject recognition. There are non nefarious reasons for mediaanalysisd to connect to the internet, like to update its models, which are run locally."

Another poster mentioned that you will notice the same with "smoot.apple.com", which is used for Spotlight search, so if you ever use Spotlight then this same Jeffrey Paul person would have flagged that also (I am assuming he didn't use Spotlight search).

Another mentioned that mediaanalysisd could also be used to help detect duplicates, and that macOS Ventura exposes new object and scene recognition features for images, including background removal in Preview (analogous to "Copy Subject" in Photos but without requiring that app).

Jeffrey Paul mentioned he didn't use the Photos app, but in Ventura these features are built into the OS via Preview.

This YouTuber seems to just find stuff that fits his wheelhouse (in this case privacy) and dig up articles/blog posts to help generate views without backing things up. Even with the five-second Google search that turned up the Hacker News thread, I discovered they took down Jeffrey's post because it was unsupported and proven incorrect.
 

gilby101

macrumors 68030
Mar 17, 2010
2,979
1,643
Tasmania
mediaanalysisd is not linked to using iCloud. It is used by, for example, Visual Look Up, a feature of macOS since 12.x (not sure which x). So outgoing connections from mediaanalysisd do not prove any underhanded behaviour by Apple.

Read Howard Oakley's analysis "Is Apple checking images we view in the Finder?", where he searches the unified log for indications of what is happening. No evidence of CSAM scanning.
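For anyone who wants to look at the unified log themselves, it can be queried from Terminal. A minimal sketch, assuming the process name is unchanged on your macOS version (the predicate is illustrative):

# Show the last hour of log entries written by mediaanalysisd
log show --last 1h --info --predicate 'process == "mediaanalysisd"'

If nothing in there points to image content being uploaded, that's consistent with Oakley's finding.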
 

gilby101

macrumors 68030
Mar 17, 2010
2,979
1,643
Tasmania
If you look, Apple has said that limited information will be sent to Apple in various circumstances.
From Settings > Siri & Spotlight > Siri Suggestions & Privacy > About Siri & Privacy > Siri Suggestions:
"When you use Siri Suggestions, Look Up or Visual Look Up, when you type in Search, Safari search or #images search in Messages, or when you invoke Spotlight, limited information will be sent to Apple to provide up-to-date suggestions. Any information sent to Apple does not identify you, and is associated with a 15-minute random, rotating device-generated identifier. This information may include location, topics of interest (for example, cooking or cricket), your search queries, including visual search queries, contextual information related to your search queries, suggestions you have selected, apps you use and related device usage data to Apple. This information does not include search results that show files or content on your device. If you subscribe to music or video subscription services, the names of these services and the type of subscription may be sent to Apple. Your account name, number and password will not be sent to Apple."
That includes Visual Look Up, which involves mediaanalysisd.
There is more to the Data & Privacy statement.

Certainly, Apple does make it clear (if not obvious) that services like mediaanalysisd will connect to Apple as part of their normal operation.

If this is of concern, turn off Siri Suggestions. And accept the downside to doing so.

It is incredibly hard to find all the relevant privacy stuff from Apple. There is a great deal in various spots in Apple's web site. And, in the end, you just have to believe that Apple abides by it.

On the other hand, if I had child exploitation material I would not keep it on a Mac.
 

galad

macrumors 6502a
Apr 22, 2022
615
495
Then reverse engineer it and look at what it's actually doing. Making exceptional claims without any proof has gotten old. Who cares? Wake me up when someone has actually analysed what's going on.
 

bogdanw

macrumors 603
Mar 10, 2009
6,152
3,048
I don’t use Photos, I’ve disabled mediaanalysisd
launchctl bootout gui/501/com.apple.mediaanalysisd
launchctl disable gui/501/com.apple.mediaanalysisd
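A caveat for anyone copying those commands: 501 is only the default UID of the first user account on the machine. A hedged variant that looks up your own UID first:

# 501 is just the first account's UID; use your own
uid=$(id -u)
# Remove the agent from your running GUI session
launchctl bootout gui/$uid/com.apple.mediaanalysisd
# Stop launchd from starting it again at the next login
launchctl disable gui/$uid/com.apple.mediaanalysisd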
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,127
2,707
"mediaanalysisd has been around since forever, a process most associated with Photos’ facial recognition but later expanded to other things like text and subject recognition. There are non nefarious reasons for mediaanalysisd to connect to the internet, like to update its models, which are run locally."
This! (and for the Rossmann crowd: !!!1111!1) The guy should stick to doing bad soldering jobs and not talk about things he doesn't understand. Isn't he still proud he dropped out of college? Well, you don't need a PhD or any other degree to understand that things need updates, such as the models used for inference in local search.

If he bothered to analyse what's actually being transferred to and from Apple, maybe he would see that and, God forbid, understand.

That being said, I just caught Windows spying on me: it wanted to connect to update.microsoft.com. Uh-oh, Microsoft is spying on us! 🤣 (Yes, that was a joke. I don't know if that's the correct URL, but Windows does the same.)
 
Last edited:

Danfango

macrumors 65816
Jan 4, 2022
1,294
5,779
London, UK
Oh Rossmann. My arch enemy. Had it out with him a couple of times over the years in other places.

The guy is a predatory conman running a rip-off repair centre who has managed to somehow create a YouTube thought bubble to attract more gullible customers. Check out his public reviews if you want the real picture: low-quality repairs, constant returns, problems galore. That is the problem with the entire third-party repair industry, and with the right-to-repair push he promotes. I won't even go too far down the technical route of discrediting him, but any board-level repair the guy does should be for data recovery only, and the machine should be discarded after that. We're past the age of being able to do reliable repairs. They might look like they work, but the boards are extremely complex impedance-controlled networks which have gone through in-system testing. No one but the manufacturer has the ability to rerun and qualify the boards after a repair now, unless you want to shell out for $1m+ bits of kit. Replacing lumps of stuff is fine.

Of course, now that he's got an audience, he diverges into topics he has little to zero understanding of. This is a common tactic used by various fringe communities to maintain viewership. It'll probably be COVID and 5G garbage next 🤦🏻‍♂️

Anyway, enough ranting. I don't believe this. There isn't evidence to support it:

mediaanalysisd does hit the network. Why wouldn't it? The thing is an on-device ML processing engine for object identification (you know, when you search for "cows" in your photos library). It has to go to the network to get identification models and probably reads photo metadata. Until I see actual Wireshark analysis of what is happening when it hits an API endpoint, this is entirely speculation and probably ********. Little Snitch throwing up a warning that it connected to the network doesn't tell us anything other than that it connected, not why or what it was doing.
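For what it's worth, anyone can start that analysis with tools already on a Mac. A rough sketch (the interface name is an assumption about your setup, and the hostname is the endpoint named in the edit below):

# en0 is typically the built-in Wi-Fi interface; adjust to taste
sudo tcpdump -i en0 -w mediaanalysisd.pcap host api.smoot.apple.com

Open the .pcap in Wireshark afterwards. Keep in mind the payloads are TLS-encrypted, so you'll see endpoints and traffic volume, not content, which is exactly why a Little Snitch alert on its own proves nothing.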

Throwing Occam's Razor at it:

1. If this was an actual issue, really high-profile security professionals, not conmen and bloggers, would have been all over it like a seagull on chips already.

2. Apple are clever enough not to risk the reputational damage of this statement being inconsistent.

3. They are offering end-to-end encryption of this data in iCloud anyway.

Edit:

So I did some research. api.smoot.apple.com is the Spotlight search completion service, so it's probably being used to collect synonyms / do text processing to build up a search index of some sort.
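For anyone repeating that research, the five-minute Terminal version (output will vary; the grep pattern is just illustrative):

# Where does the endpoint resolve?
nslookup api.smoot.apple.com
# Which local processes currently hold connections to it?
sudo lsof -nP -i | grep -i smoot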

The guy who posted the blog is a moron. The guy who quoted it in a YT video is a moron. Everyone following them are morons too. Ugh.
 
Last edited:

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
He promotes
He promotes the right to repair, which is something that Apple steadfastly fought against. Even their repair kit is designed and priced in such a way that makes a repair by Apple easier and cheaper than using that kit.

I didn't watch the video, but overall he knows what he's talking about when it comes to what Apple and other manufacturers do to prevent consumers from repairing their own products.

Not everything he says is gospel, but he does make a lot of good points in many of his videos.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,127
2,707
The guy is a predatory conman running a rip-off repair centre who has managed to somehow create a YouTube thought bubble to attract more gullible customers.
His followers are usually Apple-haters; it's easy to get enough viewers following his rants that way.
So I did some research. api.smoot.apple.com is the Spotlight search completion service, so it's probably being used to collect synonyms / do text processing to build up a search index of some sort.
Yes, it is for searching all sorts of things on the local system and making intelligent matches for different files/documents. None of the personal data is transferred to Apple. Again, MS is doing this as well. The security research group at the other end of the hallway loves this stuff, and they would have gone nuts by now. I can hear the crickets chirping when I open my office door.
 

Jason J. Schneider

macrumors member
Oct 14, 2022
50
110
Athina, Attica
It really makes me sad to see Louis not doing proper homework.

First of all, that Jeffrey Paul article is full of assumptions that this particular process, mediaanalysisd, is sending actual image information to an Apple API.
This process has been part of macOS for about 10 years now. It is also responsible for a lot of new features that came with the latest updates, doing image recognition: things like recognising text in a photo, or determining whether the subject is a flower or an animal. This is done so that Spotlight search can return images as search results.
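As an aside, you can query the Spotlight index that this feeds from Terminal too; a small illustrative sketch (the search term is made up, and whether recognised image content matches depends on your OS version):

# Ask Spotlight for images matching a word
mdfind "kind:image receipt" | head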

mediaanalysisd is part of Visual Look Up (VLU), a system Apple uses to recognise objects in photos, like buildings, animals, paintings and more, and give users information about the photo. VLU works by computing a neural hash locally that is used to distinguish photos/videos by the objects within them. During VLU, the device sends that neural hash to Apple, which computes what the hash represents and sends the answer back to the user. I'm assuming the database used to compute the hash's meaning is too massive to be stored locally and wouldn't be able to stay updated with new information. I really didn't like the article this video is based on, because it makes claims without any real evidence or research. mediaanalysisd is a process that has been around for years, and it's very easy to look up what it's used for and how to disable it.

Also, it looks like the Wired article is correct in stating that Apple has chosen not to move forward with CSAM detection at all, based on a statement given in December 2022. Apple is, however, still planning to roll out a form of detection as a parental control in the Messages app, but only as an opt-in. Please read those articles carefully. 9to5Mac and MacRumors also cited the same sources inside Apple on this matter, and all of them said the same things.

Please refer to Howard Oakley, who documents this more deeply: https://eclecticlight.co/2023/01/18/is-apple-checking-images-we-view-in-the-finder/
 
Last edited:

MilaM

macrumors 65816
Nov 7, 2017
1,204
2,687
First of all, that Jeffrey Paul article is full of assumptions that this particular process, mediaanalysisd, is sending actual image information to an Apple API.

Ok, but do we know what mediaanalysisd is sending over the wire? I wasn't even aware of this happening. And to be honest I also did not expect this to happen as a naive user of macOS.
 

Jason J. Schneider

macrumors member
Oct 14, 2022
50
110
Athina, Attica
Ok, but do we know what mediaanalysisd is sending over the wire? I wasn't even aware of this happening. And to be honest I also did not expect this to happen as a naive user of macOS.

Here is a two-part article from the same Howard Oakley that does a deep dive inside VLU and describes in detail what mediaanalysisd actually does:

 

russell_314

macrumors 604
Feb 10, 2019
6,734
10,338
USA
On one hand, he's not really credible when it comes to Apple information because he has a huge grudge against Apple, so if he can say something to hurt Apple, he's going to. Also, he has motivation to maintain his group of Apple-haters as an audience. Maybe he noticed YouTube audience engagement was going down, so he needed something big?

On the other hand, Apple said they were going to do this, and to my knowledge never said they changed their mind, so I would expect my files to be scanned. Does anyone have a press release from Apple saying that this was not going to happen? If they announced it and are now doing it, then it's not really without our knowledge, even though they're not giving a specific notification on your device. Most people don't pay attention to the tech press, so one could argue that Apple should notify the user on the device, letting them know what is going on with their data.

Edit: I did some research. Many media articles erroneously reported that Apple had changed their mind, and I think that gave a lot of people the impression that it was true. It was not, and has never been true from everything I can find. Here's a quote from Apple's press release. If you notice, nothing in there says they are not going to implement this program. It just says they're going to take additional time before releasing these critically important child safety features, and apparently they did, because this was supposed to roll out months ago. The additional time is up, and now the program is live.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
 
Last edited:

MilaM

macrumors 65816
Nov 7, 2017
1,204
2,687
I have read both articles and have also disabled Visual Look Up, as far as that's possible.

But apparently mediaanalysisd will connect to Apple's servers even when Visual Look Up is disabled.
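If you have also run the launchctl commands posted earlier, one quick sanity check of whether the agent is even loaded in your session (a sketch; the service label is taken from that earlier post):

# No output from grep means the agent isn't loaded in your GUI session
launchctl print gui/$(id -u) | grep mediaanalysisd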
 

Jason J. Schneider

macrumors member
Oct 14, 2022
50
110
Athina, Attica
Does anyone have a press release from Apple saying that this was not going to happen?

I don't think there was a press release, just a statement offered to a couple of news agencies, originating from Wired:

Here is the article:

And this is what that statement says:

After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021. We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.

So they will not move forward with CSAM detection in iCloud Photos. The iMessage Child Protection feature is a completely different thing.
 
  • Like
Reactions: russell_314

russell_314

macrumors 604
Feb 10, 2019
6,734
10,338
USA
I don't think there was a press release, just a statement offered to a couple of news agencies, originating from Wired:

Here is the article:

And this is what that statement says:



So they will not move forward with CSAM detection in iCloud Photos. The iMessage Child Protection feature is a completely different thing.
Well, either those statements were misquoted by the reporter, or Apple changed their mind and didn't tell anyone, because it's up and running now apparently. It's easy to detect network traffic on macOS, but I don't think it's possible on iOS. My expectation is that it's active on all Apple devices.
 

Jason J. Schneider

macrumors member
Oct 14, 2022
50
110
Athina, Attica
it’s up and running now apparently

Again, VLU is not CSAM. And what VLU sends to Apple is not at all the same thing as what a CSAM implementation would have sent. And again:

  • VLU is a service that helps identify text and objects in photos so you can copy/paste from them easily (I suspect it now works for videos too, since iOS 16). It can be disabled and is part of Siri & Search.
  • The CSAM system would have scanned all photos in the Photos Library offline, created hashes, and verified those hashes against a database on Apple's side, if and only if iCloud Photos was enabled.
  • iMessage Child Protection is a system that you can set up for your child's iPhone: it can scan and blur potentially unwanted material received from unknown senders before showing it to your kid inside iMessage, and it informs you, as the parent, of that content. This one I will turn on for my kid when he has an iPhone.
So, again, Louis made a huge mistake here by not doing his homework before hitting the record button. As you can see, these are different systems, with different goals.
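On the hashing point in the second bullet, a crude shell analogy of "hash locally, compare against a database". This is an analogy only: the abandoned proposal used a perceptual NeuralHash plus a private set intersection protocol, which exact-match hashing does not replicate, and known_bad_hashes.txt is a made-up file name:

# Hash every photo, then intersect with a (hypothetical, pre-sorted) hash list
shasum -a 256 ~/Pictures/*.jpg | awk '{print $1}' | sort > local_hashes.txt
comm -12 local_hashes.txt known_bad_hashes.txt

The cryptography in the real design meant neither side would have learned anything about non-matching photos.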
 
  • Like
Reactions: Jumpthesnark

russell_314

macrumors 604
Feb 10, 2019
6,734
10,338
USA
Again, VLU is not CSAM. And what VLU sends to Apple is not at all the same thing as what a CSAM implementation would have sent. And again:

  • VLU is a service that helps identify text and objects in photos so you can copy/paste from them easily (I suspect it now works for videos too, since iOS 16). It can be disabled and is part of Siri & Search.
  • The CSAM system would have scanned all photos in the Photos Library offline, created hashes, and verified those hashes against a database on Apple's side, if and only if iCloud Photos was enabled.
  • iMessage Child Protection is a system that you can set up for your child's iPhone: it can scan and blur potentially unwanted material received from unknown senders before showing it to your kid inside iMessage, and it informs you, as the parent, of that content. This one I will turn on for my kid when he has an iPhone.
So, again, Louis made a huge mistake here by not doing his homework before hitting the record button. As you can see, these are different systems, with different goals.
How do you know this is or isn't CSAM? The only information we have is that Apple is sending photo data back to their servers, or some server. We have no knowledge, other than maybe a few rare statements from Apple, as to what they are doing with this data.


This person is not using any type of iMessage or iCloud, so I don't see why it would be that. From my understanding, the message protection scans the picture as it's being sent in the message, then notifies the recipient that it's inappropriate. Whatever this is, it's going on in the OS, not even in Apple Photos. This guy isn't using Apple Photos or iMessage.

That's the biggest problem with any kind of spying or data collection: you don't know what the corporation or government is doing with it. They are not open, not letting people in to see what's going on. To my knowledge, there's no independent audit of this data collection. Without some independent review we can speculate, but only Apple and whoever they're working with know.

I’m personally not going to change much. I may be a little bit more cautious about saving political memes but that’s just my choice. Everyone has to make their own decisions based on what’s going on.

It’s never good when the company that brags about privacy is spying on its users. I love Apple products and will continue to buy them, but I will call them out if they’re doing something terrible. I have a certain amount of brand loyalty but that’s based on their products which I really like. I don’t have any financial or employment connections to Apple, so I don’t have to be 100% loyal to them.

I’m personally not going to stop using Apple products because of this. I suspect this kind of spying is going on with other brands. If it was Google, I would expect it but not from Apple.
 