I had a short play with the new features, and I am confused by the AI claims that Apple has made.
For example, Stem Splitter: it seems to work well enough, but if it is driven by an AI engine it should be better at telling the difference between a voice and a saxophone (for example), yet it confuses them very easily. Maybe I was expecting too much, but I was hoping for cleaner separation.
Session Players: again, given the AI claim, I was expecting them to be able to follow an existing audio track, recognise the chords and rhythm, and play along, you know... in an Artificially Intelligent way. Maybe I just don't know how to use them, but as far as I can see they still require a manually entered chord track. I hope I am just missing something.
Chroma Glow: at first listen it seems interesting, but I will need to understand what it is really doing to the sound. Early days.
It seems like a solid update, and any addition to the sound library is always welcome, but AI? I don't think so.
We have come to expect a lot from AI, and I don't think this is it.