Just noticed something fairly disappointing, if it means what I think it means. Siri was supposed to get a new neural voice in iOS 13 that sounds way more natural. Installed it on my iPhone XR today, and I think Apple only implemented the new voice for canned responses: not for turn-by-turn directions, and not for reading your text messages to you when you’re driving and say “read my messages.” Those two are the main voice improvements I was hoping for, but it seems Siri switches back to the robot voice for them! Ugh!
Try this experiment: ask Siri to tell you the temperature in Manhattan and listen to how nice it sounds. Now, with the phone locked, say, “Hey Siri, send (whoever) a message,” and when she asks what you want to say, dictate exactly what Siri just told you, e.g., “It is currently 67 degrees in Manhattan, New York.” Then listen to her read that back to you before she sends it. The read-back is robotic and crappy, and so is the playback of messages from other people.
So... did they really just record a few canned answer patterns and numbers with the new neural TTS, leaving us stuck with the old Siri voice for reading anything dynamic?
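For anyone who wants to poke at this from the developer side, here's a minimal Swift sketch using the public AVSpeechSynthesizer API. To be clear, this is an assumption-laden comparison, not a test of Siri itself: Apple doesn't expose Siri's own voice through this API, so all it can do is list the system TTS voices with their quality tier and read a dynamic string aloud with whatever voice is installed, so you can compare that against what Siri reads back.

import AVFoundation

// List the English TTS voices the system exposes to apps, with their
// quality tier. (Public API only; Siri's neural voice is not available
// to third-party code.)
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language.hasPrefix("en") {
    let quality = voice.quality == .enhanced ? "enhanced" : "default"
    print("\(voice.name) [\(voice.language)] - \(quality)")
}

// Read a dynamic string aloud with an en-US system voice, to compare
// against Siri's read-back of the same sentence.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "It is currently 67 degrees in Manhattan, New York.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
synthesizer.speak(utterance)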