
Schtibbie

macrumors 6502
Original poster
Jan 13, 2007
Just noticed something fairly disappointing, if it means what I think it means. Siri was supposed to get a new neural voice in iOS 13 that sounds much more natural. I installed it on my iPhone XR today, and it seems Apple only implemented the new voice for canned responses: not for turn-by-turn Siri directions, and not for reading your text messages aloud when you're driving and say "read my messages." Those two are the main voice improvements I was hoping for, but Siri switches back to the robotic voice for both of them. Ugh!

Try this experiment: ask Siri the temperature in Manhattan and listen to how nice it sounds. Now, with the phone locked, say, "Hey Siri, send (whoever) a message." When she asks what you want to say, repeat exactly what Siri just told you, e.g., "It is currently 67 degrees in Manhattan, New York." Now listen to her read that back to you before she sends it. The read-back is robotic and crappy, and so are messages read to you from other people.

So... did they really just record a few canned answer patterns and numbers using neural TTS and we’re stuck with old Siri for reading anything dynamic?
 


Siri, like Maps, is a server-side update. I'm not sure they said it would be all new on day one. I'm sure it will roll out across the platform (and Maps) now that iOS 13 is public. I don't think we had any of the new Siri voice while on the beta.
 

Well, I hope that's the case. However, it does seem like there's a Siri they DID update (the voice served from their servers when you ask an informational question) and a different, old Siri voice installed on the phone itself, the one you have to download when you choose a new voice. I wouldn't be surprised if, for technical reasons (hardware power?), they aren't actually going to run neural Siri for anything that uses on-device synthesis.
 