Apple had a four-year head start with Siri. It lost ground and failed, and it continues to fail harder ten years later.

I'd be much more optimistic and encouraging if Apple were a small company just starting out, or working on that one breakthrough product. But no, we continue to pay a hefty premium on thousands of dollars of Apple products, only to get subpar service in return.

It's about time Google and Microsoft slapped them around a bit so that Apple can get a reality check.
Amazon is most likely to be the Apple dic \b \b \b face slapper.
 
Apple will be forced to either adopt and integrate an existing AI chatbot into all of their OSes or overhaul the underlying technology of Siri. Whichever way Cook decides to get Apple out of the hole they dug for themselves, reputational damage is inevitable.

How will Apple integrate all the advances made in AI in the past few years into its closed ecosystem?

It doesn't really look like Cook has a roadmap.

We should give Apple credit for its attention to AI and machine learning. Object and word recognition is built seamlessly throughout iOS and macOS, and it's very good. You don't know you're using AI, and that's how it should be: it should just fit in.

But yes, Tim Cook has absolutely neglected Siri and didn't see generative AI coming, and that's a scary thought if you care about Apple. What else is he missing that a visionary like Steve Jobs would have started moving towards?

Either Apple has been rebuilding Siri from the ground up already and it's just not something that would leak, or they've ignored it and that's a problem. They're not going to build an openAI/Bing/Bard competitor tomorrow and this technology is moving extremely quickly.

The solution will be for Apple to license openAI. Anyone using the openAI Siri Shortcut integration knows how well it works. Apple just needs to pay up and get this integrated into iOS, iPadOS, macOS and watchOS in a hurry. Then maybe they can go about making their own solution available in a few years.
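For anyone curious, the Shortcut integration mentioned above boils down to an ordinary HTTPS POST to OpenAI's API. A minimal Python sketch of what goes over the wire; the endpoint and model name are from OpenAI's public chat API, while `build_request` and the example prompt are invented for illustration:

```python
import json

# OpenAI's public chat endpoint; the helper and prompt below are invented
# for illustration, not the Shortcut's actual code.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, api_key):
    """Build the headers and JSON body a Shortcut would POST to the API."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_request("Summarize my last three emails", "sk-...")
print(json.dumps(body, indent=2))
```

The Shortcut just wraps this request/response cycle around Siri's voice input, which is why it works today without any help from Apple.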
 
ChatGPT isn’t a voice assistant. How do you propose an extremely effective *predictive text* engine does that?


I’m starting to get the bigger picture here, everyone knows chatGPT is the future but no one seems to understand what it actually *is*.
I don't think there is any point in trying to explain until the hype wears off. I tried, here and in a few other places, but nope: people just want to believe that this is Skynet or something.

Not saying it's completely useless and that no one can ever benefit from it. It can be very useful for some people.

But it won't do stuff on your computer for you or fetch you real-time data. You know, 90 percent of the stuff that personal assistants do.
 
It's not just randomly connecting words to sound intelligent. It actually delivers useful content if you know what to prompt it with. You seem to be truly missing the potential of this technology. Whatever method it's using to generate text, it's already saved the company I work for hundreds of hours parsing information. It's been able to scour the web for information about our products and compile it into something useful. It's been able to summarize long articles from our blog into newsletter tidbits that lead our customers to those articles. We don't just post what it outputs; we read it, and pretty much every time it's been bang on with the information and accuracy.

One example where it's saved us hours of work: I noticed that our founder has made a dozen appearances on podcasts, which could be useful for written content. I had one AI listen to all the podcasts and transcribe them. This step alone saved us days of work. Then I fed those transcriptions to the AI and asked it to summarize the key points into an article-length piece. It did so perfectly. Again, days' worth of work. Finally, I had it look at all those transcriptions and asked it to give me a dozen notable quotes. Not just random quotes: it found some pretty incredible quotes from our founder, each one something we'll use in social posts and ads.

Mind you, we just started using openAI 2 weeks ago and it's already given our team members back valuable time to work on other projects, time that we were desperate for because we have a small team. This thing is going to absolutely change the world. Anyone not seeing that is going to have to catch up.
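The workflow described above (transcribe, summarize, mine quotes) is easy to sketch as a pipeline. Rough Python, where `llm` is a stand-in for whatever text-generation API you use; every name here is hypothetical:

```python
def content_pipeline(transcripts, llm):
    """Summarize each transcript into an article, then mine notable quotes.

    `llm` stands in for whatever text-generation API you use: it takes a
    prompt string and returns generated text.
    """
    articles = [llm("Summarize the key points of this transcript into an "
                    "article:\n" + t) for t in transcripts]
    quotes = llm("Pick a dozen notable quotes from these transcripts:\n"
                 + "\n---\n".join(transcripts))
    return articles, quotes

# Toy stand-in model so the sketch runs without an API key.
fake_llm = lambda prompt: "[generated from %d chars of prompt]" % len(prompt)

articles, quotes = content_pipeline(["transcript one", "transcript two"], fake_llm)
print(articles)
```

With a real API behind `llm`, each stage is one prompt; the human review step (reading the output before posting) stays outside the pipeline, as the post says.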
No, it’s a VERY good text generator based off feeding it the appropriate prompt, exactly what you said.

It is NOT intelligent, at all, about the context of what it’s outputting. So it’s a very sophisticated autofill: very effective for certain types of work, like the example you’ve given. It has the capacity today to replace millions of jobs that are based on email and promotional copy.

That isn’t, as people seem to think here without saying it explicitly, an actual AGI.

For those interested, the “father of modern linguistics” Noam Chomsky has his take on these technologies in the NYT (attached).
 

Attachments

  • Chomsky on ChatGPT.pdf (424.7 KB)
Siri's always been a gimmick, it's good for three or four things. Shortcuts help a little.

Apple's still sitting pretty with Jobs' legacy designs, but the flagship products basically sail themselves: iPhone, iPad, MacBook. The iMac and Mac Studio are nice, but Apple seems to be fumbling its product differentiation on the desktop.

They don't need new products. But for the existing ones, there's no innovation. No creativity. In some cases, not even effort. The Touch Bar and Stage Manager were misfires. 3D Touch (now gone) and LiDAR didn't add any real value to the iPhone. Siri and Safari are dead in the water.

Dynamic Island is genuinely interesting but under-utilized, and they're headed for under-display sensors anyway.

They're stagnating.
Except for Apple Watch, Cook has really not delivered anything new since he took the helm.

Most of the projects he started have either been delayed or cancelled: Apple Car, modem, AR glasses, and the transition to Apple Silicon.

The lack of progress on Siri highlights his short-sightedness and inability to understand the tech landscape. Apple has gotten too confident that it can use its deep pockets to catch up and close the tech gap with its competitors. One would think that the many snags Apple has hit during the development of its own EV would have taught Cook a lesson, but apparently not.

Even Titan can fall. Look how much noise Apple is making just for introducing a new colour. With nothing on the horizon to supplant the iPhone, the sense of desperation is becoming very palpable.
 
Apple should go full-speed ahead with Siri development, especially now that commercial AI is here. The lack of advancement in this space over the past decade is unforgivable.

Hey, cut them some slack!

They were busy perfecting just the right shade of "Wow, I'm Really Dehydrated" Yellow.

And, when they weren't doing that, they were busy making up excuses for butterfly keyboard failures.

"Can't innovate my a$$"
 
They really need to rebuild Siri. In fact, they should have had a team working on doing that for years.

The guy who ran Siri basically took the wrong approach; he was going for what was state-of-the-art AI at the time.

It's not really about privacy, it's about using a design that basically didn't pan out.

By contrast, Alexa didn't do AI, it did keyword recognition and used AI for voice transcription. That's why Alexa actually mostly works; it's not really AI, but for the AWS use cases it basically behaved as an AI.
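To illustrate the distinction: keyword recognition of the kind described is basically pattern matching on the transcript. A toy sketch in Python; the intent names and patterns are invented, not Alexa's actual ones:

```python
# Toy keyword-based intent matcher: speech is transcribed first, then the
# transcript is matched against fixed keyword patterns. No language model
# is involved at this stage.
INTENTS = {
    ("play", "music"): "PlayMusicIntent",
    ("set", "timer"): "SetTimerIntent",
    ("weather",): "GetWeatherIntent",
}

def match_intent(transcript):
    words = set(transcript.lower().split())
    for keywords, intent in INTENTS.items():
        if all(k in words for k in keywords):
            return intent
    return "FallbackIntent"

print(match_intent("please set a timer for ten minutes"))  # SetTimerIntent
```

The upside is predictability: a matched intent always triggers the same action. The downside is exactly what the thread complains about with Siri, since anything phrased outside the patterns falls through to the fallback.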

Even ChatGPT isn't AI, as AI was thought of a few years ago. ChatGPT technically is a big statistical model of language, which is not AI as it's been understood. That's not to say it doesn't work: it's actually "smarter" than a lot of human beings (using a broad concept of "smarter").
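For a sense of what "a big statistical model of language" means, here is the idea shrunk to a toy: count which word follows which in some text, then sample. GPT uses a huge neural network over vastly more data, but the "predict the next token from statistics of the training text" framing is the same:

```python
import random
from collections import Counter, defaultdict

# A miniature statistical language model: count which word follows which,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word, rng=random.Random(0)):
    candidates = follows[word]
    return rng.choices(list(candidates), weights=list(candidates.values()))[0]

print(next_word("the"))  # one of: cat, mat, fish
```

Everything the toy "knows" is in the counts table; nothing in it understands what a cat is, which is the point being made about scale versus understanding.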

That said, nobody except the pedants care what AI really means. What people want is results, and they don't care whether it's a bunch of low-cost workers doing "AI" or a computer.
Siri does a few things reliably for me: playing music, making phone calls, smart home commands, activating shortcuts, setting timers and alarms. Google Assistant is better at fielding basic questions. Alexa skills like 'place a Pizza Hut order' are cool in theory, Grubhub integration has a lot of potential as a concept. And ChatGPT is head and shoulders above in language comprehension, response, knowledge reproduction, and reasoning.

If you could get all of that into one digital assistant you'd be at the head of the pack.

In the meantime, the best way for Apple to "catch up" would be emotion recognition AI. Have Siri detect and track how you're feeling from voice samples, notice patterns, and respond with automations or suggestions. 'You sound stressed, would you like to take three minutes to meditate', 'I'm glad you're in a good mood, it's 11 a.m., I recommend channeling that energy into work before lunch.' "Focus mode" on steroids. All optional, obviously. Schedule "check-ins", or just ask "Siri, how do I sound?"

The capacity exists, it just hasn't been implemented anywhere. Apple has the reputation for privacy, and they're already easing people into tracking this kind of data with Apple Health. It's also a narrow, modular feature.
 
I just asked ChatGPT how the Boston Bruins were doing this season. I got this as a response:

As an AI language model, I do not have access to real-time information and my training data only goes up to September 2021. However, I can suggest checking out the official NHL website or other sports news outlets for the latest updates on the Boston Bruins' performance in the current season.
Everyone thinks that ChatGPT is magic but it is just a very sophisticated language model that was trained on what is now stale data.

I asked Siri the same question and got a response that the Bruins are in first place in the Atlantic division with a record of 50-11-5 and 105 points.
[Screenshot: Siri's response with the Bruins' record]
 
Why are we hellbent on insisting that the dystopian hellscape 40+ years of sci-fi stories predicted HAS to come true?

Microsoft got in hot water years back for “emotion detecting” cameras at sports stadiums that the public wasn’t aware of.

There’s a Silicon Valley freak selling a service “using AI” to detect whether the person calling 911 is actually the person who committed the crime based on “emotion detecting AI”.

I don’t want any of these things handed over as the excuse for authoritarianism. “It wasn’t us that locked you away, the AI said with 99.999998% that it was you”.

Emotional detection will very rapidly be turned to emotional manipulation to be monetized, and much worse.
 

I'm not suggesting that openAI is self aware. Intelligence isn't self awareness; it's applying creative insight to available information and coming up with feedback that isn't immediately obvious, based on that creative thinking.

I taught openAI about a fictitious bird that doesn't know it can fly, but when it's confronted by a predator, it instinctively flies away only to discover that it can't fly and plunges to its death. At first, the AI kept insisting that this bird wasn't real and there are no records of it. When I told it that I was an ornithologist studying this bird, it began to trust my new input. In the little information I gave it, it put together risk factors for this bird completely unprompted and supplied very believable theories on why the bird hadn't been discovered. It also expressed something akin to emotion or compassion for the bird – I'm not suggesting it has emotion, only that it put together what we see as emotion and compassion in a way that isn't just a string of associated words. It wasn't just putting these words together, it was making creative observations. That's very very close to what we consider intelligence.

[Screenshot: the ChatGPT conversation about the fictitious bird]
 
This.

Apple is prideful in that it wants to proclaim that it makes all of its own software (whilst hiding that it uses cloud storage and compute power from its tech titan rivals), but I totally agree.

1. Licence ChatGPT at virtually any price.

Because ChatGPT is obviously a game-changing technology, and if Apple doesn't have an answer for GPT-4 by next year (at the very latest), it's in trouble.

All of this swiping and pressing is going to go away when we start to have our own personal EA's who sort our lives out for us.

And I don't have any faith that Apple has anything close to GPT-4 up its sleeve. AI/ML at Apple is obviously centred on photography and video imagery, presumably all aligned with the work on its upcoming VR/AR product.

2. Start a priority-red project to integrate ChatGPT. Use a private instance for Apple, etc., so users know that it's privacy-preserving, as you say.

3. Create a robust set of guardrails and limits.

This will deliberately limit Chat GPT. However, Apple just needs to have its existing product vision for Siri, but for it to actually be useful and work.

4. Integrate it into the existing Siri voice recognition and speech generation systems.

5. Then start to work out a way for it to know about who you are from the data on your phone and iCloud and have it remember interactions i.e. a persistent memory.

From what I've experienced of GPT-4, I can't see why this wouldn't be possible. A persistent memory for your interactions with ChatGPT would obviously cost a fortune in storage, so I suspect that's the only reason we're not seeing it in ChatGPT right now.


Unfortunately, the plan is foiled at step 1, because "Microsoft teams up with OpenAI to exclusively license GPT-3 language model". And GPT-4 as well.




So, perhaps Apple could call OpenAI up and negotiate for GPT-5, 6, 7, 8... But 3 and 4 are not going to happen. Unless Apple and Microsoft pen a deal? Would Apple want that? Would MS want that?
 

Yeah, it does seem like every tech news story these days makes me wonder if all these developers somehow never watched 2001, War Games, Terminator, Blade Runner.. and on and on..
 

Here's another one that's flippin mindblowing!

I was a photographer for the Toronto Film Festival. I came across Joaquin Phoenix alone in a press junket room. He seemed sick, and didn't talk to anyone who walked by. He just seemed to be fidgeting there by himself. I left the room, went downstairs for a coffee and as I came up, I heard him laughing by himself. He was erratic and seemed to have lost his mind.

I asked openAI what it thought was wrong with him. It came up with the correct answer:

[Screenshot: ChatGPT's response about Joaquin Phoenix]


This isn't just word association. This is creative "thinking". It knew to look at Joaquin Phoenix's history of method acting and then determine the correct character he was playing. He played other characters for which he lost weight. Nowhere did I suggest that he was in character. Anyone else would have just assumed he was sick or having a bad day or whatever. Even I didn't make the association at the time. I only figured it out later after my brother told me he had been cast as the Joker.

This is creative problem solving. Humans make associations to available information that we have to come to solutions for problems presented to us. That seems to be what's happening here. Dismissing it as "word associations" would be dismissing how humans understand information and how creative thinking works in humans.
 
No, your brain is tricking you into thinking that *language* is intelligence.

“Considered” is doing the heavy lifting here. What happens with every new hype cycle in tech is that the actual *definitions* of words lose their meaning. Machine learning meant “can replicate this process with incredible speed based on training”. Now it’s a marketing buzzword. AI is the next chapter, because people can’t tell the difference between AI and AGI when the output they’re seeing is this convincing.


There is no *intelligence* in a language model, it’s just very, very good at taking your prompts and presenting what you’ve asked for. It doesn’t even know what a bird is, but it’s been trained on terabytes of the *connection* between keywords like “bird”, “ornithologist” etc.

Again, if you are indeed interested in the distinction I’m trying to make here, a previous post of mine has a PDF of Noam Chomsky (the “father of modern linguistics”) on ChatGPT from the NYT.

I’m not trying to argue with you, but this is a perfect encapsulation of why this tech (while being extremely useful) is also extremely dangerous.

If people don’t understand that there aren’t any actual “smarts” under the hood, they’re easily able to take whatever an AI puts out as actual fact.

I had Microsoft reps show me a demo where I asked how to implement a feature in the M365 administration environment. The Bing AI spat out an impressive description of it, the steps to implement it, and a link to the documentation. After the call I found out that the steps weren’t real, the link 404’d because it didn’t actually exist, and the feature all of this was about didn’t even exist at Microsoft. But their reps were just beaming about how useful this would be to me…
 
[Attachments: “Creative assistant” screenshots]

I knew the bar in tech journalism was already low, but come on. ChatGPT is a language model, it is not a repository of knowledge.

*Everyone* and their mother seems to think because it can output proper language, that the meaning of that output is correct…that’s not how any of this works.


Also, regarding Siri. I could have sworn that a few years ago at WWDC it was discussed how the entire Siri project was overhauled from its original base. So why are we seeing quotes from a guy who worked on it in 2014? If he’s describing the state of Siri when he was working on it, that was 9 years ago inserted into a story that would give readers the impression that’s how it works today.
Edit: This guy left Apple in 2016 to start his own AI business…come on.

Now this entire thread is going to be the usual whiners, whining based on the idea that Siri is the same heap of code it was in 2014…
GPT's repository of knowledge is incredibly immense. And GPT 4's is even bigger. I've asked it questions about laws, had it code in languages that nobody uses anymore, and list relevant info up until the end of its learning cycle. Maybe you don't mean that it's not a repository of knowledge but something else?
 
Oh no, it definitely has extremely useful use cases.

My fear is that when not used for work, but inevitably gets used for things it’s not designed for (academics, NEWS REPORTING, etc) it’s going to be a massive problem.

ChatGPT is not a search engine, and has no way to verify its own “factual” output.

When used in a confined space, to do what it’s designed for (generate language based on criteria) it’s great.

But it is NOT a *knowledge* source, at all.
Again, what do you call the actual "knowledge base" it contains, if not a knowledge source?
 
I think you've led yourself to minimize ChatGPT's abilities based on how it has been described to you. ChatGPT can do literally anything a voice assistant can do if it had access to the API calls to do so. Do you honestly believe that an AI that can mimic being a Linux csh interface just because you asked it to would not understand commands such as "Unlock the front door" and know to send that request over to HomeKit?

I can assure you that I understand how GPT works down to how it learns vs. a rule based AI. Its predictive text capabilities lean more towards its interaction, its understanding of complex questions and emotive tones. However, your assumption that this is its only functionality is far, far off.
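The "give it access to the API calls" idea is essentially what function calling looks like in practice: the model's only job is to turn a natural-language request into a structured action, and ordinary code dispatches it. A toy sketch; `ask_llm` is a canned stand-in for the model, and none of this is actual HomeKit or OpenAI code:

```python
import json

# `ask_llm` stands in for a real language model that has been prompted to
# reply with a JSON action only. Here it returns canned output so the
# sketch runs offline.
def ask_llm(utterance):
    canned = {
        "unlock the front door":
            '{"service": "homekit", "action": "unlock", "target": "front_door"}',
    }
    return canned.get(utterance.lower(), '{"service": "none"}')

def handle(utterance):
    action = json.loads(ask_llm(utterance))
    if action["service"] == "homekit":
        # A real assistant would call the home-automation API here.
        return {"dispatched": action["action"], "to": action["target"]}
    return {"dispatched": None}

print(handle("Unlock the front door"))
```

The model never touches the lock itself; it only emits the structured request, which is what makes wiring a language model into an assistant plausible.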
 
Here's a simple example of something that Siri cannot understand, but I bet ChatGPT could: "Hey Siri, play some quiet jazz dinner music without vocals."

I've never been successful asking for it, or for any other genre when I request music with no vocals. Yes, I know I should request specific genres that are usually sans vocals, and that's what I do, but really, I'd like to hear music from many genres, just the examples that are vocals free. Siri is too dumb to comprehend. ChatGPT, and Bing, definitely seem like they could handle it.

(also, if anyone has a magic incantation they've used for this with Siri, please reply! other than simply requesting 'deep house', etc.)

I read the command exactly as you wrote it and Siri responded by playing Miles Davis that didn’t have vocals.
 
It's wild how early Siri came out, but how useless it's become. Competition is a good thing. Let's hope Apple can catch Siri up to today's standards.
I’m trying to think back to what Siri was first like. Obviously it has improved since then. But it is disappointing that it hasn’t substantially improved.

Siri can do many useful things, but I find it fails often enough at simple tasks it can do that I just give up on it.
 
Noam Chomsky is not the guy to teach you how AI works... He has an opinion on where it's going and on the philosophy, but you've misinterpreted that as a technical understanding of its capabilities.

By the way, ChatGPT says it can do it lol.

[Screenshot: ChatGPT's response]
 
Yes, I have tried it, I have seen what it's capable of, and that is why I've said that it's still suffering from the same known flaws and limitations of LLMs. It looks impressive due to the amount of data it's been fed and the processing power dedicated to it, but it seems like more data and more processing power will not solve that.

Assistants like Siri also have to know how to work with real time data, and LLMs like GPT are still incapable of doing that. Picture related:

[Screenshot: ChatGPT declining a real-time query]
Lol, they are not incapable of doing it. OpenAI chose to limit it like that... ChatGPT is not allowed to read current content on the internet. AI does not have the "decades old limitations" you think it has. Apple can do whatever it wants with AI to make a better Siri.

You can copy and paste whatever current info you want ChatGPT to take into account in its responses, and it will remember it for that session. There's nothing preventing AI from being able to absorb real-time info other than safeguards that are trying to keep it from turning into Skynet.
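The copy-and-paste trick described above is just context injection: put the live facts into the prompt and tell the model to answer from them. A toy sketch, with `echo_llm` as a fake model so it runs without an API key; a real assistant would fetch the facts from a live data source first:

```python
# Context injection: prepend real-time facts to the prompt so the model can
# answer from them instead of from its stale training data.
def answer_with_context(question, live_facts, llm):
    prompt = ("Answer using only the facts below.\n"
              + "\n".join("- " + f for f in live_facts)
              + "\n\nQuestion: " + question)
    return llm(prompt)

facts = ["The Bruins are first in the Atlantic with a 50-11-5 record."]
echo_llm = lambda p: p.splitlines()[1]  # toy model: parrot the first fact
print(answer_with_context("How are the Bruins doing?", facts, echo_llm))
```

This is the same pattern Bing-style search integration uses: retrieve current data, stuff it into the context window, then let the model phrase the answer.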
 