Someone who clearly hasn't had Windows 11 automatically opt their data into OneDrive backup, then nag them to pay for space and threaten to delete their data.
Exactly. And MS apps on macOS are just as bad, ignoring, neutralizing and outright hijacking macOS conventions and settings, and insidiously migrating files from iCloud to OneDrive. macOS feature alerts don’t come close to this level of user disrespect.
 
Validate the statistical crackhead's ramblings, which is considerably harder and more time-consuming than the initial task you asked it to perform in the first place. This incurs the cost of the LLM plus additional time over doing the work yourself.

It's actually not, in a lot of cases, and it's getting better. This is what test environments are for.

Can’t trust humans without validation these days either.

Change request example above: I'm not asking it to generate a template. It's actually picking up dependencies and concerns without needing to be prompted. Tweak it a little for my specific environment and it's pretty bang on, as far as I've tested it against **** I already wrote.

If you aren't regularly keeping track of how things are progressing, you're really in for a rude awakening.

I too was an AI/ML skeptic, but the newer models are getting really good, even vs. just a few months ago. o1, for example, includes reasoning and actually tries to validate its answer, retrying until it comes up with its best attempt.
 
It's actually not, in a lot of cases, and it's getting better. This is what test environments are for.

Change request example above: I'm not asking it to generate a template. It's actually picking up dependencies and concerns without needing to be prompted. Tweak it a little for my specific environment and it's pretty bang on, as far as I've tested it against **** I already wrote.

If you aren't keeping track of how things are progressing, you're really in for a rude awakening.

I think you're missing the point entirely. Let's try a philosophical approach with two points:

Firstly, humans are non-deterministic. We use computers to add determinism to our work and arguments; this allows cohesive work to be done by consensus, because the arbiter is fundamentally deterministic. That is a fundamental improvement over the human condition. LLMs are non-deterministic machines and add nothing to that other than another layer of non-determinism. They do not produce coherent information or utility by nature. They make us either worse off or no different.

My favourite extreme example of this is the calculator, a huge advancement for the human race. Alas, I see people using LLMs as a calculator, and they make everything from simple mistakes (misunderstanding place-value systems) to process changes which confuse the user and have no mathematical validity.

Testing or not, it's not magic. Even the best models (OpenAI o1, for example) make hideous mistakes and assumptions, even when constrained, because they must produce a result. If you're testing, then by nature you have a subset of the entire domain of problems: your tests pass, but the things you don't test won't. And there is no way of writing a proof using natural languages, as they are not regular or otherwise formally describable grammars. Consider legalese as a subset, which is still open to debate and interpretation.

Further along that route, if they said "I don't know" or exposed an inferred confidence level, that would destroy all trust in the technology. It is currently profitable to produce garbage with confidence.

I too was an AI/ML skeptic, but the newer models are getting really good, even vs. just a few months ago.

Secondly, progress is being made, but it's an asymptote. Every step forward costs $5 billion and gets us halfway from where we are to the point of usable competence. That goal is impossible to reach, though, because investors (one of whom I work for) do not wish to spend any more money for so little functional return. A trite financial analysis (which, annoyingly, I don't have handy as I'm about to get on a plane) of Microsoft's Copilot pricing and energy cost projection suggests that the only way they make any money is if people don't actually use it. OpenAI are running at a completely unsustainable loss. Also, as mentioned, the corpus of training data is nearly exhausted. That's not a winning proposition.

There is no outcome here other than failure in the long run. It's not economically, socially or logically viable. Betting your future on it is unwise.

Some good things will be shaken out of the tree along the way, but they are mostly things on the sidelines which managed to leverage the hype a little. And you probably haven't heard of them.
 
Someone who clearly hasn't had Windows 11 automatically opt their data into OneDrive backup, then nag them to pay for space and threaten to delete their data.
I have never had that, but then I have never connected my Windows machine to an MS account.
 
The specific issue for me with this is that, without my permission, Apple decided to take 5GB+ of my storage for something I'm never, ever going to use. And I paid for my storage, so I take great offense to this.

I've never interacted with Apple Intelligence. Before, the files weren't even on my system, but now it's in a state where the flip of a switch "arms" the whole thing. What can flip this switch? Surely a future macOS update won't just turn it on? Right? Right???

I'm so disappointed in Apple. macOS 15 has been a dumpster fire. Apple Intelligence wasn't even a thing when I bought my M3. I 100% would not spend this money now. I'd get a refund if I could.
 
I think you're missing the point entirely. Let's try a philosophical approach with two points:

Firstly, humans are non-deterministic. We use computers to add determinism to our work and arguments; this allows cohesive work to be done by consensus, because the arbiter is fundamentally deterministic. That is a fundamental improvement over the human condition. LLMs are non-deterministic machines and add nothing to that other than another layer of non-determinism. They do not produce coherent information or utility by nature. They make us either worse off or no different.

My favourite extreme example of this is the calculator, a huge advancement for the human race. Alas, I see people using LLMs as a calculator, and they make everything from simple mistakes (misunderstanding place-value systems) to process changes which confuse the user and have no mathematical validity.

Testing or not, it's not magic. Even the best models (OpenAI o1, for example) make hideous mistakes and assumptions, even when constrained, because they must produce a result. If you're testing, then by nature you have a subset of the entire domain of problems: your tests pass, but the things you don't test won't. And there is no way of writing a proof using natural languages, as they are not regular or otherwise formally describable grammars. Consider legalese as a subset, which is still open to debate and interpretation.

Further along that route, if they said "I don't know" or exposed an inferred confidence level, that would destroy all trust in the technology. It is currently profitable to produce garbage with confidence.



Secondly, progress is being made, but it's an asymptote. Every step forward costs $5 billion and gets us halfway from where we are to the point of usable competence. That goal is impossible to reach, though, because investors (one of whom I work for) do not wish to spend any more money for so little functional return. A trite financial analysis (which, annoyingly, I don't have handy as I'm about to get on a plane) of Microsoft's Copilot pricing and energy cost projection suggests that the only way they make any money is if people don't actually use it. OpenAI are running at a completely unsustainable loss. Also, as mentioned, the corpus of training data is nearly exhausted. That's not a winning proposition.

There is no outcome here other than failure in the long run. It's not economically, socially or logically viable. Betting your future on it is unwise.

Some good things will be shaken out of the tree along the way, but they are mostly things on the sidelines which managed to leverage the hype a little. And you probably haven't heard of them.
I guess the entire computing industry is wrong then. Lol
 
The specific issue for me with this is that, without my permission, Apple decided to take 5GB+ of my storage for something I'm never, ever going to use. And I paid for my storage, so I take great offense to this.

I've never interacted with Apple Intelligence. Before, the files weren't even on my system, but now it's in a state where the flip of a switch "arms" the whole thing. What can flip this switch? Surely a future macOS update won't just turn it on? Right? Right???

I'm so disappointed in Apple. macOS 15 has been a dumpster fire. Apple Intelligence wasn't even a thing when I bought my M3. I 100% would not spend this money now. I'd get a refund if I could.
You could leave your M3 on the OS it shipped with, or roll back to what you purchased, if it bugs you that much.
 
The specific issue for me with this is that, without my permission, Apple decided to take 5GB+ of my storage for something I'm never, ever going to use. And I paid for my storage, so I take great offense to this.
This really, really sucks for people with 256 GB of storage. I have 1 TB and can afford to lose 5 GB, but that 256 GB, which is really more like 180 GB with the OS and apps installed, takes quite a hit. And Apple Intelligence is only going to keep growing. I really hope enough people complain until there's an 'off' switch that actually deletes the garbage.

I suppose we in the EU won't receive this.
If you're on a Mac with an M-series CPU, you have it, or you can have it. I was very surprised when it asked me if I wanted it; in fact, surprised enough to say yes, thinking "sure, snowball's chance in hell", and now I have a whole range of features to not use. The proofing tool could turn out useful with updates; right now it won't even mark changes in Pages, and selecting too much text crashes it because it proofs everything at once. The rest… you're not missing much.
 
I think you're missing the point entirely. Let's try a philosophical approach with two points:

Firstly, humans are non-deterministic. We use computers to add determinism to our work and arguments; this allows cohesive work to be done by consensus, because the arbiter is fundamentally deterministic. That is a fundamental improvement over the human condition. LLMs are non-deterministic machines and add nothing to that other than another layer of non-determinism. They do not produce coherent information or utility by nature. They make us either worse off or no different.

My favourite extreme example of this is the calculator, a huge advancement for the human race. Alas, I see people using LLMs as a calculator, and they make everything from simple mistakes (misunderstanding place-value systems) to process changes which confuse the user and have no mathematical validity.

Testing or not, it's not magic. Even the best models (OpenAI o1, for example) make hideous mistakes and assumptions, even when constrained, because they must produce a result. If you're testing, then by nature you have a subset of the entire domain of problems: your tests pass, but the things you don't test won't. And there is no way of writing a proof using natural languages, as they are not regular or otherwise formally describable grammars. Consider legalese as a subset, which is still open to debate and interpretation.

Further along that route, if they said "I don't know" or exposed an inferred confidence level, that would destroy all trust in the technology. It is currently profitable to produce garbage with confidence.



Secondly, progress is being made, but it's an asymptote. Every step forward costs $5 billion and gets us halfway from where we are to the point of usable competence. That goal is impossible to reach, though, because investors (one of whom I work for) do not wish to spend any more money for so little functional return. A trite financial analysis (which, annoyingly, I don't have handy as I'm about to get on a plane) of Microsoft's Copilot pricing and energy cost projection suggests that the only way they make any money is if people don't actually use it. OpenAI are running at a completely unsustainable loss. Also, as mentioned, the corpus of training data is nearly exhausted. That's not a winning proposition.

There is no outcome here other than failure in the long run. It's not economically, socially or logically viable. Betting your future on it is unwise.

Some good things will be shaken out of the tree along the way, but they are mostly things on the sidelines which managed to leverage the hype a little. And you probably haven't heard of them.
Humans have been performing cohesive work through consensus since well before computers existed — so computers are not a prerequisite for human collaboration. However, computers can and have facilitated human collaboration while also aiding human productivity. LLMs are a significant progression of computing technology and have already demonstrated the capacity to improve both collaboration and productivity — largely through very effective natural-language, conversational interfaces.

Are there problems? Absolutely. First and foremost, hallucinations, which you mentioned (LLMs making up answers that are fictitious / wrong), are a feature of LLMs — just as they are of humans. We’ve all found ourselves riffing with colleagues on a wide-ranging subject, convinced that we’ve solved a problem, only to realize later, on reflection, that we got something wrong that was missed in the moment. That is a hallucination. So what do we do? We regroup, revisit the conversation with new information, and continue the cycle of collaboration, iteration and advancement.

This same dynamic is already possible with LLMs, by informing the LLM of an error and asking it to reconsider. Of course, the issue is that a person may not know that the LLM output contains an error, which is a very serious problem. This is recognized and is being addressed structurally with agent-based systems, where agents are tasked with validating LLM output before it is presented to the user or used in critical reasoning, and the results of that validation are fed back to the LLM, allowing it to self-correct. This approach has been demonstrated to reduce hallucinations. Even if it does not eliminate them completely, the quality of the dialogue continues to improve, approaching that of a group of experts. This is a huge productivity advancement because it democratizes access to expert counsel for lots of people.
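To make that concrete, here is a minimal sketch of the validate-and-feed-back loop described above. The llm() callable is a hypothetical stand-in for any provider's completion API, not a specific vendor SDK:

```python
# Minimal sketch of an agent-style validation loop. llm() is a hypothetical
# stand-in for any model API; the point is the structure: draft an answer,
# have a validator critique it, feed the critique back, and only then
# present the result to the user.
from typing import Callable

def validated_answer(llm: Callable[[str], str], question: str,
                     max_rounds: int = 3) -> str:
    draft = llm(question)
    for _ in range(max_rounds):
        critique = llm(
            "You are a strict reviewer. List any factual or logical errors "
            f"in this answer, or reply exactly OK.\n\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        if critique.strip() == "OK":
            break  # the validator found nothing to correct
        # Feed the validation result back so the model can self-correct.
        draft = llm(
            f"Revise the answer to address the critique.\n\n"
            f"Question: {question}\nAnswer: {draft}\nCritique: {critique}"
        )
    return draft
```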

Another big issue, as you mentioned, is indeed energy consumption. This is also being addressed, through more efficient processors and architectures and through edge/on-device computing.

The issues raised are legitimate, but I think pronouncements that this technology is a dead end in the long run are premature. I would like to see additional support for that prediction beyond the simplified philosophical arguments presented.
 
I guess the entire computing industry is wrong then. Lol

The entire computing industry isn't interested in or using this. In fact, very little of it is. It's mostly FAANG-class companies trying to push it as the next frontier, largely because they have no other way of expanding market share and attention on short product cycles. This leads to an interest arms race.
 
Humans have been performing cohesive work through consensus since well before computers existed — so computers are not a prerequisite for human collaboration. However, computers can and have facilitated human collaboration while also aiding human productivity. LLMs are a significant progression of computing technology and have already demonstrated the capacity to improve both collaboration and productivity — largely through very effective natural-language, conversational interfaces.

Are there problems? Absolutely. First and foremost, hallucinations, which you mentioned (LLMs making up answers that are fictitious / wrong), are a feature of LLMs — just as they are of humans. We’ve all found ourselves riffing with colleagues on a wide-ranging subject, convinced that we’ve solved a problem, only to realize later, on reflection, that we got something wrong that was missed in the moment. That is a hallucination. So what do we do? We regroup, revisit the conversation with new information, and continue the cycle of collaboration, iteration and advancement.

This same dynamic is already possible with LLMs, by informing the LLM of an error and asking it to reconsider. Of course, the issue is that a person may not know that the LLM output contains an error, which is a very serious problem. This is recognized and is being addressed structurally with agent-based systems, where agents are tasked with validating LLM output before it is presented to the user or used in critical reasoning, and the results of that validation are fed back to the LLM, allowing it to self-correct. This approach has been demonstrated to reduce hallucinations. Even if it does not eliminate them completely, the quality of the dialogue continues to improve, approaching that of a group of experts. This is a huge productivity advancement because it democratizes access to expert counsel for lots of people.

Another big issue, as you mentioned, is indeed energy consumption. This is also being addressed, through more efficient processors and architectures and through edge/on-device computing.

The issues raised are legitimate, but I think pronouncements that this technology is a dead end in the long run are premature. I would like to see additional support for that prediction beyond the simplified philosophical arguments presented.

I think the arguments were presented with enough technical as well as philosophical merit.

Let's try a financial one too. My area.

Who is going to pay money for these models to be continually improved?

So, Dario Amodei of Anthropic suggested that we're at the $1 billion training cost level for current models and are likely to see $10 billion in future, over a linear timescale. GPT-4 cost approximately $100m to train. That is exponential cost growth. So if we take Amodei's projection into account for just this year, what does this mean?

Firstly, revenue. Anthropic, as an example, charge $18 a user/month. Model lifetime is approximately 6 months at the moment, so that's a total earning of $108 per user per model. At $1 billion, that means you need 9.3 million active paying monthly users to break even on the training costs alone. (They are mostly bankrolled by API customers at the moment, though.) That does not include overheads, model execution costs (which are not trivial) or staffing. How many active paying users do they have? Nobody knows, but it's not that many. They are propped up on private investment and ownership, as are OpenAI.
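Spelling out the arithmetic (rough figures from above; a back-of-envelope sketch, not an audited model):

```python
# Back-of-envelope break-even on training cost alone, using the rough
# figures above: $18/user/month subscription, ~6-month model lifetime,
# ~$1B to train a current frontier model (per Amodei).
price_per_month = 18                   # $/user/month
model_lifetime_months = 6
training_cost = 1_000_000_000          # $1B

revenue_per_user = price_per_month * model_lifetime_months  # $108 per model
break_even_users = training_cost / revenue_per_user         # ~9.26 million

print(f"Revenue per user per model lifetime: ${revenue_per_user}")
print(f"Paying users to cover training alone: {break_even_users / 1e6:.1f}M")
```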

Now, Slack is a fine example of one of their customers. Slack is currently trying to hide its terrible AI uptake. They gave it away as a free trial; when it came to asking for money there wasn't a lot of utility, because quite frankly most people don't need it plugged into a comms tool, so people didn't pay for it. There goes a fractional revenue stream from the API customer. The same is true of other customers.

Using Microsoft as a case study: they recently bundled Copilot and its associated costs into Asia-distributed O365 subscriptions, because they can get away with it there. No one will pay for it otherwise. To that point, I know a 100,000-seat O365-tied org that Microsoft marketed with "AI this and that", but which decided to dump it when Microsoft stopped subsidising it and started asking for money, because it couldn't rationally be justified as a gain to the business. That's a lot of money down the toilet.

Uptake is very, very low because it's really not something 99.9% of the population think adds value to their lives, even if a few people feel they get some value from it. Most people aren't interested, don't have the budget for it or don't understand it. Add to that that we are in a time of geopolitical uncertainty and regulatory risk, which likely constrains the risk appetite of non-trivial customers.

Anyway, away from revenue and on to costs. As per Amodei, training costs are exponential. They are not constrained by what people think they are, which is the arbitrary hope that the research and the next model will be competent enough to be marketable. The truth is left in the philosophical points above. Really, the models have compute limitations defined by physics. Consider the parameter precision required in the floating-point registers, versus silicon space, versus power usage, versus dissipation, and things get quite complicated to rationalise. The workloads are horizontally scalable but likely cannot be clock- or die-scaled much further. Most hardware today throttles when it hits that wall to keep TDP down, but to sustain compute for model generation we need to run this stuff flat out, 24/7. That means more energy spent getting rid of the heat and smaller processes to reduce power loss, and that costs even more money. The biggest winner so far is TSMC, but this may damage them and associated industries if and when it goes down. A linear scaling of model complexity doesn't lead to a linear cost either; hence the exponential model. Exponentials are always bad when it comes to cost.

Eventually the market becomes saturated with supply (models / companies offering services) and constrained (energy / physics / cost / regulatory capture), and the financial model collapses. No individual company (OpenAI / Anthropic) has enough capital, or can borrow any, to hire the compute to do the work, because they traded minimal lasting customers between them.

At that point, all the big investors disappear like the rats they are and dump all the stock on unwise late investors, who foot the bill.

Everything we rely on from them, as the guardians of the proprietary APIs we consume, disappears into thin air, leaving clients with nothing to talk to and the businesses leveraging it failing.

There's only risk here.
 
The entire computing industry isn't interested in or using this. In fact, very little of it is. It's mostly FAANG-class companies trying to push it as the next frontier, largely because they have no other way of expanding market share and attention on short product cycles. This leads to an interest arms race.
The segment of the computing industry I have visibility into is disproportionately interested in custom LLM-based AI software solutions vs. conventional custom software solutions. I run operations for a global software development company. Since January 1 we have closed deals on 4 LLM-based AI projects for 4 different companies (manufacturing, healthcare, finance and supply chain). All 4 projects are significant, transformational for each company, backed by approved business cases, and were 100% conceived by the companies themselves. Our pipeline growth is primarily due to AI-based projects, and we are hiring to keep pace with the demand.
 
So far the only feature useful to me that Apple AI provides is Clean Up in the Photos app. I got it on the Mac, and don't want it on my iPhone anyway because I want precision using a mouse.

All the other features, like Image Playground, are just gimmicks, and for the rest I can just use ChatGPT or Gemini (e.g. Proofread).
I mostly agree. I find Clean Up to be a great feature, and I also like the Writing Tools. Playground is gimmicky, and luckily Apple acknowledges that by labelling it "beta". The Playground icon is so weird and kiddish; very un-Apple to me.
 
I think the arguments were presented with enough technical as well as philosophical merit.

Let's try a financial one too. My area.

Who is going to pay money for these models to be continually improved?

So, Dario Amodei of Anthropic suggested that we're at the $1 billion training cost level for current models and are likely to see $10 billion in future, over a linear timescale. GPT-4 cost approximately $100m to train. That is exponential cost growth. So if we take Amodei's projection into account for just this year, what does this mean?

Firstly, revenue. Anthropic, as an example, charge $18 a user/month. Model lifetime is approximately 6 months at the moment, so that's a total earning of $108 per user per model. At $1 billion, that means you need 9.3 million active paying monthly users to break even on the training costs alone. (They are mostly bankrolled by API customers at the moment, though.) That does not include overheads, model execution costs (which are not trivial) or staffing. How many active paying users do they have? Nobody knows, but it's not that many. They are propped up on private investment and ownership, as are OpenAI.

Now, Slack is a fine example of one of their customers. Slack is currently trying to hide its terrible AI uptake. They gave it away as a free trial; when it came to asking for money there wasn't a lot of utility, because quite frankly most people don't need it plugged into a comms tool, so people didn't pay for it. There goes a fractional revenue stream from the API customer. The same is true of other customers.

Using Microsoft as a case study: they recently bundled Copilot and its associated costs into Asia-distributed O365 subscriptions, because they can get away with it there. No one will pay for it otherwise. To that point, I know a 100,000-seat O365-tied org that Microsoft marketed with "AI this and that", but which decided to dump it when Microsoft stopped subsidising it and started asking for money, because it couldn't rationally be justified as a gain to the business. That's a lot of money down the toilet.

Uptake is very, very low because it's really not something 99.9% of the population think adds value to their lives, even if a few people feel they get some value from it. Most people aren't interested, don't have the budget for it or don't understand it. Add to that that we are in a time of geopolitical uncertainty and regulatory risk, which likely constrains the risk appetite of non-trivial customers.

Anyway, away from revenue and on to costs. As per Amodei, training costs are exponential. They are not constrained by what people think they are, which is the arbitrary hope that the research and the next model will be competent enough to be marketable. The truth is left in the philosophical points above. Really, the models have compute limitations defined by physics. Consider the parameter precision required in the floating-point registers, versus silicon space, versus power usage, versus dissipation, and things get quite complicated to rationalise. The workloads are horizontally scalable but likely cannot be clock- or die-scaled much further. Most hardware today throttles when it hits that wall to keep TDP down, but to sustain compute for model generation we need to run this stuff flat out, 24/7. That means more energy spent getting rid of the heat and smaller processes to reduce power loss, and that costs even more money. The biggest winner so far is TSMC, but this may damage them and associated industries if and when it goes down. A linear scaling of model complexity doesn't lead to a linear cost either; hence the exponential model. Exponentials are always bad when it comes to cost.

Eventually the market becomes saturated with supply (models / companies offering services) and constrained (energy / physics / cost / regulatory capture), and the financial model collapses. No individual company (OpenAI / Anthropic) has enough capital, or can borrow any, to hire the compute to do the work, because they traded minimal lasting customers between them.

At that point, all the big investors disappear like the rats they are and dump all the stock on unwise late investors, who foot the bill.

Everything we rely on from them, as the guardians of the proprietary APIs we consume, disappears into thin air, leaving clients with nothing to talk to and the businesses leveraging it failing.

There's only risk here.
Current models are good enough for production applications in multiple industries and countless use cases. Further, Retrieval-Augmented Generation (adding your own data using files, databases, etc.) makes additional training unnecessary for most applications.
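For the unfamiliar, the RAG pattern itself is simple. A minimal sketch, where embed() and llm() are hypothetical stand-ins for any provider's embedding and completion APIs rather than a specific library:

```python
# Minimal sketch of Retrieval-Augmented Generation: rank your own documents
# against the question by embedding similarity, then place the best matches
# in the prompt so the model answers from your data without extra training.
from math import sqrt
from typing import Callable

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def rag_answer(llm: Callable[[str], str],
               embed: Callable[[str], list[float]],
               question: str, docs: list[str], k: int = 3) -> str:
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)
    context = "\n---\n".join(ranked[:k])  # top-k most relevant chunks
    return llm(
        f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
```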

The training you’re referring to will advance the capability of current models but we’ve already hit critical mass. The money is already coming from paying customers — companies are paying development companies (my business) to build custom systems and consumers are buying subscription access to Anthropic, ChatGPT, Perplexity and others.

Further, cloud-based operating costs from the major platform providers are in continuous decline, which will allow service providers to balance profit growth and demand stimulation. I’m not suggesting that success is certain, only that the trends I’m seeing do not support the view that this technology has no future.
 
The segment of the computing industry I have visibility into is disproportionately interested in custom LLM-based AI software solutions vs. conventional custom software solutions. I run operations for a global software development company. Since January 1 we have closed deals on 4 LLM-based AI projects for 4 different companies (manufacturing, healthcare, finance and supply chain). All 4 projects are significant, transformational for each company, backed by approved business cases, and were 100% conceived by the companies themselves. Our pipeline growth is primarily due to AI-based projects, and we are hiring to keep pace with the demand.

Remind me where this is in 2 years.

There are whole areas of just finance that have bigger-than-Apple-sized chunks of money floating around with absolutely no use case for, or interest in, LLMs at all. In fact, they see them as a liability because of the unconstrained risks.
 
I mostly agree. I find Clean Up to be a great feature, and I also like the Writing Tools. Playground is gimmicky, and luckily Apple acknowledges that by labelling it "beta". The Playground icon is so weird and kiddish; very un-Apple to me.

Clean Up, and Lightroom's equivalent, is exactly where the value is in this class of technology: cases where exact correctness doesn't necessarily matter, but the result, and the perception of it, is good enough.
 
Current models are good enough for production applications in multiple industries and countless use cases. Further, Retrieval-Augmented Generation (adding your own data using files, databases, etc.) makes additional training unnecessary for most applications.

I've seen production applications pulled over this, particularly document retrieval and summarisation in commercial services, because it doesn't work and puts customers and staff at risk of making regulatory mistakes (in finance). YMMV.

The training you’re referring to will advance the capability of current models but we’ve already hit critical mass. The money is already coming from paying customers — companies are paying development companies (my business) to build custom systems and consumers are buying subscription access to Anthropic, ChatGPT, Perplexity and others.

You're the middle man. The paying customers do not exist in the quantities hoped for, nor do they want to pay for the product. You'll probably do fine out of it, at least 🤣

Further, cloud-based operating costs from the major platform providers are in continuous decline, which will allow service providers to balance profit growth and demand stimulation. I’m not suggesting that success is certain, only that the trends I’m seeing do not support the view that this technology has no future.

I think you need to look at GPU pricing. It's flat, and that's because there's an oversupply of GPUs at the moment.

NVDA is going to hurt at some point.
 
This better not be a thing. Both my phone and Mac today...

[screenshot attachment]

If it comes back again, then this is a Windows 11 level of nag.

I think it's enough to go into the Apple Intelligence settings and just leave them without changing anything, and you'll never see this again, except maybe after an update or on a new device.

I also don't use Siri, Health or Screen Time, and I get a reminder once on a new device or so.

At the moment I have this on my Mac:

[screenshot attachment]


I clicked on Open and it was gone. Just this came up:
[screenshot attachment]

I don't like all of those new-feature reminders either, even if I only see them one time. But for people who are not as informed as everyone here, they might be useful.

What's really bad is a red 1 badge somewhere that you can't get rid of.
 
Remind me where this is in 2 years.

There are whole areas of just finance that have bigger-than-Apple-sized chunks of money floating around with absolutely no use case for, or interest in, LLMs at all. In fact, they see them as a liability because of the unconstrained risks.

There is no outcome here other than failure in the long run. It's not economically, socially or logically viable. Betting your future on it is unwise.

Some good things will be shaken out of the tree along the way, but they are mostly things on the sidelines which managed to leverage the hype a little. And you probably haven't heard of them.
Remind me where this is in 2 years... when LLM-based conversational interfaces are standard fare on most apps and OSes 😉.
 
I feel a little out of place chucking my 2c into this conversation. I'm just a draughtsman (in the process of becoming an energy efficiency assessor) and a nerd, but surely at some point the AI bubble will burst, like most tech hype cycles, and LLMs and diffusion models will lose their next-big-thing hype and become just another set of technologies for making better tools.

As an assistive technology, spoken chatbots have great potential, but as an everyday interface? I don't want to be walking around like I'm in Star Trek, having to talk to a small screen to make it do what I want. Standing in the queue at the supermarket: "Computer! Read that message from my boss!"
 