How much Kool Aid has everyone drunk on AI, especially the Marketing people and the Executives who sign off on this crap? This is a sales pitch to the most lazy and incompetent people in the workplace.

"Hain't we got all the fools in town on our side? and ain't that a big enough majority in any town?"
-Mark Twain, The Adventures of Huckleberry Finn

Know your market!
 
Apple harvests a lot of information from its users. While not specifically mentioned, Microsoft, Google, and Facebook also harvest a lot of information from their users, and those three companies do not sell the data to others; they use the data to sell products and/or services. Apple is no different.

Apple is certainly different. They're bad at selling targeted ads, especially here in Norway.

It's not about what they collect, but how they use it.
 
Well, that's all very well, but apparently, my Intel Mac Pro with 96GB of RAM can't cope with it. And yet, the same computer does the same thing without breaking a sweat if I just ask in Spark or Grammarly. What gives, Apple?
Probably because Spark and Grammarly are doing everything in the cloud? Whereas Apple Intelligence does pretty much everything directly on the device.
 
Well, that's all very well, but apparently, my Intel Mac Pro with 96GB of RAM can't cope with it. And yet, the same computer does the same thing without breaking a sweat if I just ask in Spark or Grammarly. What gives, Apple?
Our Intel Mac Pro is still going strong. I think it’s great that Apple keeps releasing new versions of macOS for it after all these years. Sure, it doesn’t do everything that the M-series chips do, for example no iPhone Mirroring on the Intel Mac, or the AI stuff that Apple has created to run on M-series chips. But as you’ve said, Spark & Grammarly work well on it, so for me it’s not a huge loss.

Apple is writing macOS primarily for M-series chips, not Intel. Apple is moving on. The Intel Macs still do everything they did when we bought them, and more. Why be upset that our Intel Macs are no longer the #1 priority for Apple?

We’ve had our Intel Mac Pro for 7 years now. I expect we’ll get another 3 years and then upgrade to a new machine. 10 years for one computer is pretty good and what we expected... This may be the last OS for our computers. What a great run! If you envy the M-series chips for their new features, maybe you’ll look forward to a new Mac when the time is right. 😉
 
This will be the philosophical debate going forward, because it’s the result that matters. In the end, will it matter how Employee A got there as opposed to Employee B?
This is an oversimplification, of course, but if you think of AI as a tool, why shouldn’t one use it?

What's most important to you as a hypothetical manager? A worker's output, or the amount of time they spent on it? As long as the quality is comparable, I'd favour the worker who can provide me with the report I need in 30 minutes by harnessing AI and leave work at 5:00 over the worker who fails to use the tools available to them but works heroically late taking 2 hours pulling together that same report.

Can you imagine having the same conversation in the 1980s or 90s, considering somebody as a slacker because they'd used a spreadsheet on a computer to achieve something in a fraction of the time that other workers were still doing manually?


Yes, it matters how they got there, because the point of doing anything is that the people actually understand what they're doing. People who don't understand anything and copy/paste out of a tool that did all the work for them are useless in a crisis where actual thought is necessary, and cannot provide any useful insight into understanding and improving processes.
 
Enabling professional incompetence isn’t all that humorous, but ok.
I guess this is a better demonstration than an ad featuring school kids plagiarizing their homework courtesy of AI. Can’t wait to see THAT holiday commercial.
 
This will be the philosophical debate going forward, because it’s the result that matters. In the end, will it matter how Employee A got there as opposed to Employee B?
This is an oversimplification, of course, but if you think of AI as a tool, why shouldn’t one use it?
Count me in on the debate. The way I see it is… what are you paying the person for?

Is it their speed at typing? Or is it their charisma to close a sale? To wordsmith an ad campaign? Or spin bad numbers on a presentation? Maybe it’s just to proofread a rough draft? Some are skills and some use tools.

When we allow a computer to replace the human, do we need the human anymore? That’s the debate. The endgame scenario may be 1 person leading a company and using said AI tools to do the work that the other humans did before being replaced.

I don’t think it will get that far, but the potential is there (especially for small businesses). It just depends on how much humanity the business requires to do its business.
 
I'm not digging some of the recent ads. The core messages seem to be problem solving- which can certainly be good- but the problems being solved are basically people putting one over on other people... like the (British?) girl forgetting the name of the guy... or the same girl not having done the work of reviewing the marketing message... or the wife forgetting the husband's birthday...


I would think there are many, MANY ways to show A.I. more positively than as basically a set of tools for forgetful slackers... even if forgetting is a very real thing that can happen to any of us at any time. But hey, they are a $4T company and I'm just a lone consumer.

For example, what if the wife- like the daughters- remembered her husband's birthday and used A.I. to make the very same slideshow for him ahead of a last-minute scramble? I would think that would "hit" just as well without it coming off like she basically put one over on him (and indirectly THEM- her own family). Is she "ge-ge-ge... genius"? Her "gift" to him is not even on HIS phone. Presumably, as soon as he and his daughters are done watching it, she takes her phone back. At least the daughters' gift is his to keep for more than a few minutes.

If the goal is to show how much smarter people can be by using A.I. than not, it could be demonstrated just as easily in positive messages vs. this "fool somebody" theme.
And why was mocking her daughter's "I had a great teacher" response necessary? I really don't like the humor where somebody has to be mean to someone else.
 
Well, the answer is that 50% of people are below average. Giving someone below average a tool with a 75% success rate turns them into someone in the 37th percentile, which is still a crap result.

Let's split between LLMs and other ML technologies here; the latter have real applications but are conflated under the AI banner. LLMs are not in their infancy: they are at an asymptote of progress, and investment interest is flatlining, as is the cost of training and running the models. The results are less than promising if you're an actual professional in an area they are being applied to. If you're not, it looks like magic. It's definitely not magic. It's stochastic fraud.

As for social media, this is a damage amplifier for what goes on in there. Check out the recent Dublin Halloween parade hoax.
You quote statistical jargon but may have missed a class or two. Statistically, under the bell curve, approximately 68% of people fall into the average intelligence classification (±1 standard deviation from the mean, i.e. the low- and high-average boundaries, 85-115 IQ). That leaves 32% statistically above or below average intelligence, meaning that only about 16% of the general population are actually of below-average intelligence. The 37th percentile is in the average range. This was not written with Apple Intelligence 😆
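The 68% and 16% figures above follow directly from the normal model of IQ scores (mean 100, SD 15) that the post assumes; a quick stdlib-only sketch under that same assumption:

```python
import math

def normal_cdf(x: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Share of people within +/-1 SD of the mean (85-115 IQ): ~68%
within_one_sd = normal_cdf(115) - normal_cdf(85)

# Share more than 1 SD below the mean (under 85 IQ): ~16%
below_one_sd = normal_cdf(85)

print(f"within 85-115 IQ: {within_one_sd:.1%}")  # ~68.3%
print(f"below 85 IQ:      {below_one_sd:.1%}")   # ~15.9%
```

The 37th percentile corresponds to roughly z = -0.33, well inside the ±1 SD band, which is the point being made: below the mean, but still "average" under this classification.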
 
That's actually how I use ChatGPT. You can spew all your thoughts and feelings into an email and it'll translate it into a coherent, professional message. And people will reply to it as if a human wrote it. It's like magic.
It's like … tragic.
 
Well, that's all very well, but apparently, my Intel Mac Pro with 96GB of RAM can't cope with it. And yet, the same computer does the same thing without breaking a sweat if I just ask in Spark or Grammarly. What gives, Apple?

"You're still running Intel?"
"Ohhhh my gawwwwwd!"

... in a dumb blonde bimbo tone. 🤣


So... just use Spark or Grammarly? That's the beauty of freedom. 😇
 
Apple is choosing not to let this run on Intel systems, but it would work fine.

You know it, I know, the American People know it.®️ 🇺🇸

(getting my political kicks in today since I disconnect from the internet and media on election day. 😛)

How do you know it would "work fine"? The new ML-based technology leverages the Neural Engine hardware directly. It's about the future. Why would Apple invest engineering resources into now-replaced hardware?

There are plenty of alternatives to Apple Intelligence that you can use. If you want Apple Intelligence, you need to buy the hardware that it's built for.
 
You quote statistical jargon but may have missed a class or two. Statistically, under the bell curve, approximately 68% of people fall into the average intelligence classification (±1 standard deviation from the mean, i.e. the low- and high-average boundaries, 85-115 IQ). That leaves 32% statistically above or below average intelligence, meaning that only about 16% of the general population are actually of below-average intelligence. The 37th percentile is in the average range. This was not written with Apple Intelligence 😆

If you were in my classes, you'd have picked up that there was no quantitative measure specified, got some marks, and I'd be less disappointed with you for making assumptions about data. I suggest you go read Spiegelhalter's book...

The de facto "average" is the mean of the distribution. A sample can be below the mean and thus below average. A standard deviation either side is irrelevant to the model until it is well defined. Anyway, my point was not really an academic one, but if you use tools whose effectiveness is below unity, assuming that is measurable (which it is), and you are average, then you're doing yourself a disservice by likely producing worse crap even faster than you usually do.
 
Yes, it matters how they got there, because the point of doing anything is that the people actually understand what they're doing. People who don't understand anything and copy/paste out of a tool that did all the work for them are useless in a crisis where actual thought is necessary, and cannot provide any useful insight into understanding and improving processes.
I think that's a bit of a hyperbolic example though. We're not talking about intricate skills here, we're talking about common, basic tasks being done more efficiently (and sometimes better). If I have 100 numbers to add up, I'm going to do it in a couple of minutes on a calculator or a computer rather than spend much longer doing it manually with more scope for error. That doesn't mean that I'm going to lose my ability to do it manually in a crisis. Likewise, if I have a task that I can do quicker and potentially better using AI, I'm going to use AI.

I recently wrote a long report and needed to finish it off by providing an executive summary, something that a year ago might have taken me an hour or so. Using Copilot, I generated one in seconds, then spent about 5 minutes checking it and making a couple of tweaks and ended up with a product that was arguably better than what I would have written from scratch, especially as I was having one of those 'thick head days'. There's lots of times I've been struggling to word something so have put my clumsy effort into ChatGPT and have thought 'yes, that's exactly what I was trying to say' at the output. None of those things erode my ability to write something myself should the need arise. Indeed, I'd argue that seeing how AI can re-parse what I've written can be a good learning opportunity.

It's also a great enabler. I manage a team of people across EMEA and despite English being the lingua franca of the company I work for, language skills vary between individuals and sometimes language weaknesses detract from the quality of what they produce. However, Copilot now means that work can be turned into good quality business English, which they have the language skills to review for accuracy even if they were unable to write it themselves in the first place. Plus, as several have mentioned, reviewing what AI produces from their input actually helps their language development. Even in a group of primary English speakers, you may have somebody who has great ideas but lacks the skills to communicate these well, but with AI that's something that can be overcome.
 
I think that's a bit of a hyperbolic example though. We're not talking about intricate skills here, we're talking about common, basic tasks being done more efficiently (and sometimes better). If I have 100 numbers to add up, I'm going to do it in a couple of minutes on a calculator or a computer rather than spend much longer doing it manually with more scope for error. That doesn't mean that I'm going to lose my ability to do it manually in a crisis. Likewise, if I have a task that I can do quicker and potentially better using AI, I'm going to use AI.

I recently wrote a long report and needed to finish it off by providing an executive summary, something that a year ago might have taken me an hour or so. Using Copilot, I generated one in seconds, then spent about 5 minutes checking it and making a couple of tweaks and ended up with a product that was arguably better than what I would have written from scratch, especially as I was having one of those 'thick head days'. There's lots of times I've been struggling to word something so have put my clumsy effort into ChatGPT and have thought 'yes, that's exactly what I was trying to say' at the output. None of those things erode my ability to write something myself should the need arise. Indeed, I'd argue that seeing how AI can re-parse what I've written can be a good learning opportunity.

It's also a great enabler. I manage a team of people across EMEA and despite English being the lingua franca of the company I work for, language skills vary between individuals and sometimes language weaknesses detract from the quality of what they produce. However, Copilot now means that work can be turned into good quality business English, which they have the language skills to review for accuracy even if they were unable to write it themselves in the first place. Plus, as several have mentioned, reviewing what AI produces from their input actually helps their language development. Even in a group of primary English speakers, you may have somebody who has great ideas but lacks the skills to communicate these well, but with AI that's something that can be overcome.

This both outlines and misses the principal risk. Which is that you need to know what you are doing and critically review the output. This really has limited utility when it is being sold as a way to outsource human labour to avoid paying for it. It just changes things. Not necessarily for the best.
 
Our Intel Mac Pro is still going strong. I think it’s great that Apple keeps releasing new versions of macOS for it after all these years. Sure, it doesn’t do everything that the M-series chips do, for example no iPhone Mirroring on the Intel Mac, or the AI stuff that Apple has created to run on M-series chips. But as you’ve said, Spark & Grammarly work well on it, so for me it’s not a huge loss.

Apple is writing macOS primarily for M-series chips, not Intel. Apple is moving on. The Intel Macs still do everything they did when we bought them, and more. Why be upset that our Intel Macs are no longer the #1 priority for Apple?

We’ve had our Intel Mac Pro for 7 years now. I expect we’ll get another 3 years and then upgrade to a new machine. 10 years for one computer is pretty good and what we expected... This may be the last OS for our computers. What a great run! If you envy the M-series chips for their new features, maybe you’ll look forward to a new Mac when the time is right. 😉
Well, this one I'm grumbling about was born in 2019... and it does actually connect with iPhone Mirroring. Amusingly, I also have a Mac Pro 3,1 from 2008 which is still going strong!
 
I think that's a bit of a hyperbolic example though. We're not talking about intricate skills here, we're talking about common, basic tasks being done more efficiently (and sometimes better). If I have 100 numbers to add up, I'm going to do it in a couple of minutes on a calculator or a computer rather than spend much longer doing it manually with more scope for error. That doesn't mean that I'm going to lose my ability to do it manually in a crisis. Likewise, if I have a task that I can do quicker and potentially better using AI, I'm going to use AI.

I recently wrote a long report and needed to finish it off by providing an executive summary, something that a year ago might have taken me an hour or so. Using Copilot, I generated one in seconds, then spent about 5 minutes checking it and making a couple of tweaks and ended up with a product that was arguably better than what I would have written from scratch, especially as I was having one of those 'thick head days'. There's lots of times I've been struggling to word something so have put my clumsy effort into ChatGPT and have thought 'yes, that's exactly what I was trying to say' at the output. None of those things erode my ability to write something myself should the need arise. Indeed, I'd argue that seeing how AI can re-parse what I've written can be a good learning opportunity.

It's also a great enabler. I manage a team of people across EMEA and despite English being the lingua franca of the company I work for, language skills vary between individuals and sometimes language weaknesses detract from the quality of what they produce. However, Copilot now means that work can be turned into good quality business English, which they have the language skills to review for accuracy even if they were unable to write it themselves in the first place. Plus, as several have mentioned, reviewing what AI produces from their input actually helps their language development. Even in a group of primary English speakers, you may have somebody who has great ideas but lacks the skills to communicate these well, but with AI that's something that can be overcome.

AI can't do math.

The struggle of wording things correctly is a struggle because that's part of the learning process, which is literally reconfiguring the neurons in your brain to make you better for the next time. Avoiding the struggle means you're avoiding the learning process. The same goes for developing language skills.
 
AI can't do math.

The struggle of wording things correctly is a struggle because that's part of the learning process, which is literally reconfiguring the neurons in your brain to make you better for the next time. Avoiding the struggle means you're avoiding the learning process. The same goes for developing language skills.
LLM AIs are not particularly good at math. Other kinds of AI, like Apple's Math Notes, are written specifically for that purpose and can handle it quite well.
 
LLM AIs are not particularly good at math. Other kinds of AI, like Apple's Math Notes, are written specifically for that purpose and can handle it quite well.

Neither are any good for mathematics. Mathematics has determinism and is rule driven. LLMs are stochastic garbage generators.

Math Notes is terrible as well. It's basically a trick handwriting parser that throws everything into an absolutely dire rule engine and then craps out a numeric answer while trying to mimic your handwriting. Ask it what the square root of 320 is (hint: the exact answer is 8√5, not 17.88854382).

We have literally had CAS engines for at least 40 years that are better. Hell, even Microsoft's equation recognition in Word is better.
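The square-root complaint above is about exact versus numeric answers: 320 is not a perfect square, so no finite decimal can be its square root, whereas a CAS simplifies it symbolically to 8√5. A minimal stdlib-only illustration (the specific decimal and checks here are my own, chosen to mirror the post):

```python
import math

x = 320

# 320 is not a perfect square, so sqrt(320) is irrational:
# the integer square root, squared, falls short of 320.
assert math.isqrt(x) ** 2 != x

# The kind of truncated decimal a numeric engine reports:
approx = 17.88854382
print(approx ** 2)  # slightly above 320, but not exactly 320

# A CAS instead factors out the largest perfect square:
# 320 = 64 * 5, so sqrt(320) = 8 * sqrt(5) exactly.
assert 8 * 8 * 5 == x
```

The point stands independent of the tool: a rule engine that emits `17.88854382` has produced an approximation, while `8√5` is the exact answer.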
 