
AI should be a feature you can turn on only when you need it

  • Yes

    Votes: 88 (80.7%)
  • No

    Votes: 13 (11.9%)
  • I have not decided yet

    Votes: 12 (11.0%)

  • Total voters
    109
Maybe it could be helpful for people who have to deal with hundreds of emails every day.
Yep. Nigerian princes will use AI to crank out ever-more convincing scam emails/posts/tweets and the rest of us will need Defensive AI to zap ‘em before we even see them.

We’re headed for an arms race of bad AI vs good AI. A race none of us want, but will pay for nonetheless.

I think this is how many will be persuaded to pay a monthly subscription for AI. Same as burglar-alarm monitoring, VPNs, pop-up blockers, password vaults, credit monitoring, and, um, even secure parking. Protection, basically.
 
Ended up disabling it. I know it's in beta, and the feature set isn't 100% here yet, but I can already tell I don't want to use it. I'd rather see my full messages and emails appear in my notifications instead of wasting the processing power to summarize them for me. I'll never use that emoji maker, or that photo thing to make my mom into a cartoon.

I'm more worried about my children using it for writing essays and other school work. My first is coming in a few months. I can only hope by the time she's in school, the AI built into her shoe won’t be doing all of her thinking and judgement calls for her.
 
  • Like
Reactions: cool11
I'm more worried about my children using it for writing essays and other school work. My first is coming in a few months. I can only hope by the time she's in school, the AI built into her shoe won’t be doing all of her thinking and judgement calls for her.

They have to prompt it and read the output. That can't be avoided. In doing so they are learning the subject anyway.

Exams will need to be redrafted to show more practical understanding rather than just writing essays. Anyone can write an essay, and cheating at essay writing is something that has always existed. Showing understanding in a practical test with no access to the internet is more important. Such tests already exist but can be strengthened.

Universities shouldn't be a thing anymore anyway. The internet was supposed to abolish this outdated elitist nonsense and make education cheap and easy to access for everyone. It has gone in the other direction, with universities becoming more expensive, causing more student debt, and foreign students having their courses paid for by crime gangs in exchange for opening bank accounts in their names.

Universities are all a dirty business creating some of the worst politicians and upper class criminals.
 
They have to prompt it and read the output. That can't be avoided. In doing so they are learning the subject anyway.

“Learn” here, if we’re honest, is a loaded word.

The “learn” of an LLM is not at a sentient level of original thinking endemic to, well, analogue intelligence.

Exams will need to be redrafted to show more practical understanding rather than just writing essays. Anyone can write an essay, and cheating at essay writing is something that has always existed. Showing understanding in a practical test with no access to the internet is more important. Such tests already exist but can be strengthened.

Do you really believe a “practical test” will stump an LLM?

Moreover, how does one draft or craft a “practical test” for programmes and departments which exist outside the realm of STEM?

If the counterargument to this is to strip universities of their non-STEM departments, then expect to witness a vicious, long, protracted fight from multiple, but fairly unified, angles by scholars. (As it is, no LLM comes anywhere near the level of being a scholar.)

On the contrary: by limiting LLM/AI learning to STEM, it can improve research aggregation, processing, and analysis for those areas.

Also, original scholarly research — scholarship — absolutely depends on demonstrating competencies beyond multiple-choice, true/false, or fill-in-the-blank responses. There is absolutely a necessity for essay writing — not least because it’s a higher-level human skill, but also because it’s incredibly naïve to presume essay writing is a simple, menial-level skill aped well by LLMs.

Cream rises to the top. Even as they’ve been shown to crank out a lot of words that read as if some living person wrote them, LLMs aren’t exactly revered for pumping out the cream, or renowned for being, uh, talented.

Students who rely on LLMs to write their papers may think they’re clever, but it’s substance, not filler, which differentiates abstract-level, original thinking — for which AI is still in its infancy, limited to original, effective “thinking” in playing logic-based games like Go or chess, where the entire process is literal, either/or, binary decision-making, projected out over tens, even hundreds of possible future turns by an opponent. That isn’t abstract thinking. It’s still locked to a binary vantage.

Moreover, this argument against the skill of writing essays fails once one considers how a routine means of examination at the end of a university course’s term demands that students write long-answer/short-essay responses to major exam questions in rooms where no electronic devices are permitted.

Universities shouldn't be a thing anymore anyway.

Whew… is it warm to anyone else in here, cos that was a hot take.

Since tertiary schools are 86’d, I guess the next to be canned are secondary schools, and eventually primary schools. “For their own good, we can’t be teaching these meatsacks dem learnin’s, cos it’s useless to them and also probably dangerous.”

Oh my.

The internet was supposed to abolish this outdated elitist nonsense

No, it wasn’t.

The internet was engineered to transfer information orders of magnitude more quickly than much slower means — like interlibrary loans, postal delivery, analogue electronic telephony/telegraphy, and other analogue, distance-hindered conveyances of delivering that mostly-analogue knowledge/data/information.

Once much of that analogue information was slowly but steadily digitized, it could begin to be conveyed much more quickly than ever before, and without as much reliance on intermediary, physical means of media transfer (like the printed page, or the vibrating/magnetic/optical element of media for something like recorded music).

and make education cheap and easy to access for everyone.

IN WHAT UNIVERSE? 🤦‍♀️

Would this have been the universe where capitalism never took off?

It has gone in the other direction, with universities becoming more expensive, causing more student debt, and foreign students having their courses paid for by crime gangs in exchange for opening bank accounts in their names.

:slow clap: The conspiratorial conjecture here is, at best, entertaining, but wow. Just… wow.

If this was a serious argument being made, then it would deserve a serious, long, broken-down response in kind. But my word… it isn’t. Generously put, its reasoning is hayseed-parochial.

What causes more student debt, and why universities have grown absurdly costly (in the U.S. principally, not necessarily elsewhere), is the long end-product of privatizing profit and socializing loss in post-1980, neoliberal economic policy, tied with the disproved myths that a) trickle-down economies work; b) rising tides lift all boats; c) breaking up labour unions is good for prosperity; d) monetizing and commoditizing institutions which traditionally never existed within the financial/FIRE sector is a sustainable, noble initiative; and e) cutting funding for public institutions of higher learning is healthy for a society centred on personal wealth acquisition.

Prior to 1980, even American tuitions, accounting for inflation, were a fraction of what they became. Universities were doing what the earliest ones, founded by monks and theologians within religious structures before the 1600s, had been doing: embracing the higher human motivation to better understand everything. The motivation wasn’t about money or profit. It was about enlightenment. It is, literally, where the “Age of Enlightenment” gets its name.


Universities are all a dirty business creating some of the worst politicians and upper class criminals.

With all due respect, universities are not some monolith. Careful with that spray-painting.

A business school and, by contrast, a history or environmental sciences department may exist within the same university system, such as a large public university (like the U of Michigan), but only one of these churns out players who seek to do well for themselves and their shareholders, at the expense of everything and everyone else.

A political science department and, by contrast, an English or literature department tends to co-exist as discrete entities within the same university system, and that’s typically the end of it, as intra-institutional siloing of knowledge is still extremely commonplace within academia.

Likewise, folks who are accepted into an undergraduate school, but lack financial resources to begin and complete study, end up co-existing on the same campus with folks who come from money and/or have parents to pay for their full ride. The latter may have family who created named endowments for that school (in effect, making them nepo babies with all-but-automatic admission). The former are, often, on campus and doing service-based labour, like foodservice and library pages, to pay for their tuition and on-campus housing/food plan, servicing students in the latter group.

Last word:

When making these fatuous takes, maybe next time try to fine-tune them — and use cited, verifiable examples.

It may be beyond a cynic’s vantage, but there are still people who want to continue to learn, to research, to teach, and to write after they finish high school. It’s not a conspiracy to want to pursue learning instead of pursuing political power or material wealth, nor is it illogical. They are not the people engaged in the dirty business you describe, nor should they be tarred by that brush. They seek enlightenment and understanding as probably the highest expression which humanity can ever reach.

Save the tar and feathers for those who treat university as a means to the end of gaining power and wealth, not for everyone else. Thanks.
 
It may not be a bubble that bursts, but something that just dies out slowly (or fast) for lack of interest. Everyone became afraid of missing out. Lots of time and money lost. But what's really got to me is a dislike of AI and all things associated with it, as it has cost me money. I recently bought a laptop with Copilot+ which promised me a lot of things and delivered zilch. I'm not happy about the money I lost on the deal. It sort of sums up the whole mess.
I can certainly understand the letdown of Copilot AI overhype. I turned it off and removed it from all of my Windows 11 machines, via the registry. But I work in IT, so I’m a bit wary about the nature of what’s called “AI”. I do sympathize with you about the underwhelming state of the product that you purchased; no doubt Copilot was supposed to be part of the value proposition there.

But even if AI goes out with a fizzle, I think the end result will be the same. We’ll finally see businesses backpedal on the half-baked AI chatbots they rolled out while the iron was hot. Most of these businesses have half-baked products already, so why adding an untrained chatbot was going to help, I don’t know. All I know is that UI space is at a premium on iPhone, so why would Amazon waste so much precious app real estate shoehorning in the interface for Rufus? It’s a waste that serves no one and, worse, no purpose.

On the topic of AI hatred, I suppose it depends on what you mean? I don’t hate “AI” so much, personally. But I do hate most chatbot iterations. I also hate that the tech is shoved into everything without consent or an off button. Amazon’s Rufus is just one example. It’s not good. It doesn’t work well, and I know how to comparison shop without a superfluous piece of technology. If others like it, fine. But where is the off button? It took a few more steps, but I was finally able to remove Copilot and Copilot search in Windows. But there should have been an obvious off button. It seems that macOS will make “AI” add-on features opt-in. Apple is the first company I’ve seen where the AI product is off by default.

This should be obvious. The second, but still acceptable, option is for the AI tool to be turned on with an obvious off button. The biggest no-no goes to companies like Snap. You have to pay to turn off their AI snoopware. It’s always listening, and I wouldn’t be so naive as to think that these things aren’t pulling telemetry all the time. I don’t know if that’s what you mean by AI hatred. But I certainly don’t want so-called AI technology forced upon me if the best purpose it serves is to spy on me. These tools certainly aren’t benefiting me as advertised.
 
Right now, LLMs are good for coding, as long as you set your expectations correctly and know how to coax them into good results. They're somewhat good for language learning, but they're all too polite to relentlessly correct me. As for writing, I can't let them write anything for me because it doesn't sound like me at all. If I were a CEO or manager who cranks out corporate-speak memos all day long, it'd probably be great.

It's fine for boilerplate documentation of code, which I'm too lazy to do and tend to be inconsistent with in format, etc.
 
  • Like
Reactions: zevrix
A lot of the arguments here are also similar to:

  • Why would I buy an automatic? It costs more, and it's better to just row your own gears! I pay better attention to the road and am less distracted.
  • Why would I want Photoshop/Illustrator/Wacom? They make art too "easy". It's better to just draw my own art and then scan it in when I'm done.
  • Why would I write something on a computer and email it? It's less personal; I'd rather write it by hand and mail it.
  • Why would I want to auto-balance the colors on my photo and apply filters? It's better if I just spend 20–30 min per photo and do it manually.
  • Why would I ever want a digital camera? Film is massively better.

Every time technology evolves in ways that make it easier for non-experts to do something, the "experts" complain that it's just "not the same" or that the old way was somehow "better", fully ignoring the new freedom and abilities it gives non-experts. Mostly because the small, simple stuff the "experts" saw as their bread and butter is no longer beyond the capabilities of the non-expert. Are the results 100% the same? Probably not, but being faster, cheaper, and 70–80% as good is more than good enough for most people.

My dad is dyslexic and can barely spell. He uses text-to-speech and then uses AI to improve his emails, because he likes that it makes him sound and look like he is not at the literacy level of a 5th grader.
 
  • Like
Reactions: Stefdar
A lot of the arguments here are also similar to…

Without specific, quote-citations, this doesn’t really go anywhere.

And with all due respect to your father and for bringing him into your argument, you’d do him more proud by going into the specifics already discussed here which posit, as you broadly stroked it, “The experts complain the hoi polloi can now do things as well as they can,” when discussing aspirational uses of LLMs as surefire ways to do everything from, idk, “write my long-answer responses during finals,” to “drive me to work on the most chaotic stretch of roads during a bad storm,” to “do photography or film for me which makes the use of digital and film cameras moot,” to “do original research, unprompted, publish it with peer review (other LLMs?), and once we do that, let’s phase out tertiary, secondary and, eventually, primary schooling and education for our creator, the meatsacks.”

Or something.

It’s not untoward to argue how tertiary education fulfils a basic human need for a better understanding, for those who want to have that better understanding. No amount of leaning on LLMs can substitute for that purpose, pursuit, or function, and no amount of relying on an LLM, for someone who dropped out of higher learning part-way through secondary school, can fill that gap. That gap involves a lot of work, time, and original, non-binary, abstract thinking, reasoning, and problem-solving — with only the latter being a place where strict rules can be applied, and where something like an LLM can be a useful tool, particularly around quantitative analysis.

And to get this back on point: having no opt-in for cloud-based LLM processing, in an OS on one’s own device, such as running Sequoia on a Mac, is not OK. As with any third-party tool handled remotely, this should be an opt-in, elective feature, not an opt-out one. Even the Siri assistant has been an opt-in component of macOS.
 
  • Like
Reactions: houser
AI is a no-go for me!

I can't use Monterey to type professionally since that OS changes words automatically
which impedes my creative objective.

I can type more words about how dangerous AI can be
but this new "selling tool phrase" is really not AI since we humans are behind the keyboard.

when is the term "AI-Pro" being released by , next year?
 
AI is a no-go for me!

I can't use Monterey to type professionally since that OS changes words automatically
which impedes my creative objective.

You set these parameters on there, yes? (They’re the same as what was found from at least as far back as Mojave and High Sierra.)

Shutting off auto-correct is one of the very first things I configure when setting up a new OS on a system. I would love to see this become an opt-in presented to users during setup, alongside the steps asking whether they wish to use Siri or send usage analytics to Apple.
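For anyone who prefers doing it in Terminal rather than System Settings, these long-standing global preference keys switch off the main auto-substitutions (a sketch based on documented `defaults` keys; Apple can change or ignore them in any given macOS release, and apps generally need a relaunch to pick them up):

```shell
# Disable system-wide auto-correct and related automatic text substitutions.
# "-g" writes to the global (NSGlobalDomain) preferences domain.
defaults write -g NSAutomaticSpellingCorrectionEnabled -bool false
defaults write -g NSAutomaticCapitalizationEnabled -bool false
defaults write -g NSAutomaticPeriodSubstitutionEnabled -bool false
defaults write -g NSAutomaticQuoteSubstitutionEnabled -bool false
defaults write -g NSAutomaticDashSubstitutionEnabled -bool false
```

The GUI checkboxes under System Settings → Keyboard toggle the same underlying preferences, so either route ends up in the same place.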
 
You set these parameters on there, yes? (They’re the same as what was found from at least as far back as Mojave and High Sierra.)

Shutting off auto-correct is one of the very first things I configure when setting up a new OS on a system. I would love to see this be an opt-in asked of users when they go through the steps of being asked whether they wish to use Siri or have analytics of use sent over to Apple.
Thanks!
I think one of my MacBooks has the parameter set for auto-spell so I don't "typo" on this forum; could be the 2010 MB Air.

As far as the latest OS, I know I have the "hey s, finish typing my sentence!" feature turned off,
since that is too much AI for anyone!

Even with Monterey there are several things and functions that were automated, but I turned them off.
Sonoma and Ventura were too plastic and robotic for me, and I won't even look at Sequoia.

Siri is a no-go on everything. The next and last step is to turn off the autoplay on the HomePod mini;
when I pick that up to move inside and unplug it, then I'm AI-free!

and you?
 
So far, I find that AI-generated text content is usually awful. It delivers a poor version of what I can find in a Google search, and it often misses important details and context.
 
  • Like
Reactions: AAPLGeek and Arran
Artificial intelligence cannot think because it has no soul; there, I’ve said it.

Artificial General Intelligence is analogous to the way people think. Original thinking. If I’m wrong and AGI could exist, it ought to be resisted and regulated at all costs; I’ve seen far too many Science Fiction stories where the only possible outcome is horrible to humans. Just think what would happen to your soul if you were fixed in one place, could think and couldn’t act directly on your environment - you’d go crazy. :confused:


Artificial intelligence (LLM) is only blind algorithms which use as their input massive, massive amounts of data and simply output what statistically “ought“ to be the next word. There’s no thought there.
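For illustration, that “statistically next word” idea can be sketched as a toy bigram counter in Python (deliberately simplistic; real LLMs learn distributions over tokens with neural networks, they don’t literally tally word pairs, and the tiny corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the statistically most likely successor.
# No thought involved -- just frequency statistics.
corpus = "the cat sat on the mat the cat ran to the cat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # Return the most frequent word observed after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat": it follows "the" 3 times vs "mat" once
```

Scale that basic idea up by billions of parameters and trillions of tokens and you get something far more fluent, but the objective is still “predict a plausible continuation”, not “think”.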

As far as I’m concerned, people need to be given the latitude to think, and thinking is hard. The hard problem is going to be setting up protocols where LLM output is watermarked permanently and is programmatically distinguishable from original thought. The Internet wasn’t designed to be secure and verifiable, and that means it’s easy to fake anything.

Artificial intelligence (machine learning) is more useful because it is far more tuned to a particular task. Think of extracting the piano track from a cassette tape: the surviving members of the Beatles wanted to use the last song John Lennon sang before he was assassinated. It took 22 years for machine learning to get to the point where that could be done. And when it was able to be done, it was perfect in its output. And that’s a useful thing, and potentially more private.
 
  • Like
Reactions: cool11
IN WHAT UNIVERSE? 🤦‍♀️

Would this have been the universe where capitalism never took off?



Prior to 1980, even American tuitions,


Calm down...chill


Last word:

When making these fatuous takes, maybe next time, try to fine-tune it — and use cited, verifiable examples.

I posted my opinion. Opinions don't need citations, just like yours didn't.
 
Yep. Nigerian princes will use AI to crank out ever-more convincing scam emails/posts/tweets and the rest of us will need Defensive AI to zap ‘em before we even see them.

Copy-and-paste templates are far more efficient, and that's how nearly all spam and scams work. AI is slower and more costly.

We’re headed for an arms race of bad AI vs good AI. A race none of us want, but will pay for nonetheless.

Nah. Spam and online misinfo are as old as the web, and they're faster and cheaper when they come from the mouths and fingers of people and their dear leaders.
 
Why would you allow the slow degeneration of your brain and thinking skills by giving AI the power to articulate your thoughts?

AI will create a generation of people with dementia.

Cars haven't caused people to unlearn how to walk or made them fat and lazy. They are just a tool that allows you to travel larger distances or carry more stuff. AI won't cause people's brains to degenerate or cause dementia. It's a tool that allows them to achieve more things.

Steve Jobs once said "the computer is a bicycle for the brain". And AI is a car for the brain.
 
  • Like
Reactions: cool11
Cars haven't caused people to unlearn how to walk and made them fat and lazy.

Some peer-reviewed fact checks:

  • “Driving: A Road to Unhealthy Lifestyles and Poor Health Outcomes”
  • “Drivers to Obesity — A Study of the Association Between Time Spent Commuting Daily and Obesity in the Nepean Blue Mountains Area”
  • “Drivers Toward Obesity: A Systematized Literature Review on the Association Between Motor Vehicle Travel Time and Distances and Weight Gain in Adults” [free access link]

And there are scores more. But they all point to another elephant in the room: increased physical inactivity (such as being behind the wheel on long commutes, or total reliance on motor vehicles for all travel) contributes negatively to physical and mental well-being outcomes.

And although one could continue kicking this point long after it’s down on the ground and bloodied (specifically, on topics like the public safety and public health risk factors from drivers who operate motor vehicles on non-highway streets where the presence of others also using those spaces is an unavoidable given, putting the latter at greater mortal risk for harm); or after it’s dead (use of screen-based devices and consoles contributing to higher casualty outcomes and mortality rates for people and other life outside an implicated vehicle); or after it’s buried (pollutants from driving, including micro-particles from airborne vulcanized rubber and brake pads — i.e., braking and accelerating — share a positive relationship between public respiratory health decline and proximity to high-traffic corridors, such as major arterials, before factoring internal-combustion engine exhaust particulates), I think the above is enough of a drubbing for now. :)


They are just a tool to allow you to travel larger distances or carry more stuff.

“Supersize me, daddy!”


AI won't cause people's brain to degenerate or cause dementia. It's a tool that allows them to achieve more things.

[Citations needed.]

Also, please specify between AGI and LLMs here. Cheers.

Steve Jobs once said "the computer is a bicycle for the brain". And AI is a car for the brain.

Oh dear… we got ourselves a live one here. 😆
 
Cars haven't caused people to unlearn how to walk and made them fat and lazy. They are just a tool to allow you to travel larger distances or carry more stuff. AI won't cause people's brain to degenerate or cause dementia. It's a tool that allows them to achieve more things.

Steve Jobs once said "the computer is a bicycle for the brain". And AI is a car for the brain.
I think if people use AI to help them research and discover information faster then that’s a good thing. As long as they verify the AI isn’t telling lies before using that info. That still takes effort.

On the other hand, if folks are lazily cutting and pasting from AI to communicate with others, then who is really speaking? Not the originator. They may as well just send a link to something somebody else said on the web.

Yes, that would be a bit like the old days of receiving emails from your “friends” starting, “FW: FW: FW: FW: FW: FW: FW: FW: FW: You wont believe this, LOL!!!!!“. At least those were self-labeled as trash.
 
  • Like
Reactions: Bungaree.Chubbins
AI is a no-go for me!

I can't use Monterey to type professionally since that OS changes words automatically
which impedes my creative objective.

Nothing is impeding your creative objective.

And some people, including the disabled, really need assistance. Remember this guy...

 
You should stop harassing people on here. Your rebuttals are dripping with sarcasm and your own opinion, not facts.

I should be able to oblige once certain folks arrive at this discussion with better-prepared, materially backed case arguments, and, when those arguments are made, come prepared with the research data and peer-reviewed references that back them.

Mouthing off silliness like “AI is a car for the brain” is founded in nothing but magical thinking by various and sundry minds (which aren’t using theirs anywhere near their fullest potential).

As for any recriminations of sarcasm, I guess I should be gentler toward people who blurt out baby-bathwater things like “universities are all a dirty business”. But nah.


Putting it another way: I could ask you, specifically, to stop being so insulting toward the case arguments, with references, I’ve brought to this discussion (e.g., “Calm down… chill”) or utilizing dead scholars as a rhetorical prop, but I feel that would amount to a fool’s errand on my behalf.
 
It may not be a bubble that bursts, but something that just dies out slowly (or fast) for lack of interest. Everyone became afraid of missing out. Lots of time and money lost. But what's really got to me is a dislike of AI and all things associated with it, as it has cost me money. I recently bought a laptop with Copilot+ which promised me a lot of things and delivered zilch. I'm not happy about the money I lost on the deal. It sort of sums up the whole mess.
And to be clear, I think LLM and machine-learning tools will persist in the areas they excel at, which someone mentioned in a prior reply:
patterns in static (think SETI), traffic patterns, weather patterns, and complex engineering challenges. Things where a human mind can use a hand finding intricate or subtle patterns in chaotic systems.

I am sorry about the Copilot+ PC. Microsoft really can't integrate Copilot deeply into Windows in a way that can change system settings or do anything too integrated with the system, because then Copilot would be a vector through which your PC could be compromised. Therefore it's hard to get these new PCs to do much right now. They have the neural processor (engine/core, whatever), and allegedly that will somewhat future-proof your PC for all the wonderful so-called "AI" features that are coming down the pike.
 