Become a MacRumors Supporter for $50/year with no ads, ability to filter front page stories, and private forums.

AI should be a feature you can turn only when you need it

  • Yes

    Votes: 88 81.5%
  • No

    Votes: 12 11.1%
  • I have not decided yet

    Votes: 12 11.1%

  • Total voters
    108

amaze1499

macrumors 65816
Original poster
Oct 16, 2014
1,148
1,184
Why won’t anybody talk about the severe consequences of integrated AI for heaven’s sake?

Privacy is one thing, but why would you allow the slow degeneration of your brain and thinking skills by giving AI the power to articulate your thoughts?

You have an AI that writes Emails for you so another AI can summarise those emails on the other side? This is insane. What makes you human? Thinking for yourself. And it’s hard. But like push-ups, you need to do the push-ups yourself in order to stay in shape.

AI will create a generation of people with dementia.
 

Mollan

macrumors regular
Jul 29, 2013
117
76
The Netherlands/Italy
My feeling exactly.
AIs writing long articles on websites and AIs summarizing these long articles on the user side.

I'm not saying proofreading or restyling is wrong overall, it can indeed help in some specific context, but these tools are flattening and oversimplifying basic human interactions.

And, differently from the use of a calculator to make more complex operations, these tools won't create better or more advanced humans, but they just make even the dumb human inadvertently more productive.
 

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
You can disable it. I have never had Siri enabled on any Mac. It's only enabled on my HomePod.

Most users aren't going to use it. Most people don't care for generative AI.

The remaining people who do use it will have to read and double check the output anyway. It's just like when you have to fix auto complete errors.

The main thing I did not like about the WWDC presentation is the idea some parent will get an 'AI' to write a children's book for their kids. We have already seen Amazon Kindle spammed with these fake AI written books. Book publishers and agents are very angry at how many badly written AI books are being sent to them. It's making their work harder because they are not interested in AI written books.
 

amaze1499

macrumors 65816
Original poster
Oct 16, 2014
1,148
1,184
You can disable it. I have never had Siri enabled on any Mac. It's only enabled on my HomePod.

Most users aren't going to use it. Most people don't care for generative AI.

The remaining people who do use it will have to read and double check the output anyway. It's just like when you have to fix auto complete errors.

The main thing I did not like about the WWDC presentation is the idea some parent will get an 'AI' to write a children's book for their kids. We have already seen Amazon Kindle spammed with these fake AI written books. Book publishers and agents are very angry at how many badly written AI books are being sent to them. It's making their work harder because they are not interested in AI written books.
From the presentation, it didn’t seem like a system wide built-in feature like this one can be turned off at all?
 

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
From the presentation, it didn’t seem like a system wide built-in feature like this one can be turned off at all?

Disable Siri.

Disable the smart and auto complete features in keyboard settings.

This stuff doesn't exist in the beta at the moment. You'll see it in August.
 

houser

macrumors 6502
Oct 29, 2006
375
491
I hope a short rant is allowed and acceptable here then.

The "AI" (scare quotes as it is not really AI) race on the whole is reckless in the usual Silicon Valley way and is among other things a security nightmare. Google and Microsoft are off the chart reckless with it. Win 11 for example has a "feature" in the "Companion" that screenshots all the time, converts the screenshots to OCR and stores it all in a sql db for use with their "AI" !
Everything, including passwords and private data would be in that db if exploited or leaked.
I am sure many have not missed the fiasco with google ai searches?

Apple is less reckless it seems (mostly clean plagiarism machine) but we will surely hear of potential for new issues and especially as new exploits are bound to use "AI" in generative and adaptive ways to automate new exploits to look for loopholes that we will likely read about fairly soon. As an example, it is not difficult to imagine what happens if an exploit can to some degree access the onboard "AI" code on say Sequoia or iOS18?
Early days, but not having Apple Silicon might actually be a kind of blessing for a few years if no "AI" code can be executed on Intel. I certainly hope that API will be dormant and unusable on Intel?

It is in this context odd how much of the Silicon Valley stuff is basically illegal takes on vulnerable businesses.
From Spotify (Illegal Music company, just steal the music and bypass the musicians) to Uber (Illegal cab company, avoid regulations and worker protection laws) Crypto (Fake money for criminals), most of what tech billionares like the guy who is tanking the bird app is doing (emission rights and taxpayer funding) and lastly this Generative AI (Plagiarism machine/copyright circumventor) etc.

I'll finish with this quote from somewhere on the bird app some time ago (sorry I have found no source to credit but was in or around an interview with AI execs.):
" Trillions of dollars of market capitalization and revenue are being blown on something remarkably mediocre. If you focus on the present - what OpenAl's technology can do today, and will likely do for some time - you see in terrifying clarity that generative Al isn't a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of users."
End of rant.
 
Last edited:

Pandyone

macrumors regular
Sep 30, 2021
220
309
You have an AI that writes Emails for you so another AI can summarise those emails on the other side? This is insane. What makes you human? Thinking for yourself.

It’s functions that will work as tools. AI tools is already out there and is frequently used. What’s needed is the thinking part, analysing and learning from it.
I know of people using AI for coding, and gets the expected result but doesn’t actually understand what’s going on in the code. That could be scary consequences if malicious code is inserted between the lines.
That critical thinking/analysing will probably get lost in a few generations.

I saw a post with something like “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes”, which is something I agree with.
 

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
I saw a post with something like “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes”, which is something I agree with.

We've all seen that post by now and it's illogical.

We already have dish washers and laundry machines. No AI needed.

Yet people who have dish washers and laundry machines aren't saving so much time that they became great artists and writers. They just binge on Netflix and endlessly scroll on IG to see what the latest shoes and dances are.
 

Bustycat

macrumors 65816
Jan 21, 2015
1,247
2,953
New Taipei, Taiwan
aiartificialintelligence_2001_photo_58.jpg

What people thought about AI in 2001

genmoji__fp35gdyh38mu_large_2x.jpg

What people thought about AI in 2024
 
Why won’t anybody talk about the severe consequences of integrated AI for heaven’s sake?

Privacy is one thing, but why would you allow the slow degeneration of your brain and thinking skills by giving AI the power to articulate your thoughts?

You have an AI that writes Emails for you so another AI can summarise those emails on the other side? This is insane. What makes you human? Thinking for yourself. And it’s hard. But like push-ups, you need to do the push-ups yourself in order to stay in shape.

AI will create a generation of people with dementia.

Once a non-sentient AI device/entity is able to take over all cleaning, laundry, and taking out the rubbish, then I’ll find an AI to be a personally useful addition to my life.

(If the AI entity is sentient, then they’re exempt from all of the previous, and they’re welcome to my weekly coffee klatch.)

Tech principals are doing this wrong: don’t produce an AI to take over my higher cognitive activities like socializing, problem solving, or learning.

My feeling exactly.
AIs writing long articles on websites and AIs summarizing these long articles on the user side.

I'm not saying proofreading or restyling is wrong overall, it can indeed help in some specific context, but these tools are flattening and oversimplifying basic human interactions.

The thing about linguistic communication is these are human creations, and the quality of those creations is evident even to a casual reader when they stumble upon a generatively-produced article of text which reads nonsensical.

Like, I get how some folks bemoan, even despise writing or even long-form reading, but it’s really easy to forget how these are, in their linguistic way, forms of doing art and design.

Writing and copy editing specifically are mediums of human expression: they convey a writer’s or editor’s unique nuance of an idea or expression in ways which no AI-generated product can — much as a musician has unique nuances in how they compose and play and how a painter has unique nuances in how they throw colour onto a substrate.
 
Last edited:
  • Like
Reactions: xxFoxtail

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
Tech principals are doing this wrong: don’t produce an AI to take over my higher cognitive activities like socializing, problem solving, or learning.

You're right of course, but people who want to socialize, learn and problem solve the traditional way will continue to do so and if they use an LLM it will only be supplementary like looking up a dictionary or wikipedia.

Then there are the other folks.

In the 90s they called the internet an information superhighway that will connect everyone, make everyone smarter, all the knowledge of the world at your fingertips, bring down the cost of education.

Those predictions largely failed. Ignorance, conspiracy theories and religious extremism has not subdued. Governments have learned how to weaponize tweets to make their followers behave in even more extreme ways.

Education is more costly than ever, driving many into debt.

So there is a space for LLMs. If a politician or religious fanatic wants to post something wild on the internet and an 'AI' fact checks them automatically then those people become a laughing stock.

But a public LLM must absolutely have several features to do that:

1. Reliability.
2. Independence and not being politically correct towards someone's political/religious/economic ideology. It must be able to call out anything that has either failed, will fail, is mythological, and isn't grounded in reality.
3. International accord/universal standards.
4. Very energy efficient inference chips, not the badly inefficient GPUs Nvidia ships that are partly derived from gaming GPUs. They must use designs that are highly specialized.
 
You're right of course, but people who want to socialize, learn and problem solve the traditional way will continue to do so and if they use an LLM it will only be supplementary like looking up a dictionary or wikipedia.

The difference, of course, is dictionaries existed for centuries as a physical tome before it was evolved into a digital application. Wikipedia is an (imperfect) extension on the idea of the encyclopaedia (its technical bridge the likes of the old Encarta) — which, as well, existed for generations before the internet came about.

(And yes, it is handy, mass-wise, to have a dictionary, especially the OED, on one’s device as an application, without a need to carry a book along, and also handy to not have to lug an entire Encyclopaedia Brittanica or World Book along for the ride.)

LLMs, however, enjoy no analogue precedent — unless by precedent, we mean to say human beings.

For things like recreating the Earth’s dynamic features, digitally, in real time, an LLM-based application is indispensable and necessary. For mining long-shuttered troves of data for pattern recognitions which human minds, even the sharpest, are rarely able to pick out, a LLM assigned to this is also indispensable. A good example is identifying all possible proteins RNA could template into synthesis. Also, predicting and tweaking city-wide traffic patterns, to better foresee ways to move people about more efficiently and quickly, no matter the hour, is a place where LLMs can help transportation planners and civil engineers.

But having a LLM “app” on one’s desktop supplants or replaces no tool which already exists, and as such merits being an optional/opt-in component of an OS — not something from which one is unable to opt out.

So there is a space for LLMs. If a politician or religious fanatic wants to post something wild on the internet and an 'AI' fact checks them automatically then those people become a laughing stock.

That bridge is one which LLMs have yet to cross.

There is still an inherent bias problem and a drift problem with LLMs which makes neutral fact-checking a goal, but one which hasn’t arrived to 2024. Until then, we may expect more of these stories.

But a public LLM must absolutely have several features to do that:

1. Reliability.

Welp.

2. Independence and not being politically correct towards someone's political/religious/economic ideology. It must be able to call out anything that has either failed, will fail, is mythological, and isn't grounded in reality.

Unless there’s a way for an LLM to be programmed without even a faintest whiff of the developer or development team’s own human bias, then this is a Utopian idealization with little hope of realization.

3. International accord/universal standards.

Which means taking LLMs away from a proprietary/closed/black box realm to a completely transparent, examinable, and juried realm in compliance with both published standards (e.g., IEEE) and published protocols/deployments (e.g., ISO).

4. Very energy efficient inference chips, not the badly inefficient GPUs Nvidia ships that are partly derived from gaming GPUs. They must use designs that are highly specialized.

OK. This is a technical challenge, unrelated to the three social-oriented points preceding it.



Of course, a public — public — LLM is just that: public, like a public utility (like hydro/electricity infrastructure, water/sewage/reservoir infrastructure, transportation infrastructure, and the physical/logical internet).

What I don’t see right now are any of the leading LLM/AI principals suggesting that’s where LLM development should, ultimately, be headed — not so long as Bay/Wall/Bond Street are permitted any leverage in that discussion.

So again, a compulsory LLM client, one dependent upon proprietary and/or for-profit LLMs, integrated into an OS — when that LLM is still chock-full of the above issues and revenue-adjacent liability — is really inventing and putting the cart before the horse (in ecosystems where the horse hasn’t yet been introduced).

As such, it has no place being integrated into an OS — unless of course the objective of that integration is to amplify the aforementioned problems, only at a mass user scale and to yield more negligent misinformation and conscious disinformation by bad (human) actors.
 
Last edited:
  • Like
Reactions: rehkram

rehkram

macrumors 6502a
May 7, 2018
813
1,145
upstate NY
The remaining people who do use it will have to read and double check the output anyway. It's just like when you have to fix auto complete errors.
That's a darkly hilarious insight. Think of AI's potential for creating next generation auto-complete errors capable of destroying the world, only more efficiently and comprehensively. :)
 
  • Haha
Reactions: B S Magnet

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
The difference, of course, is dictionaries existed for centuries as a physical tome before it was evolved into a digital application. Wikipedia is an (imperfect) extension on the idea of the encyclopaedia (its technical bridge the likes of the old Encarta) — which, as well, existed for generations before the internet came about.

(And yes, it is handy, mass-wise, to have a dictionary, especially the OED, on one’s device as an application, without a need to carry a book along, and also handy to not have to lug an entire Encyclopaedia Brittanica or World Book along for the ride.)

LLMs, however, enjoy no analogue precedent — unless by precedent, we mean to say human beings.

For things like recreating the Earth’s dynamic features, digitally, in real time, an LLM-based application is indispensable and necessary. For mining long-shuttered troves of data for pattern recognitions which human minds, even the sharpest, are rarely able to pick out, a LLM assigned to this is also indispensable. A good example is identifying all possible proteins RNA could template into synthesis. Also, predicting and tweaking city-wide traffic patterns, to better foresee ways to move people about more efficiently and quickly, no matter the hour, is a place where LLMs can help transportation planners and civil engineers.

But having a LLM “app” on one’s desktop supplants or replaces no tool which already exists, and as such merits being an optional/opt-in component of an OS — not something from which one is unable to opt out.



That bridge is one which LLMs have yet to cross.

There is still an inherent bias problem and a drift problem with LLMs which makes neutral fact-checking a goal, but one which hasn’t arrived to 2024. Until then, we may expect more of these stories.



Welp.



Unless there’s a way for an LLM to be programmed without even a faintest whiff of the developer or development team’s own human bias, then this is a Utopian idealization with little hope of realization.



Which means taking LLMs away from a proprietary/closed/black box realm to a completely transparent, examinable, and juried realm in compliance with both published standards (e.g., IEEE) and published protocols/deployments (e.g., ISO).



OK. This is a technical challenge, unrelated to the three social-oriented points preceding it.



Of course, a public — public — LLM is just that: public, like a public utility (like hydro/electricity infrastructure, water/sewage/reservoir infrastructure, transportation infrastructure, and the physical/logical internet).

What I don’t see right now are any of the leading LLM/AI principals suggesting that’s where LLM development should, ultimately, be headed — not so long as Bay/Wall/Bond Street are permitted any leverage in that discussion.

So again, a compulsory LLM client, one dependent upon proprietary and/or for-profit LLMs, integrated into an OS — when that LLM is still chock-full of the above issues and revenue-adjacent liability — is really inventing and putting the cart before the horse (in ecosystems where the horse hasn’t yet been introduced).

As such, it has no place being integrated into an OS — unless of course the objective of that integration is to amplify the aforementioned problems, only at a mass user scale and to yield more negligent misinformation and conscious disinformation by bad (human) actors.

Everything I said was a big 'If' just like how the internet's introduction came with big Ifs that failed
 

Paradoxally

macrumors 68000
Feb 4, 2011
1,981
2,891
Everything I said was a big 'If' just like how the internet's introduction came with big Ifs that failed

They failed for one simple reason: the real world is not a utopia. All those ideals were utopias.

The reality is the internet is just a reflection of our real world selves (we see that with social media), with all virtues and especially all vices.

And greed is the biggest one of all, which has allowed large tech companies to control vast parts of the Internet to their will.

AI is the same: it is trained on our data, hence it will be full of bias. You cannot have a completely neutral LLM.
 
  • Like
Reactions: MacCheetah3

NEPOBABY

Suspended
Jan 10, 2023
697
1,685
They failed for one simple reason: the real world is not a utopia. All those ideals were utopias.

The reality is the internet is just a reflection of our real world selves (we see that with social media), with all virtues and especially all vices.

And greed is the biggest one of all, which has allowed large tech companies to control vast parts of the Internet to their will.

AI is the same: it is trained on our data, hence it will be full of bias. You cannot have a completely neutral LLM.
Theoretically you can if it is trained on a moderated Wikipedia style database in which people can get fired or banned. Real time and news updates are an issue though. Clarity is something historians get not news reporters.
 

MacCheetah3

macrumors 68020
Nov 14, 2003
2,254
1,201
Central MN
Privacy is one thing, but why would you allow the slow degeneration of your brain and thinking skills by giving AI the power to articulate your thoughts?
Um… I won’t go as far as to say it started with… However, the era of social media monetization has showcased just how evident we can “allow the slow degeneration of your brain.” Continuing on this track as an example…
In the 90s they called the internet an information superhighway that will connect everyone, make everyone smarter, all the knowledge of the world at your fingertips [...]
For the most part, it was, even with social media. Platforms such as YouTube were primarily people sharing how-tos and other useful tidbits they discovered (on their own or via a previous share). Platforms such as Twitter were short blogs, providing simple entertainment, inspiration, etc.

Now, the majority of our… Uh… Interactions are exaggerations, outright lies, hypocrisy, and projecting in the name of being completely self-absorbed.

To paraphrase an MIB quote:

A human is very capable. People are dumb, panicky, dangerous animals and you know it.

You have an AI that writes Emails for you so another AI can summarise those emails on the other side?
Sadly, in many instances, those communications are/would be more coherent than what’s typically spewed — in fact, it feels like every thread here has at least one illogical, baseless, parroted “planned obsolescence” claim.

In conclusion, am I worried about "AI"? No.
 
Last edited:

frou

macrumors 65816
Mar 14, 2009
1,377
1,972
In general, lots of software/OS features get trumpeted to the max in marketing, but in reality most people only use them occasionally. Real world use of the AI is not going to look like a never-ending demo video.
 

dburkhanaev

macrumors 6502
Aug 8, 2018
283
155
You're right of course, but people who want to socialize, learn and problem solve the traditional way will continue to do so and if they use an LLM it will only be supplementary like looking up a dictionary or wikipedia.

Then there are the other folks.

In the 90s they called the internet an information superhighway that will connect everyone, make everyone smarter, all the knowledge of the world at your fingertips, bring down the cost of education.

Those predictions largely failed. Ignorance, conspiracy theories and religious extremism has not subdued. Governments have learned how to weaponize tweets to make their followers behave in even more extreme ways.

Education is more costly than ever, driving many into debt.

So there is a space for LLMs. If a politician or religious fanatic wants to post something wild on the internet and an 'AI' fact checks them automatically then those people become a laughing stock.

But a public LLM must absolutely have several features to do that:

1. Reliability.
2. Independence and not being politically correct towards someone's political/religious/economic ideology. It must be able to call out anything that has either failed, will fail, is mythological, and isn't grounded in reality.
3. International accord/universal standards.
4. Very energy efficient inference chips, not the badly inefficient GPUs Nvidia ships that are partly derived from gaming GPUs. They must use designs that are highly specialized.
We are already seeing problems with LLM ML tools:
1. Reliability: LLM ML "hallucinates" at best when it's being unreliable and lies and makes up things as you move toward the worst end of the stick. There can't be fact-checking if the LLM ML application doesn't know the facts itself. At the absolute worst end of the scale, they use scary or threatening language.
2. Independence: You don't have to wait for the government to put its finger on that scale. Gemini is made by Google. That's a private company, but you know the political bent of their engineers because Gemini's generative art feature didn't create black WW2 soldiers from Germany, of its own accord; this is a form of social engineering. If the engineers can't be impartial, the government actors certainly aren't solely to blame.
3. International accord/standards: Set by whom? The U.N.? Something from the E.U.? Supranational governmental bodies are non-elected and held accountable to no one, setting the standards by an LLM Manipulator In the Box. Nothing could go wrong there.
4. Energy efficient: I think ASi chips are the coolest and most energy efficient. And entire data centers of these spec'd just for ML LLM tools would still have a large environmental impact.

I think there are places and uses for ML, even the LLM variety. But I think we have to stop shoving and shoe-horning it into every device, website, and application because its there. If we can't target where it's useful and focus, then we will keep getting a crumbier Google search (it's going downhill fast), a crappier shopping experience (I'm looking at you Amazon [RUFUS]), and a suite of heavier and glitchier software from publishers like Microsoft who think everyone wants a Copilot.

But my prediction is that it's a huge bubble and it will pop. That will end this AI in everything non-sense and maybe I can send messages to friends in messenger apps without AI busy bodies popping in to add their 2 cents.
 
  • Like
Reactions: zevrix

Macalway

macrumors 601
Aug 7, 2013
4,067
2,700
But my prediction is that it's a huge bubble and it will pop. That will end this AI in everything non-sense and maybe I can send messages to friends in messenger apps without AI busy bodies popping in to add their 2 cents.

It may not be a bubble that bursts, but something that just dies out slowly (or fast) for lack of interest. Everyone became afraid of missing out. Lots of time and money lost. But what's really got to me, is a dislike of AI and all things associated with it, as it cost me money. I recently bought a laptop with Copilot+ which promised me a lot of things and delivered zilch. I'm not happy about the money I lost on the deal. It sort of sums up the whole mess.
 
Last edited:
Register on MacRumors! This sidebar will go away, and you'll see fewer ads.