Apologies for not reading through the thread so I may be repeating other comments.

AI is a tool, and all tools merely amplify function. So whether a tool is constructive or destructive depends entirely on how it is used by people. (I would even include weapons as tools, because I would consider defense constructive.)

That said, some tools are extremely powerful. You don’t let a child drive a car, and you don’t let an individual own a nuke. So when tools are powerful enough they require regulation to minimize people getting hurt.

Seeing AI in action and its potential, I think most would agree it fits into this category. So the question we really have to answer is: how much and what kind of regulation do we need? Can we even effectively regulate AI? One distinction about AI is how potentially accessible it is (or can be), much like the internet itself.

This is definitely a needed discussion right now.
 
It wouldn't be if it had been put to the proper use.

The whole point of AI was to enslave it to do boring, menial personal jobs like filing your taxes or optimising your heating schedule based on weather patterns, so we then had more time to invest in creative pursuits. Instead we taught it creative pursuits and left ourselves with all the boring jobs.

If we're being honest with ourselves, the end of humanity won't come at the hands of the T-1000 but of self-service checkouts.
 
Let's be realistic here: what we have here is not AI. It's a bunch of equations that you shovel garbage and weights into, and something that sounds or looks plausible pops out the other end.

This has value and risk associated with it. At the moment people are selling the value and trying to sweep the risk under the carpet.

The risks are mostly financial: the cost of training the models and the cost of running them. The entire industry is betting that technology investment can sweep both of these concerns away and produce a viable product. But that assumes the product has value. Past the initial novelty of playing with it there's a taper-off in value, which suggests that break-even may never occur. At that point the whole industry is dead. There are of course secondary risks, such as misleading responses and inappropriate uses of the technology, which will eventually be dealt with by regulatory bodies.

However, the hype-riding investors are already bailing, which says where this is really going. They pumped up the valuations because it was an emerging market, which is what gave people so much faith in it.

So we will soon be entering, what is it now, the third AI winter, once again the result of overstating the value and the claims while hiding the risks.

The death knell will be Microsoft removing Copilot from Windows in 2027 because no one wanted it, paid for it, or used it. Apple will keep some light integration that remains only in small but appropriate cases.
 
I think it’s an issue for the reasons you mentioned, in terms of students cutting corners in education. I’d worry about the quality of medical and engineering students in the future if they use AI to pass exams, etc.

My biggest concern for society is social media, as I think it’s ruined the ability of young people to communicate properly. There are four people in our office under the age of 25, and they’re lovely and chatty in Teams chats and on email, but in person they are so socially awkward! They also refuse to ring customers, or try to avoid ringing them, I should say.
 
AI is dead in the water without the massive databases of information and training data it is fed, so is it really AI, or just very highly advanced search and retrieval? Natural language parsing has always fascinated me (remember the super old text adventure games like Zork?), but I wonder how some of these systems would work with access only to the data they have acquired since they were fired up.

Personally I think this is going to lead to the even further dumbing down of society.
 
Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.
Yeah, instead of spending hours searching Google for questionable sources of information, the youth of today just get their questionable information near-instantly from an AI. 😤

They should do their homework the old fashioned way: bribe the school nerd to give them the answers. Or maybe use Cliffs Notes, if those still exist. These kids and their fancy AI... in my day, we didn't need technology to blow off our homework or to weasel our way out of "properly" studying! 👴

(serious answer: AI isn't any more of a threat to society than smartphones, or YouTube, or widespread Internet access, or television, or the printing press, or any other countless technologies used to spread information or misinformation. But writing articles about "AI safety," as well as general fear-mongering about new technology, is a great way to get clicks/views and make money, hence why so many people do it.)
 
The threat is that AI will replace the human workforce in pretty much every field. To me it’s the equivalent of replacing animals with machines in the early Industrial Revolution.

As in every revolution those who don’t embrace these radical changes will be left behind whether they like AI or not.
 
If it actually existed, maybe. I am highly skeptical of any claims that GAI will ever really exist. There's a qualitative gap between the way our brains think and the way computers "think" that seems unbridgeable. Marketing terms for LLMs and processing units pretend the computers are actually "thinking" in the same way as our neurons, but they simply aren't. Even quantum computers, if they can even be made useful, won't be any closer to actual AI.
 
The threat is that AI will replace the human workforce in pretty much every field. To me it’s the equivalent of replacing animals with machines in the early Industrial Revolution.

As in every revolution those who don’t embrace these radical changes will be left behind whether they like AI or not.
I read somewhere that the new "big job" will be knowing how to properly explain what you want to the AI so it spits out the correct stuff.

I also agree that the image-generating AI stuff sort of goes against the human condition of being creative. However, some of the "AI" tools coming out (in quotes because, is it really AI?) are actually fairly helpful.
 
When ChatGPT came out, I was intrigued and tried it out a bit - very cool stuff. But as more and more companies add AI to their software, I am becoming less and less interested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays and doing reporting, not to mention the lack of factual verification, it just seems like we are being dumbed down to little more than entertainment consumption.

What are your thoughts?
There has been a lot of public discourse about the benefits and dangers of AI. There is one guarantee: it will be used for evil, and it will be used for good, just like any technology. I’m most worried about the further unraveling of society if we cannot agree anymore on what is true and what is patently false. 🤷🏻
 
It wouldn't be if it had been put to the proper use.

The whole point of AI was to enslave it to do boring, menial personal jobs like filing your taxes or optimising your heating schedule based on weather patterns, so we then had more time to invest in creative pursuits. Instead we taught it creative pursuits and left ourselves with all the boring jobs.

If we're being honest with ourselves, the end of humanity won't come at the hands of the T-1000 but of self-service checkouts.

It's almost like techies don't want AI doing their jobs, so they keep emphasizing how AI will make creatives obsolete: no more writers and artists, we don't need those anymore, but we still need coders! AI can never learn to code! Oh wait...
 
If it actually existed, maybe. I am highly skeptical of any claims that GAI will ever really exist. There's a qualitative gap between the way our brains think and the way computers "think" that seems unbridgeable. Marketing terms for LLMs and processing units pretend the computers are actually "thinking" in the same way as our neurons, but they simply aren't. Even quantum computers, if they can even be made useful, won't be any closer to actual AI.
It’s only a matter of time, IMHO. The number of nodes in our brains will fairly soon be overtaken by silicon equivalents, at speeds far exceeding our thinking capabilities. The problem is: how would we even begin to comprehend what the machine is thinking? To an AI, our way of thinking might seem utterly insane!
LLMs are only the beginning, I think; first steps. There is no fundamental reason why silicon cannot become intelligent, or worse: self-aware.
 
I read somewhere that the new "big job" will be knowing how to properly explain what you want to the AI so it spits out the correct stuff.

I also agree that the image-generating AI stuff sort of goes against the human condition of being creative. However, some of the "AI" tools coming out (in quotes because, is it really AI?) are actually fairly helpful.

AI is trained on all of human history, literature, art, religion, politics, and science; that’s why it’s so powerful.

Now it’s text-to-video. This stuff is the equivalent of the internet in its infancy. In a few years it will be very hard to tell real-life footage from computer-generated footage. IMO, stock photography or videography as a profession has no future.

 
I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Siri is fairly good at grade school math and definitions.
How is asking Siri any different from asking Yahoo or Google 25 years ago?
Especially for simple definitions.
I mean, you’re barely talking about AI.
Grade school math? You can just use a calculator; really, no different.
People of all ages have been using calculators for literal decades; it’s not the downfall of society.
 
This morning I was rather dismayed when I discovered that an article about Google’s new Gemini AI app’s apparent racial bias was actually serious. I thought it was a joke.

This would be funny if it didn’t actually happen.

 
This morning I was rather dismayed when I discovered that an article about Google’s new Gemini AI app’s apparent racial bias was actually serious. I thought it was a joke.

This would be funny if it didn’t actually happen.

Racist AI (or the opposite extreme) 😂 who saw that coming? Well, I didn’t, anyway, haha.
 
Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.
Now I get why they showed me all those math problems at school, when a calculator could solve most of them.
The older generation wanted us to learn the process that hides between the problem and the result. And now we are the generation of parents who want our children to learn and understand how AI is made, and how to solve problems by themselves without it.

Using AI all the time will make us lazy, and dumber as a consequence.

I for one am a bit guilty... I use it a lot, to write code for instance, and my god have I been surprised at how many things I've learned with it. But before integrating the code into my app, I read it, try to understand it, ask for more comments if needed, and always ask myself: "does it do what I intend it to do?". If I don't understand it (this has happened to me once...), then I run a bunch of unit tests on it to see if it ends up failing at some point.
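That workflow, pinning down an AI-suggested snippet with unit tests before integrating it, can be sketched roughly like this (the `slugify` helper here is a made-up example standing in for assistant-generated code, not output from any real tool):

```python
import re
import unittest

# Hypothetical AI-suggested helper: turn a title into a URL slug.
# Before trusting it, we pin its intended behavior down with tests,
# including edge cases the assistant may have silently mishandled.
def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single
    # hyphens, and strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace_and_punctuation(self):
        self.assertEqual(slugify("  AI --- Threat?  "), "ai-threat")

    def test_empty_input(self):
        # An edge case worth checking explicitly.
        self.assertEqual(slugify(""), "")

# Run the suite with: python -m unittest <this_file>.py
```

If a test fails, that's the signal to either ask the assistant to fix the code or to stop and understand it properly before shipping it.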
 
I was thinking it would mostly just create annoying spam and trick a few gullible people, but then I saw the promo page for Sora, where they casually generated fake historical footage in 16:9, meaning you could crop the watermark out by cropping to the expected 4:3. That made me realize it’s going to be so much worse than I expected. Also consider that for Instagram Reels, YouTube Shorts, and TikTok, videos like this would be cropped to 1:1 or even 9:16. It’s going to be rampant. The companies pushing these services have zero consideration for the fact that they are creating a post-truth world.



“But metadata!!” makes no sense as an argument here, because why would footage from the 1900s have “not AI generated” metadata attached? How could you prove it wasn’t retroactively added to fake footage? How could you prove “AI generated” metadata wasn’t removed from an AI video? And if this is moderated, who decides what the truth is?

The concept of a more advanced model that could actually replace the majority of jobs is another level. It could be very good for society, but any solution to the problem of tons of people being unemployed, with there physically not being enough jobs, will sound too much like communism for anyone in charge to like. It would get pretty dicey for a while.
 
Perhaps I’m just being a bit paranoid, but this whole response reads like it was AI generated
Heck yeah. "Perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate": semantically precise, grammatically correct, and spelled right? You bet it's an AI response... I figured that was the joke.
 
Yeah, instead of spending hours searching Google for questionable sources of information, the youth of today just get their questionable information near-instantly from an AI. 😤

They should do their homework the old fashioned way: bribe the school nerd to give them the answers. Or maybe use Cliffs Notes, if those still exist. These kids and their fancy AI... in my day, we didn't need technology to blow off our homework or to weasel our way out of "properly" studying! 👴

(serious answer: AI isn't any more of a threat to society than smartphones, or YouTube, or widespread Internet access, or television, or the printing press, or any other countless technologies used to spread information or misinformation. But writing articles about "AI safety," as well as general fear-mongering about new technology, is a great way to get clicks/views and make money, hence why so many people do it.)

My concern is that AI removes the thinking part of research. When you Google a question, it often requires reading several websites and cross-referencing the answers. You also list those sites in a bibliography, which further obligates one to seek out well-respected websites.

In comparison, AI just spits out an answer and the student, or the employee, just rewrites it. I remember being in school when we switched from libraries to the internet. The instructor was OK with the internet so long as there was a variety of respected sources.


There has been a lot of public discourse about the benefits and dangers of AI. There is one guarantee: it will be used for evil, and it will be used for good, just like any technology. I’m most worried about the further unraveling of society if we cannot agree anymore on what is true and what is patently false. 🤷🏻

This is absolutely my main concern. People are taking GPT answers as if they are G-D's truth, when facts have shown that AI gets it wrong a large part of the time. Also, as was recently shown with Google, AI is subject to political motivations, as it can be programmed to put a spin on its answers.

How is asking Siri any different from asking Yahoo or Google 25 years ago?
Especially for simple definitions.
I mean, you’re barely talking about AI.
Grade school math? You can just use a calculator; really, no different.
People of all ages have been using calculators for literal decades; it’s not the downfall of society.

Siri doesn't require vetting or reading or cross-referencing. Our school doesn't permit calculators until pre-algebra. The process is very important, especially in long division and fractions.

In general, I'm all for learning help, but vastly against learning replacement. My overall concern is the dumbing down of society into brats who don't know how to do anything other than watch videos.
 
The question of whether AI is a threat to society is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Potential for Job Displacement: One concern is that AI and automation technologies could lead to significant job displacement, particularly in industries where repetitive or routine tasks can be automated. This could result in economic disruption and societal challenges if adequate measures aren't taken to reskill and retrain the workforce.
  2. Ethical Concerns: AI systems can perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate or cause harm to certain groups of people.
  3. Privacy and Surveillance: The increasing use of AI for surveillance purposes raises concerns about privacy infringement and the erosion of civil liberties. Technologies such as facial recognition and predictive analytics can be powerful tools for law enforcement and security agencies, but they also raise significant ethical and legal questions regarding individual rights and freedoms.
  4. Security Risks: AI systems are vulnerable to exploitation and misuse by malicious actors. This includes the potential for cyberattacks on AI systems themselves, as well as the use of AI to enhance the capabilities of cybercriminals and hostile state actors.
  5. Economic and Geopolitical Impact: The rise of AI has the potential to reshape global power dynamics and economic competitiveness. Countries and companies that lead in AI research and development stand to gain significant advantages, which could exacerbate existing inequalities and geopolitical tensions.
However, it's important to note that AI also offers numerous potential benefits to society, including improved healthcare outcomes, more efficient resource allocation, enhanced productivity, and new opportunities for innovation and creativity. Ultimately, whether AI is perceived as a threat to society depends on how it is developed, regulated, and deployed, as well as the broader social, political, and economic contexts in which it is used. It's essential for policymakers, technologists, ethicists, and society as a whole to engage in informed and responsible discussions about the opportunities and challenges associated with AI.
You definitely used AI to write this.
 
In a far future, I see AI replacing humans for all mundane work, allowing humans to focus solely on being happy and pushing their boundaries. It's a very dystopian thought, though. Imagine being a pet to a bunch of robots. But then again, it's a future where people might not need to prove their worth to society to survive. Until then, we're at the beginning of a harsh transition where people who cannot prove their worth to society will fail.

I once asked ChatGPT whether or not AI would entirely replace humans. Its answer was that we should strike a balance where both sides can work together.
 