
0339327

Cancelled
Original poster
Jun 14, 2007
634
1,936
When ChatGPT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more disinterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays, doing reporting and, not to mention, the lack of factual verification, it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
 
Last edited:

dmr727

macrumors G4
Dec 29, 2007
10,698
5,993
NYC
The question of whether AI is a threat to society is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Potential for Job Displacement: One concern is that AI and automation technologies could lead to significant job displacement, particularly in industries where repetitive or routine tasks can be automated. This could result in economic disruption and societal challenges if adequate measures aren't taken to reskill and retrain the workforce.
  2. Ethical Concerns: AI systems can perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate or cause harm to certain groups of people.
  3. Privacy and Surveillance: The increasing use of AI for surveillance purposes raises concerns about privacy infringement and the erosion of civil liberties. Technologies such as facial recognition and predictive analytics can be powerful tools for law enforcement and security agencies, but they also raise significant ethical and legal questions regarding individual rights and freedoms.
  4. Security Risks: AI systems are vulnerable to exploitation and misuse by malicious actors. This includes the potential for cyberattacks on AI systems themselves, as well as the use of AI to enhance the capabilities of cybercriminals and hostile state actors.
  5. Economic and Geopolitical Impact: The rise of AI has the potential to reshape global power dynamics and economic competitiveness. Countries and companies that lead in AI research and development stand to gain significant advantages, which could exacerbate existing inequalities and geopolitical tensions.
However, it's important to note that AI also offers numerous potential benefits to society, including improved healthcare outcomes, more efficient resource allocation, enhanced productivity, and new opportunities for innovation and creativity. Ultimately, whether AI is perceived as a threat to society depends on how it is developed, regulated, and deployed, as well as the broader social, political, and economic contexts in which it is used. It's essential for policymakers, technologists, ethicists, and society as a whole to engage in informed and responsible discussions about the opportunities and challenges associated with AI.
 

eyoungren

macrumors Penryn
Aug 31, 2011
29,656
28,433
When Chat GBT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more disinterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays, doing reporting and, not to mention, the lack of factual verification, it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
Different generation, different subject, same problem.

I'm Gen-X. I had a lot of teachers that gave failing grades on term papers when they discovered that Cliffs Notes had been used.

When Wikipedia first rolled around, teachers were up in arms.

AI doing your homework? What's the difference?

The next generation will always find the next thing to 'solve' their problems - those problems being how to get away with doing the least amount of work and putting in the least amount of effort.

This is nothing new. Cliffs Notes and Wikipedia did not end society. AI won't either.
 

jedimasterkyle

macrumors 6502a
Sep 27, 2014
581
888
Idaho
 

Rafterman

Contributor
Apr 23, 2010
7,267
8,809
AI is only a concern if you let it be. Don't put it in charge of nuclear weapons or anything else where it can kill us. But for research, doing tedious work, etc., it's a great thing.

The only issue I see is AI fakery - AI being used to create fake things politicians or famous people say, causing harm. Or, for example, someone makes a video will, but it's faked and not the true wishes of the newly deceased. Things like that.
 

bousozoku

Moderator emeritus
Jun 25, 2002
16,124
2,403
Lard
As a software developer, I dabbled in creating programs to automate investigating and changing other programs' source code. Now, AI is being used to do something similar.

Once AI is much better at learning, it will be able to be a threat. AI is like a child, and if you can teach a child to steal, what can AI be taught to do?

If hackers can teach AI how to hack, what financial institution will be safe and what government will be safe?
 

0339327

Cancelled
Original poster
Jun 14, 2007
634
1,936
Different generation, different subject, same problem.

I'm Gen-X. I had a lot of teachers that gave failing grades on term papers when they discovered that Cliffs Notes had been used.

When Wikipedia first rolled around, teachers were up in arms.

AI doing your homework? What's the difference?

The next generation will always find the next thing to 'solve' their problems - those problems being how to get away with doing the least amount of work and putting in the least amount of effort.

This is nothing new. Cliffs Notes and Wikipedia did not end society. AI won't either.

I think there is a difference between using technology for research and having technology do the work for you.

I get that each generation has its own struggles, but I am concerned that we are leaning on technology as a replacement for actual effort.
 

antiprotest

macrumors 601
Apr 19, 2010
4,353
16,038
When Chat GBT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more disinterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays, doing reporting and, not to mention, the lack of factual verification, it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
Your son asked Siri?!

Did Siri actually offer an answer or did it just do another "here's what I found on the web"?
 

Chuckeee

macrumors 68040
Aug 18, 2023
3,075
8,754
Southern California
The question of whether AI is a threat to society is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Potential for Job Displacement: One concern is that AI and automation technologies could lead to significant job displacement, particularly in industries where repetitive or routine tasks can be automated. This could result in economic disruption and societal challenges if adequate measures aren't taken to reskill and retrain the workforce.
  2. Ethical Concerns: AI systems can perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate or cause harm to certain groups of people.
  3. Privacy and Surveillance: The increasing use of AI for surveillance purposes raises concerns about privacy infringement and the erosion of civil liberties. Technologies such as facial recognition and predictive analytics can be powerful tools for law enforcement and security agencies, but they also raise significant ethical and legal questions regarding individual rights and freedoms.
  4. Security Risks: AI systems are vulnerable to exploitation and misuse by malicious actors. This includes the potential for cyberattacks on AI systems themselves, as well as the use of AI to enhance the capabilities of cybercriminals and hostile state actors.
  5. Economic and Geopolitical Impact: The rise of AI has the potential to reshape global power dynamics and economic competitiveness. Countries and companies that lead in AI research and development stand to gain significant advantages, which could exacerbate existing inequalities and geopolitical tensions.
However, it's important to note that AI also offers numerous potential benefits to society, including improved healthcare outcomes, more efficient resource allocation, enhanced productivity, and new opportunities for innovation and creativity. Ultimately, whether AI is perceived as a threat to society depends on how it is developed, regulated, and deployed, as well as the broader social, political, and economic contexts in which it is used. It's essential for policymakers, technologists, ethicists, and society as a whole to engage in informed and responsible discussions about the opportunities and challenges associated with AI.
Perhaps I’m just being a bit paranoid, but this whole response reads like it was AI-generated.
 

eyoungren

macrumors Penryn
Aug 31, 2011
29,656
28,433
I think there is a difference between using technology for research and having technology do the work for you.
I suppose that's open to interpretation then.

Not a lot of using technology for research to go down to the book store, pick up a Cliffs Notes and then plagiarize the heck out of it. Even easier to just copy/paste from Wikipedia.

But, I suppose if you want to count driving to a store, using cash or a card to pay and then using a pen, typewriter or old computer to type things in word for word as 'research', then…sure.
 
  • Like
Reactions: drrich2

0339327

Cancelled
Original poster
Jun 14, 2007
634
1,936
I suppose that's open to interpretation then.

Not a lot of using technology for research to go down to the book store, pick up a Cliffs Notes and then plagiarize the heck out of it. Even easier to just copy/paste from Wikipedia.

But, I suppose if you want to count driving to a store, using cash or a card to pay and then using a pen, typewriter or old computer to type things in word for word as 'research', then…sure.

I think there’s a difference between looking up information from a variety of sources, compiling the relevant information and then summarizing it for your use versus asking ChatGPT to “write me an essay”.
 
Last edited:
  • Like
Reactions: Scepticalscribe

picpicmac

macrumors 65816
Aug 10, 2023
1,247
1,845
Ok, so please don't take this as an attack... but the tool is called ChatGPT.

I guess this makes me that guy...

Anyway, ChatGPT is a tool that I think is not the scariest. Manipulating images and videos will be (perhaps already is) more destructive to society, as we humans value what we see above almost anything else.
 

Mac47

macrumors regular
May 25, 2016
240
417
The question of whether AI is a threat to society is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Potential for Job Displacement: One concern is that AI and automation technologies could lead to significant job displacement, particularly in industries where repetitive or routine tasks can be automated. This could result in economic disruption and societal challenges if adequate measures aren't taken to reskill and retrain the workforce.
  2. Ethical Concerns: AI systems can perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate or cause harm to certain groups of people.
  3. Privacy and Surveillance: The increasing use of AI for surveillance purposes raises concerns about privacy infringement and the erosion of civil liberties. Technologies such as facial recognition and predictive analytics can be powerful tools for law enforcement and security agencies, but they also raise significant ethical and legal questions regarding individual rights and freedoms.
  4. Security Risks: AI systems are vulnerable to exploitation and misuse by malicious actors. This includes the potential for cyberattacks on AI systems themselves, as well as the use of AI to enhance the capabilities of cybercriminals and hostile state actors.
  5. Economic and Geopolitical Impact: The rise of AI has the potential to reshape global power dynamics and economic competitiveness. Countries and companies that lead in AI research and development stand to gain significant advantages, which could exacerbate existing inequalities and geopolitical tensions.
However, it's important to note that AI also offers numerous potential benefits to society, including improved healthcare outcomes, more efficient resource allocation, enhanced productivity, and new opportunities for innovation and creativity. Ultimately, whether AI is perceived as a threat to society depends on how it is developed, regulated, and deployed, as well as the broader social, political, and economic contexts in which it is used. It's essential for policymakers, technologists, ethicists, and society as a whole to engage in informed and responsible discussions about the opportunities and challenges associated with AI.

This post itself has to have been generated with AI.
 

drrich2

macrumors 6502
Jan 11, 2005
447
327
Anything that substantially empowers humans and has an uneven rollout has the potential to be disruptive and 'dangerous' within a society or between societies. Imagine what impact some of these developments may've had in their early times:

1.) The written word.
2.) Domestication and riding of horses.
3.) Guns (pretty much obsoleted armored knights, IIRC).
4.) Cannons (nice for battering castle walls).
5.) The Internet.
6.) Social Media.
7.) Smart Phones.

A.I. will enable deep fakes (not just porn), false allegations, the planning of crimes and a host of other nefarious agendas.

As with any strong empowering agent, it enables and magnifies what is in the hearts and minds of human beings.

So of course it's dangerous.

And just like those other things, we're still going to do it, however much hand-wringing the Luddites do along the way.
 
  • Like
Reactions: Populus

Populus

macrumors 603
Aug 24, 2012
5,986
8,449
Spain, Europe
Short answer is: yes, of course, it eventually will be. At least in my opinion. The intelligence of these AIs is growing at an exponential rate.

Now I don’t feel like delving into the threats it may pose to our society as we know it, because I can’t be typing a lot at the moment (I’ll probably come back to the thread tomorrow), but I’ll leave you with this interesting interview:


EDIT: the previous comment made some interesting points in line with what I think. Just imagine a world where you can no longer believe what your eyes see or what your ears hear. It’s pretty much disruptive, and not in a good way.
 
  • Like
Reactions: -DMN-

boss.king

Suspended
Apr 8, 2009
6,394
7,648
When Chat GBT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more disinterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays, doing reporting and, not to mention, the lack of factual verification, it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
I think generative AI/LLM tools will have a pretty significant negative effect on society, but the school plagiarism example doesn't actually worry me. Kids have always been able to cheat. This is just a new form of cheating that's going to need its own countermeasures (bring back in-person, hard-copy monitored tests, for example).

Where I think AI is going to do real damage is with art, journalism, and employment in general.

There's a growing obsession with the concept of productivity that disregards the quality of the final product, and these tools do nothing to combat that. They flatten and homogenise creation to the lowest common denominator and turn jobs from doing actual work to babysitting untrustworthy tools that crank out garbage quickly, then using those same tools to decipher and convert that garbage back into useless bullet points on the other end.

At the same time, workers are laid off while the remaining staff have to pick up the slack that these tools leave behind. Resources are burned pursuing largely non-beneficial AI tools that could go to making meaningful improvements to all sorts of things. And at the end of all of that, once people are fired and creative works are devalued and businesses are churning out crap at an alarming rate, these things don't even make any money.

To be clear, I'm not saying all machine learning is bad, but the Chat-GPTs, Bards, Geminis, Bings, and Copilots of the world are absolutely a net negative.
 

eyoungren

macrumors Penryn
Aug 31, 2011
29,656
28,433
I think there’s a difference between looking up information from a variety of sources, compiling relevant information and then summarizing it for your use and asking Chat GBT to “write me an essay”.
That's fine. As for me, I see no difference between that and plagiarizing - which is what a lot of people did with Cliffs Notes. And teachers got quite good at recognizing direct copy/paste from Wikipedia.

You may call that looking up relevant information. I call it plagiarizing and taking the cheap way out for a grade. It's particularly annoying knowing that I did my research while fellow students simply wrote down everything from Cliffs Notes and didn't read the number of books required.

But call it what you want.
 
  • Like
Reactions: Chuckeee

Pakaku

macrumors 68040
Aug 29, 2009
3,273
4,844
AI is a tool that is misaligned in a world full of very evil people who only want to exploit it for themselves.

Between all of the deep-fakes, plagiarism, art theft… yes, it’s all awful, and needs regulation, but AI doesn’t do that on its own because it can’t think. It’s not intelligent, it’s just an algorithm. Just like how a gun isn’t intelligent, it’s just a weapon. Evil people use AI and weapons for evil purposes. The people designing generative AIs are also complicit in those evils…
 
Last edited:
  • Like
Reactions: Matz

HiVolt

macrumors 68000
Sep 29, 2008
1,764
6,238
Toronto, Canada
IMO it's a threat to what people will perceive as true information. It's bad enough we deal with massive online misinformation on a huge variety of issues.

AI is not artificial intelligence. It operates on what it's been trained on, on who trained it, and with what agenda.

Google's Gemini is a prime example.
 

0339327

Cancelled
Original poster
Jun 14, 2007
634
1,936
Ok, so please don't take this as an attack... but the tool is called ChatGPT.

I guess this makes me that guy...

Anyway, ChatGPT is a tool that I think is not the scariest. Manipulating images and videos will be (perhaps already is) more destructive to society, as we humans value what we see above almost anything else.
Corrected and agreed.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,362
12,612
Different generation, different subject, same problem.

I'm Gen-X. I had a lot of teachers that gave failing grades on term papers when they discovered that Cliffs Notes had been used.

When Wikipedia first rolled around, teachers were up in arms.

AI doing your homework? What's the difference?

The next generation will always find the next thing to 'solve' their problems - those problems being how to get away with doing the least amount of work and putting in the least amount of effort.

This is nothing new. Cliffs Notes and Wikipedia did not end society. AI won't either.

This.

There has been an endless parade of innovations that society tries to abuse before figuring out how to use them as proper tools.

Spell check, digital calculators, calculators that can do algebra and calculus and then show you the steps, Google, just the internet in general, freaking auto-tune and musical align-to-grid tools.

People had older siblings to help them cheat long before ChatGPT.

Some of the potential for deep fakes and the like is a bit concerning, given their ability to create a plausible alternate reality. But truth is, when I first saw the old B/W Godzilla movies as a kid (long after initial release, mind you) they scared the bejeezus out of me, and I look at them now and they're ludicrously animated. It took time for me to refine my perception. Before Godzilla, Stalin was photoshopping enemies out of photographs the analog way.

ELIZA had people just as freaked out at the time as ChatGPT does now.

Imagine how painters felt about the first photographs.

Most of what this generative stuff is doing right now is novelty with a little bit of value. We'll eventually figure out how to use it as a tool and less as a toy. In the meantime, cheaters gonna cheat.
 
Last edited: