I don't think it's a threat to society. It's disruptive. A lot of things are going to change in the next ten years as a result of ChatGPT and Gemini, which still aren't full-blown "artificial general intelligence"; AGI will be even more disruptive when it arrives. The economy is going to be forced to reorganize now that artificial intelligence has crossed a boundary: it can take over fairly complex tasks that people used to have to do manually.

I have long-term concerns about students growing up without ever writing a paper of their own from scratch. People like to say that the introduction of calculators didn't destroy math education, so why should the introduction of ChatGPT destroy writing education? Have those people taken a look at how dismal our math scores have gotten in the past few decades? In fifty years, how good will the average adult be at formulating a cogent argument and brainstorming without a connection to the internet? With the rapid pace of change, these concerns will probably seem quaint at that time, but there's still the potential for something fundamental to be lost. Since societies are always changing and the past is a foreign country, I still don't think AI is any more of a threat to society than time itself.
 
  • Like
Reactions: gusmula
My concern is that AI is removing the thinking part of research. When you Google a question, answering it often requires visiting several websites and cross-referencing the answers. You also list those sites in a bibliography, which further obligates you to seek out well-respected websites.

In comparison, AI just spits out an answer and the student, or the employee, just rewrites it. I remember being in school when we switched from libraries to the internet. The instructor was OK with the internet so long as there was a variety of respected sources.
ChatGPT doesn't do this, but both Microsoft Copilot / Bing Chat and Perplexity.ai (and presumably any other AI that uses RAG, Retrieval-Augmented Generation) will list sources for whatever they say, so you can check whether their information comes from a legitimate source.
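
For anyone curious what that looks like under the hood, here's a minimal sketch of the RAG idea: retrieve relevant documents first, have the model answer only from them, and list them as sources. This is purely illustrative - the toy keyword retriever, the document fields, and llm_complete() are hypothetical stand-ins, not any vendor's actual pipeline.

```python
# Minimal sketch of the RAG idea: retrieve relevant documents first,
# then have the model answer *from* them and list them as sources.
# llm_complete() is a hypothetical stand-in for a real chat/completion API.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # plug in a real model call here

def retrieve(question: str, docs: list[dict], k: int = 3) -> list[dict]:
    # Toy retriever: rank documents by keyword overlap with the question.
    # Real systems use vector embeddings, but the principle is the same.
    words = set(question.lower().split())
    return sorted(docs,
                  key=lambda d: len(words & set(d["text"].lower().split())),
                  reverse=True)[:k]

def answer_with_sources(question: str, docs: list[dict]) -> str:
    top = retrieve(question, docs)
    context = "\n".join(f"[{i+1}] {d['text']}" for i, d in enumerate(top))
    prompt = (f"Answer using ONLY the sources below, citing them as [n].\n"
              f"Sources:\n{context}\n\nQuestion: {question}")
    cites = "\n".join(f"[{i+1}] {d['url']}" for i, d in enumerate(top))
    return llm_complete(prompt) + "\n\nSources:\n" + cites
```

Real products swap the keyword overlap for vector embeddings and add a lot of plumbing, but the checkable sources come from exactly this: the model is told to answer from retrieved text rather than from memory.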

So if you're concerned that your child might just start blindly believing whatever a chatbot says, I would suggest either encouraging him to use AI that shows sources/references for information, OR encouraging him to spend a lot of time using chatbots: he'll learn on his own (very quickly, depending on how much he experiments with them) to not blindly believe an AI.

(In fact, I would say that everyone should try this; the more you experiment with AI, the more you see for yourself where the limits are. Whether you have high hopes for it or whether you're afraid of it, the truth is that AI is still very limited and we're very far off from the sci-fi utopia/dystopia people like to claim is coming.)
 
  • Like
Reactions: AlmightyKang
Buying the hype enough to hand over government services to half-baked applied statistics (“AI”) would be the end of what we consider a society today, definitionally.

That doesn’t mean the end of the world, but it certainly means less accountability and a ******** life for us normal people when your electricity is shut off because “the computer said so” incorrectly. Good luck when an eventual AI system decides, against fact, to sign off on your death certificate. Try getting that corrected when every other agency you’re supposed to turn to is AI-based and rejects your requests for help because their database already says you’re dead.

We already see the bureaucratic nightmare in the US where insurance companies are just “AI”-ing claims regardless of the fact that it doesn’t work.

Extrapolate that as you will…
 
  • Like
Reactions: gusmula
Buying the hype enough to hand over government services to half-baked applied statistics (“AI”) would be the end of what we consider a society today, definitionally.

That doesn’t mean the end of the world, but it certainly means less accountability and a ******** life for us normal people when your electricity is shut off because “the computer said so” incorrectly. Good luck when an eventual AI system decides, against fact, to sign off on your death certificate. Try getting that corrected when every other agency you’re supposed to turn to is AI-based and rejects your requests for help because their database already says you’re dead.

We already see the bureaucratic nightmare in the US where insurance companies are just “AI”-ing claims regardless of the fact that it doesn’t work.

Extrapolate that as you will…

Interestingly, this is one of the things that might actually kill the current ML craze. To make auditable decisions you need to record the entire state of the system, and the system needs to be deterministic. Neither is necessarily true of these models, and keeping all state would mean storing the entire state of the model plus the inputs at the time of execution.
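
To make the storage problem concrete, here's a rough sketch of the minimum you'd have to persist per decision just to have a hope of replaying it later. The field names are made up for illustration, not any real compliance framework:

```python
# Sketch of an audit record for a single ML decision, assuming you
# control the model. Every source of nondeterminism has to be pinned
# and every input archived, or the record can't be replayed.
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit_record(model_weights: bytes, model_version: str,
                 inputs: dict, decision: str, rng_seed: int) -> dict:
    return {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash of the exact weights used; the weights themselves still
        # have to be archived somewhere, or the hash is unreplayable.
        "weights_sha256": sha256(model_weights),
        "inputs": inputs,  # the full inputs, verbatim
        "inputs_sha256": sha256(
            json.dumps(inputs, sort_keys=True).encode()),
        "rng_seed": rng_seed,  # only meaningful if inference is
        "decision": decision,  # actually deterministic, which GPU
    }                          # floating-point math often isn't
```

For a large model that means archiving every weight snapshot ever deployed, and GPU inference isn't even guaranteed bit-exact across hardware, which is exactly the problem.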

We tried using various ML solutions for financial risk modelling, and no model passed our test suite for making a capable and informed decision about risk. One of its finest screw-ups was a terrorist money-laundering scenario it scored as a low-risk business deal. That shouldn't even have been possible after pumping hundreds of thousands of rule-verified training examples into it. Total disaster. It literally hallucinated the output.
 
Does AI have the capacity to be a threat?

It sure does; humans made it. It has the capability to be as dangerous as the humans that "train" it, and eventually it will have the capability to surpass us. Look at the mess Gemini is now because of the narrative and agenda "trained" into it by its creators.
 
  • Like
Reactions: gusmula and Matz
When ChatGPT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more uninterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays and doing reporting - not to mention the lack of factual verification - it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
The industrial revolution triggered upheaval in goods and service production and consumption, effectively a collapse of cottage industry and the livelihoods that depended on it. The harm was real. Millions of such practitioners were impoverished. Back then, millions was a really whole lot. Profit models shifted from margin to volume, and having done so, business goals shifted from sustainability to growth. That spurred and then dovetailed with the expansion of mechanized transportation. In short: more people, in more places, could have stuff, and industry was then obliged to push for returns to investors. And a poverty trap was sprung, as people were locked into mortgaging their own welfare to acquire any goods or services for which they couldn't simply pay or barter. We see this today with cars, homes, communications gadgets and medical care, though it could be argued that never in human history have poor people been entitled to good stuff. And we know how things end for the "From each according to his ability, to each according to his needs" crowd.

AI might be today's parallel. We are all aware that learning things, planning things, designing things and improving things - the very elements of industrialization - are endeavors for intelligent, talented, motivated people. Such people have traditionally represented expendable assets, which industry has traditionally expected to procure and capitalize to the extent of profitability. Much as skilled woodworking, knitting and smithing were decimated as livelihoods, so will be routine information work. There isn't a manager on Earth who wouldn't rather have copy written, data sets queried, trends spotted and reports generated by a compliant AI, rather than a whiney little ***** who's late all the time and distracted with their petty, personal ******** all day. Had I remained dedicated to photography and graphic design, I'd have starved to death (except that I also have horses, so, you know...)

That said, as it's pimped out now, AI is either not genuine intelligence, or has been molded for sale to the extent that it doesn't matter if it is. Today's GPTs and GANs are merely saleable imitations of smart people, except they can't cross the street to get away from you, and won't be overtly mean to you for bothering them with your aforementioned petty personal ********. The machines that produce AI work product today are entirely deterministic - large and complex, but still merely statistical models regurgitating existing (stolen) content, resulting from code that must resolve to binary outputs, because it still executes on transistors. That said, bold-faced lies, misrepresentation and passive-aggressive requests for your feedback could be characteristic of genuine intelligence with no other method of retribution, like, "Yeah I divided by zero, and it equaled yo momma."

In a nutshell, overly ambitious early adoption of today's AI could well result in job losses. Ordinary folks in information jobs, who know enough of this and intuit enough of that to have kept a job through the downturns, might find their value as assets zeroed out as managers actively shop for the AI to replace them. Customer relationship management departments emptied as those tasks are more consistently processed by machine-learning chatbots? Same for business analysts? Websites devolving into GPT content mills focused on trick-clicks to defraud advertisers? Advertising firms using machine learning instead of human sense to rationalize the expense? What are the odds Ford and Boeing used a bunch of machine learning instead of human common sense?

The sadder prospect is that of stagnation, where the potentially smart people don't bother, and the stupid people use AI to fake it. I know people in my field right now clamoring for Microsoft Copilot to bolster authoring tasks. Maybe it's working for them. Would I know if it is, or if it's not? Does it matter? Maybe I can use Copilot to check their work while I watch YouTube. The movie Idiocracy IS a documentary, after all.

The REAL AI TEST is yet to come. When quantum computing scales up to run AIs, we might confront genuine AI unconstrained by transistors. Its senses, assimilation and synthesis could have a non-deterministic character, like people. And we better hope it likes us, because what might constitute petty personal ******** for a quantum core?
 
  • Like
Reactions: Matz and NT1440
[Attached screenshot: Copilot's summary of Apple's AI usage]
 
That's what Copilot says. I asked about Microsoft and Android too; just put Android or Microsoft in place of the word APPLE.
Also, AI has been in parts of Apple's products since 2012, or so I have read.
 
The threat is that AI will replace the human workforce in pretty much every field. To me it’s the equivalent of replacing animals with machines in the early Industrial Revolution.

As in every revolution, those who don’t embrace these radical changes will be left behind, whether they like AI or not.
How are people being put out of work and unable to find employment in their field supposed to embrace these things?

Where are the new jobs? How does this help anyone except the few already rich enough to not really be impacted?
 
It's ironic that Post #2 in this thread, one of the most articulate, comprehensive yet concise efforts dealing with the question in a professional manner, has been taken by some others to come across as generated by artificial intelligence.

It's not better than all the other posts in every way, but it is a 'quality product' in the context of the question opening this thread.

Personally I don't know if it's A.I.-generated or not (though I agree it looks like it might be).
 
  • Like
Reactions: 0339327
How are people being put out of work and unable to find employment in their field supposed to embrace these things?

Where are the new jobs? How does this help anyone except the few already rich enough to not really be impacted?
It’s uncharted territory; I don’t know how humanity will adapt to these changes. Btw, when you said “in their field” I thought of how useless college degrees will be in the future.
 
It’s uncharted territory; I don’t know how humanity will adapt to these changes. Btw, when you said “in their field” I thought of how useless college degrees will be in the future.
I disagree. If all knowledge is available at your fingertips regardless of education, then a university degree (or rather, the time spent at university that it represents) might become even more valuable as an indicator of reliability, the ability to meet deadlines, teamwork, etc.

Degrees are rarely purely about the academics.
 
  • Like
Reactions: Matz and ipaqrat
I disagree. If all knowledge is available at your fingertips regardless of education, then a university degree (or rather, the time spent at university that it represents) might become even more valuable as an indicator of reliability, the ability to meet deadlines, teamwork, etc.

Degrees are rarely purely about the academics.

It’s like a new paradigm, and you can’t fix it with the old ways.
Businesses are shifting to AI because of reliability, or the lack thereof. Machines are much more reliable than humans: they don’t get sick, they work 24/7, they don’t complain, and they can specialize in countless fields, unlike humans. What good is a university or college degree if you can’t find a job? Why would anyone in the future go into debt only to find themselves jobless?

In addition, the level of interaction with AI on a daily basis will grow exponentially. Customer service workers will be replaced by AI; millions worldwide will be laid off. Graphic designers, 3D artists, software engineers, etc. will all be replaced by AI; only a few will keep their jobs.
 
I disagree. If all knowledge is available at your fingertips regardless of education, then a university degree (or rather, the time spent at university that it represents) might become even more valuable as an indicator of reliability, the ability to meet deadlines, teamwork, etc.

Degrees are rarely purely about the academics.

Um, I've got two of those things and all the knowledge at my fingertips, and I'm hopelessly unreliable, never meet a deadline and hate working with other people. Also, my ex-girlfriend, a high-ranking academic in the social sciences, was utterly incompetent at everything, including producing papers that made any sense.

Do not hold us in high regard.
 
The question of whether AI is a threat to society is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Potential for Job Displacement: One concern is that AI and automation technologies could lead to significant job displacement, particularly in industries where repetitive or routine tasks can be automated. This could result in economic disruption and societal challenges if adequate measures aren't taken to reskill and retrain the workforce.
  2. Ethical Concerns: AI systems can perpetuate or even exacerbate existing societal biases if not designed and implemented carefully. Issues such as algorithmic bias, fairness, accountability, and transparency need to be addressed to ensure that AI systems do not discriminate or cause harm to certain groups of people.
  3. Privacy and Surveillance: The increasing use of AI for surveillance purposes raises concerns about privacy infringement and the erosion of civil liberties. Technologies such as facial recognition and predictive analytics can be powerful tools for law enforcement and security agencies, but they also raise significant ethical and legal questions regarding individual rights and freedoms.
  4. Security Risks: AI systems are vulnerable to exploitation and misuse by malicious actors. This includes the potential for cyberattacks on AI systems themselves, as well as the use of AI to enhance the capabilities of cybercriminals and hostile state actors.
  5. Economic and Geopolitical Impact: The rise of AI has the potential to reshape global power dynamics and economic competitiveness. Countries and companies that lead in AI research and development stand to gain significant advantages, which could exacerbate existing inequalities and geopolitical tensions.
However, it's important to note that AI also offers numerous potential benefits to society, including improved healthcare outcomes, more efficient resource allocation, enhanced productivity, and new opportunities for innovation and creativity. Ultimately, whether AI is perceived as a threat to society depends on how it is developed, regulated, and deployed, as well as the broader social, political, and economic contexts in which it is used. It's essential for policymakers, technologists, ethicists, and society as a whole to engage in informed and responsible discussions about the opportunities and challenges associated with AI.
Yours truly,
Chat GtP
 
  • Like
Reactions: ipaqrat and dmr727
I suppose that's open to interpretation then.

There's not a lot of using technology for research in going down to the bookstore, picking up a Cliffs Notes and then plagiarizing the heck out of it. Even easier to just copy/paste from Wikipedia.

But, I suppose if you want to count driving to a store, using cash or a card to pay and then using a pen, typewriter or old computer to type things in word for word as 'research', then…sure.
Copy-pasting Cliffs Notes or Wikipedia, or even paraphrasing without referencing the source, is plagiarism and is not acceptable. Having AI generate your paper for you is just as much plagiarism and just as unacceptable.

Using technology to aid in research (e.g. searching for keywords in digital copies of articles and books) is standard and not in the slightest comparable to having AI write for you.
 
Copy-pasting Cliffs Notes or Wikipedia, or even paraphrasing without referencing the source, is plagiarism and is not acceptable. Having AI generate your paper for you is just as much plagiarism and just as unacceptable.

Using technology to aid in research (e.g. searching for keywords in digital copies of articles and books) is standard and not in the slightest comparable to having AI write for you.
Yes…that's my point. Having AI do the work for you is just an easier, lower-effort option than doing the plagiarizing yourself. So, technology (AI) has made this easier.

But it's still plagiarism - which is the same thing as buying a term paper from a mail-order source, copying Cliffs Notes directly, or copy/pasting Wikipedia.

What will the next generation come up with to make plagiarism for themselves easier? We'll see when my grandkids show up.
 
  • Like
Reactions: ipaqrat
Yes…that's my point. Having AI do the work for you is just an easier, lower-effort option than doing the plagiarizing yourself. So, technology (AI) has made this easier.

But it's still plagiarism - which is the same thing as buying a term paper from a mail-order source, copying Cliffs Notes directly, or copy/pasting Wikipedia.

What will the next generation come up with to make plagiarism for themselves easier? We'll see when my grandkids show up.

We have fairly good systems that can detect AI-written content these days; ironically, they work by comparing submissions against typical AI responses. So it's of no benefit to students to do this.
 
  • Love
Reactions: eyoungren
Yours truly,
Chat GtP
I'm taking dmr727's 'like' as confirmation this was ChatGPT (and I think posting it was justified both as a joke and as something that informed the discussion).

What's interesting is that it was immediately called out by so many people because at least some of ChatGPT's output has such a distinctive writing style (I think it's the obsessive fence-sitting - possibly something deliberately imposed by the developers rather than any emergent property of the LLM). If you've seen the recent reports on the paper with Midjourney-generated diagrams (let's just say "Rodents (with parts) of Unusual Size") - those images had the characteristic "melty" look of ML-generated art (plus labels written in gibberish) and weren't going to fool anybody for long.

Doubtless, though, LLMs and ML art will get more convincing - but we're currently in a honeymoon period where it won't fool all of the people all of the time, and we should probably use that period as an opportunity to learn some important lessons about not blindly trusting everything we see or read, and the need to cross-check facts and check references*. People have written articles full of plausible nonsense and half-truths before - often far more convincingly than current AI - so it's not a new lesson, but maybe a few prominent AI hoaxes will make people more careful.

Likewise with "deepfakes" - the idea that "the camera never lies" has been bunk from the start - the camera starts telling whoppers as soon as you decide where to point it and when to press the trigger - let alone the tricks possible in an old-fashioned darkroom. Heck, just leave the photo alone and add a caption that changes the context. If you want a video of <famous person> slapping a baby, then just a bit of clever cutting will convince anybody who wants to believe. There's an art to dubbing foreign-language movies by choosing words that seem to match the lip movements. The camera always lies - the best scenario is that it's just little white lies.

What modern technology brings is the ability to produce fake text and images in industrial quantities for little or no money. It's really just a call to arms to start asking the questions that we should have been asking for decades. The most pressing problem is likely to be various forms of "spam" (from junk email and postings to journals being swamped with AI-generated fake articles).

Re: Students Cheating (elsewhere in this thread)

If a student uses an LLM to write one assignment then they might get away with it. If a student uses an LLM to write a whole semester's assignments and doesn't get called out then... well, let's be diplomatic about this: their teachers aren't being given the time or resources to do their jobs properly (i.e. actually critically reading at least a sample of the student work and asking follow-up questions of the students). There's a good reason why, for a higher degree, you usually have to defend your written thesis in a face-to-face interview with the examiners. Maybe don't set more assignments than you can properly assess? Automatic marking of student work (including written answers) has been around since long before AI became the topic of the day.

If ChatGPT can paste together a convincing essay on a subject without actually understanding anything about it, so can a student who just re-phrases a few chunks from textbooks. Heck, the reams of text I've "edited" without a word of meaning entering my head...

The 21st century skill that today's students will actually need is to be able to compose an LLM prompt, then fact-check, correct and improve the result. There won't be jobs for people grinding out boilerplate text. Thing is, math education is still battling against the conceit that, after 40+ years of personal computers and near-disposable calculators, there are somehow still jobs for clerks who are good at doing long division sums. LLMs are starting to bring that truth home to other subjects.

(* don't just check for references - LLMs will make those up too)
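
That fact-checking step can be partly mechanized, at least for citations. Here's a rough sketch - llm_complete() is a hypothetical stand-in for a real model call - that extracts DOIs from an answer and checks whether each one actually resolves:

```python
# "Prompt, then verify": pull DOIs out of an LLM answer and check that
# each one resolves at doi.org. A resolving DOI proves the paper exists,
# not that it says what the model claims - you still have to read it.
import re
import urllib.request

def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # plug in a real model call here

def doi_resolves(doi: str) -> bool:
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status < 400
    except Exception:  # 404s, timeouts, made-up DOIs
        return False

def answer_and_check(question: str) -> tuple[str, dict]:
    answer = llm_complete(question + "\nCite sources with DOIs.")
    dois = re.findall(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+", answer)
    return answer, {doi: doi_resolves(doi) for doi in dois}
```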
 
middle-of-the-road response
That's often a good thing. In the U.S. today, and often in content from abroad, much of what's stated has a partisan slant. How people post often reflects how they'd vote; their message is tailored to serve a one-sided agenda via persuasion.

It's sad that a middle-of-the-road post - neutral in being non-biased, and written to holistically inform rather than persuade - is presumed to've obviously not come from a human being.
 
  • Like
Reactions: dmr727
What's interesting is that it was immediately called out by so many people because at least some of ChatGPT's output has such a distinctive writing style (I think it's the obsessive fence-sitting - possibly something deliberately imposed by the developers rather than any emergent property of the LLM).

The question of how we might recognize AI-produced output is complex and multifaceted, and opinions on this topic vary widely. Here are some considerations:
  1. Style: The overall style of the response often follows a simple template. There's an opening that rephrases or repeats the question, along with a generic qualifier that lays out a scope. Next is a list, often numbered and with the item-topic in bold, and a sentence or two in each list item. After the list is the "However" or "Notwithstanding the foregoing" paragraph that presents counterpoints to the list items in a more prose-ish than list-ish style.
  2. Format: The list items have a recognizable format: number, topic, prose. Yes, this is a common format, but the topic is usually in bold unless the prompt-maker asked for a different format.
  3. Scope: The list topics are high-level, and the list as a whole aims to be comprehensive. This can sometimes backfire if only a few items cover the entire scope. Most telling is when there's a list with a single item.
  4. Style Again: No real AI would repeat a topic. Unless it's malfunctioning. They also like to write in complete sentences. Not sentence fragments. Unless requested. In other words, one can sometimes tell not only by what's said, and how it's said, but by what's NOT said. And the relentless fence-sitting.
However, it's important to note that AI outputs also change over time, or if the prompter is willing to put in more time to craft a more detailed prompt, or is simply willing to edit the AI's output to be less like the stereotypical AI output. It's essential for readers to remain aware of the evolving landscape, and not simply pin their AI-detector on ChatGPT's "inordinate fondness for lists".

Other than the intentional quoting of the opening and closing paragraphs (with edits), and the deliberate mirroring of the list format, this entire post was written by a human, despite that human having a Unix command as a moniker, and a rather artificial-looking avatar.

Personally, I rather enjoyed the fact that I could insert practically anything into the opening paragraph after "The question of" and have it flow nicely into "is complex and multifaceted" as a reusable template opening. It accommodates anything from "Are mathematics god?" to "Do you like turtles?".
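
For fun, the heuristics above are easy enough to mechanize. A toy scorer might look like the following - the phrase list and weights are entirely made up, and real AI-text detection is far less reliable than this makes it look:

```python
# Toy "stereotypical-LLM-style" scorer based on the tells listed above.
import re

TELLS = [
    r"complex and multifaceted",
    r"it'?s important to note",
    r"opinions on this topic vary",
    r"here are some considerations",
    r"\bhowever\b",
]

def style_score(text: str) -> float:
    lower = text.lower()
    hits = sum(bool(re.search(p, lower)) for p in TELLS)
    # Numbered lists with a bold topic per item are another tell.
    numbered_bold = len(re.findall(r"^\s*\d+\.\s+\*\*", text, re.MULTILINE))
    return hits + 0.5 * numbered_bold
```

A score of three or more on a short post smells like unedited chatbot output; a prompter who puts in any effort at all will steer the model away from every one of these tells.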
 
When ChatGPT came out, I was intrigued and tried it out a bit - very cool stuff, but as more and more companies add AI to their software, I am becoming more and more uninterested and, quite frankly, concerned.

Will my children learn comprehension and problem solving if they are asking AI for homework help? I recently found my son asking Siri for answers to homework. While I'm all for proper research, I think that crossed the line.

Reading more and more articles about AI writing essays and doing reporting - not to mention the lack of factual verification - it just seems like we are being dumbed down to doing little more than entertainment consumption.

What are your thoughts?
I don't like it. Back when I was in school we had computers (Apple II, Mac LC III), but I had to learn how to do research, problem-solve and so forth. We had no Siri. One thing I have noticed is that people these days do not read like they used to and are very, very ignorant in many subjects outside technology. Also, many of them did not grow up with a good work ethic.
 
Perhaps I’m just being a bit paranoid, but this whole response reads like it was AI generated
The irony here is perfect. To answer your question: yes, it is.

I asked ChatGPT

“is ai a threat to society”

and it responded

“AI has the potential to bring both benefits and challenges to society. It can improve efficiency, solve complex problems, and enhance various industries. However, ethical considerations, job displacement, and misuse of AI raise concerns. Responsible development and implementation are crucial to harness the positive aspects while addressing potential risks.”

so I asked it

“Can you give me the big 4 reason?”

and it produced this

“Certainly. The major concerns regarding AI's impact on society often revolve around:

1. **Job Displacement:** The automation of tasks could lead to job losses in certain industries, requiring adaptation and retraining for the workforce.

2. **Ethical Concerns:** Issues like bias in AI algorithms, invasion of privacy, and the potential for misuse raise ethical questions that need careful consideration.

3. **Security Risks:** As AI becomes more sophisticated, there are concerns about its potential use in cyberattacks, misinformation, or even autonomous weapons, posing security threats.

4. **Inequality:** The unequal access to AI technologies and benefits may widen existing social and economic disparities, creating a digital divide within societies.

Addressing these challenges requires thoughtful policies, ethical guidelines, and collaboration between stakeholders to ensure a positive impact on society.”
 