
Rafterman

Contributor
Apr 23, 2010
7,267
8,809
I've read a few reviews, and it seems ChatGPT is mostly a novelty: it's wrong a lot and can even be offensive sometimes. It's just a toy right now.
 

Shanghaichica

macrumors G5
Apr 8, 2013
14,725
13,245
UK
I've read a few reviews, and it seems ChatGPT is mostly a novelty: it's wrong a lot and can even be offensive sometimes. It's just a toy right now.
Microsoft seem to be betting big on it. I think AI will be big; however, I don't think it threatens the iPad in any way.
 

AndyMacAndMic

macrumors 65816
May 25, 2017
1,112
1,676
Western Europe
I've read a few reviews, and it seems ChatGPT is mostly a novelty: it's wrong a lot and can even be offensive sometimes. It's just a toy right now.
Did you interact with it? If not, you should give it a go. It is significantly better than current 'assistants' like Siri. Is it correct all the time? No. Are people correct all the time?

In my opinion it is much more than a toy. It is a very workable tool. People use it to write essays, pieces of code, musical compositions, etc. I tested it and asked it to write some pieces of code in different computer languages, which it did admirably. After I made some remarks, it rewrote the code to my wishes. Try that with Siri.
Is it perfect? No. But it is definitely a next step in AI.

To stay on topic: Does it make the iPad useless? Of course not. When used with care, ChatGPT can be a very useful source of information (more like a search engine). Google should be worried (and they are). The iPad (or any other tablet, phone, or computer), not so much.
 
  • Like
Reactions: Isamilis

sparksd

macrumors G3
Jun 7, 2015
9,992
34,268
Seattle WA
Did you interact with it? If not, you should give it a go. It is significantly better than current 'assistants' like Siri. Is it correct all the time? No. Are people correct all the time?

In my opinion it is much more than a toy. It is a very workable tool. People use it to write essays, pieces of code, musical compositions, etc. I tested it and asked it to write some pieces of code in different computer languages, which it did admirably. After I made some remarks, it rewrote the code to my wishes. Try that with Siri.
Is it perfect? No. But it is definitely a next step in AI.

To stay on topic: Does it make the iPad useless? Of course not. When used with care, ChatGPT can be a very useful source of information (more like a search engine). Google should be worried (and they are). The iPad (or any other tablet, phone, or computer), not so much.
What bothered me about it was that I entered a simple equation and it gave the wrong answer. I repeated with the exact same input and it gave the right answer.
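For what it's worth, that flip-flopping is exactly what you'd expect from sampling-based text generation: the model draws each next token from a probability distribution rather than computing the answer, so the very same prompt can come back differently. A toy sketch of the idea (the distribution below is invented purely for illustration):

```python
import random

# Invented next-token distribution for a simple arithmetic prompt.
# A real model derives these probabilities from its training data.
next_token_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

def sample_answer(rng: random.Random) -> str:
    # Draw one token according to the weights, as a sampler would.
    tokens = list(next_token_probs)
    weights = [next_token_probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
answers = [sample_answer(rng) for _ in range(10)]
print(answers)  # usually "4", but the wrong tokens do get drawn sometimes
```

Taking the single most likely token instead of sampling would make the output deterministic, which is why the same question can behave differently across sessions or settings.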
 
  • Like
Reactions: AndyMacAndMic

CausticPuppy

macrumors 68000
May 1, 2012
1,536
70
Did you interact with it? If not, you should give it a go. It is significantly better than current 'assistants' like Siri. Is it correct all the time? No. Are people correct all the time?

In my opinion it is much more than a toy. It is a very workable tool. People use it to write essays, pieces of code, musical compositions, etc. I tested it and asked it to write some pieces of code in different computer languages, which it did admirably. After I made some remarks, it rewrote the code to my wishes. Try that with Siri.
Is it perfect? No. But it is definitely a next step in AI.

To stay on topic: Does it make the iPad useless? Of course not. When used with care, ChatGPT can be a very useful source of information (more like a search engine). Google should be worried (and they are). The iPad (or any other tablet, phone, or computer), not so much.
We're already using ChatGPT at work to come up with complex Splunk queries (yes, it knows Splunk), optimize SQL, write shell scripts, etc. You have to validate and test whatever it comes up with, but it does save a lot of time.
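That validate-and-test step doesn't have to be heavyweight; a few assertions around the generated snippet catch most nonsense before it ships. A sketch, where `parse_status` stands in for a hypothetical ChatGPT-written helper and the log format is made up:

```python
# Hypothetical ChatGPT-generated helper: pull the HTTP status code
# out of an access-log line. Don't trust it until it passes checks.
def parse_status(line: str) -> int:
    # e.g. '10.0.0.1 - - "GET /index.html" 200 512'
    parts = line.split()
    return int(parts[-2])

# Quick smoke tests written by the human reviewer:
assert parse_status('10.0.0.1 - - "GET /index.html" 200 512') == 200
assert parse_status('10.0.0.1 - - "POST /login" 404 99') == 404
print("generated code passed the smoke tests")
```

The point is the pattern, not the parser: the model drafts, the human pins the expected behavior down with tests before the code goes anywhere real.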
 
  • Like
Reactions: AndyMacAndMic

Isamilis

macrumors 68020
Apr 3, 2012
2,191
1,074
We're already using ChatGPT at work to come up with complex Splunk queries (yes, it knows Splunk), optimize SQL, write shell scripts, etc. You have to validate and test whatever it comes up with, but it does save a lot of time.
Also, to some extent, I can get better answers on how-to/technical stuff than by asking in the forum. So technical forums like MR, and many others on Reddit, will probably be replaced soon.
 

AndyMacAndMic

macrumors 65816
May 25, 2017
1,112
1,676
Western Europe
Also, to some extent, I can get better answers on how-to/technical stuff than by asking in the forum. So technical forums like MR, and many others on Reddit, will probably be replaced soon.
MR (like any forum) is not only about technical aspects. It is also a place for people to discuss and exchange opinions. Unless people become extinct, I don't see a computer AI replacing that.
 

jimmirehman

macrumors 6502a
Sep 14, 2012
519
384
I have witnessed that people at my university, and a lot of people on social media, have adopted ChatGPT into their workflow. These "co-pilots" are a crucial part of working now, living permanently off to the side. You are constantly sending it ideas and editing the draft. For that to happen you need a keyboard, which makes the iPad without a keyboard useless for people working in 2023. Of course, people say you can use the iPad with a keyboard, but at that point you might as well use a MacBook instead.

What are your thoughts on iPad's place in a world of AI butlers, like ChatGPT?
What did you use to post this thread? Go home ChatGPT, you're drunk.
 

CausticPuppy

macrumors 68000
May 1, 2012
1,536
70
Also, to some extent, I can get better answers on how-to/technical stuff than by asking in the forum. So technical forums like MR, and many others on Reddit, will probably be replaced soon.
AI will have gone to the next level when you ask it a technical question and it responds with “have you tried doing a search?”
 
  • Haha
Reactions: BigMcGuire

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,616
Los Angeles, CA
ChatGPT doesn't affect my use cases for owning and operating an iPad. I'd imagine this to be the case for most iPad users. Incidentally, I don't find myself needing to embrace our new AI overlords JUST YET.
 

Tagbert

macrumors 603
Jun 22, 2011
6,256
7,281
Seattle
Some people are missing the point here. The OP is not saying that ChatGPT makes the iPad completely obsolete as a device with no purpose anymore; they are pointing to one specific use case where they think the iPad might be obsolete.

However, they weren't exactly clear about what they meant.
If that is what the OP meant, the absolutist tone of the original post did not carry that distinction, so a lot of the responses here are reacting to that unqualified message.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
I think we need to step back and look at this from the proper perspective. It seems that education may be coming into a crisis very quickly thanks to ChatGPT. Students already feel like they can't keep up with those who utilize AI. Maybe Apple needs to start developing better tools to help these students and their AI-enhanced workflow. Maybe these students just need to move on to different tool providers. It's hitting education now, but soon it will be upon all of us in the workforce, with the inevitability of an approaching zombie horde.
Just today I received new instructions from the university for exams to mitigate ChatGPT: oral exams instead of written exams, or written exams without internet access and with only physical textbooks as an (optional) aid. We are even allowed to change exam style mid-term, which is unheard of.

ChatGPT has just set exam methodology back at least 30 years, as research-based exams are gone for now.

Exams, however, are easy to fix. They will be more expensive, but we pass the bill on to society or the students.

Learning, which is much more important, is a much worse problem, because we have no way to trace where ChatGPT got its information from.

Who is happy with this progression? Students?
 

BenGoren

macrumors 6502a
Jun 10, 2021
502
1,427
Just today I received new instructions from the university for exams to mitigate ChatGPT: oral exams instead of written exams, or written exams without internet access and with only physical textbooks as an (optional) aid. We are even allowed to change exam style mid-term, which is unheard of.

ChatGPT has just set exam methodology back at least 30 years, as research-based exams are gone for now.

Exams, however, are easy to fix. They will be more expensive, but we pass the bill on to society or the students.

Learning, which is much more important, is a much worse problem, because we have no way to trace where ChatGPT got its information from.

Who is happy with this progression? Students?

Honestly?

You’re just barely scratching the very tippy-top of the iceberg.

ChatGPT can already pass the bar exam:


Granted, it’s not at the top of the class — but that’s first-and-foremost irrelevant and secondly just a matter of not much time at all.

Let that sink in for a moment.

We have robots that can pass the bar exam.

Let’s try to follow a bit down this rabbit hole.

On the one hand, this could be seen as a wonderful thing. Consider, for example, the overwhelming burden public defenders face; they could provide a great deal more help to their clients by leaning heavily on AI.

But this just accelerates things. Prosecutors will use AI to strengthen their own cases, and we now have an AI-on-AI arms race. And the judges and juries will soon be overwhelmed by this barrage of machine-powered legal arguments and turn to their own AI to distill everything down to easily digestible sound bites so they won’t miss their afternoon tee time. Not long after, as a cost-cutting measure, legislatures defund the entire court system and the police just file their reports directly to an AI that tells the police which cell to shove the prisoner into — except, of course, in the name of blind justice, by this point we’ve already replaced the police with robots.

Will it play out exactly like that? Of course not.

But tell me with a straight face that, a year from now, no lawyers will be leaning heavily on AI. And tell me that, two years from now, anybody is going to be interested in legal advice from a non-AI-assisted lawyer. And that there won’t be “ask an AI lawyer” smartphone apps.

Now, expand this to every other professional academic position. Why should a company hire an architect if an AI can produce up-to-code plans? Already, you want a computer, not a radiologist, to check your X-rays for cancer. Programmers are already using AI to automate large chunks of their own jobs … what makes you think the rest of the job can’t be automated?

The world is going to be very, very, very different much sooner than any of us fully appreciate yet.

Even if large language models aren’t “really” thinking, even if they’re far from perfect, even if all the other objections apply.

I think the biggest factor people are missing … in hindsight, we can see this same radical change with machines doing “manual” labor. Today’s world is far, far different from that before the Industrial Revolution. But it took a loooong time for the early steam engines to be commercialized, and for their use to be widespread, and for each of the incremental improvements to become widely available.

In stark contrast, basically anybody already has access to ChatGPT, and all those same people in a month or three will have access to the next version; it’s like going from farmers using mules to plow one year’s fields to diesel tractors the next year to GPS-guided autopilot combines the year after — and not just one farmer, but all farmers everywhere across the globe.

We just don’t have any precedent for this scale of change. The mechanization of industry took centuries. Half a century ago, there was no such thing as desktop publishing. A quarter century ago, no smartphone. There are people today who were secretaries on track to retire about now with a gold watch; now, the mere concept of secretarial work is borderline incoherent. A decade ago, a certain company I know had an entire floor full of accountants doing accounting things; now, there’s just the top two from that time plus the front-desk receptionist — and I helped automate all of those jobs out of existence. Several years ago I was able to walk into an office supply store and walk out with a cheap CD that had PDFs that saved me from what not long before would have been at least a couple hours of billable time at a lawyer.

But all of that pales in comparison with what’s about to hit us now that we’ve hit the “turbo” button …

b&
 
  • Like
Reactions: Bob_DM and dogstar

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Honestly?

You’re just barely scratching the very tippy-top of the iceberg.

ChatGPT can already pass the bar exam:


Granted, it’s not at the top of the class — but that’s first-and-foremost irrelevant and secondly just a matter of not much time at all.

Let that sink in for a moment.

We have robots that can pass the bar exam.

Let’s try to follow a bit down this rabbit hole.

On the one hand, this could be seen as a wonderful thing. Consider, for example, the overwhelming burden public defenders face; they could provide a great deal more help to their clients by leaning heavily on AI.

But this just accelerates things. Prosecutors will use AI to strengthen their own cases, and we now have an AI-on-AI arms race. And the judges and juries will soon be overwhelmed by this barrage of machine-powered legal arguments and turn to their own AI to distill everything down to easily digestible sound bites so they won’t miss their afternoon tee time. Not long after, as a cost-cutting measure, legislatures defund the entire court system and the police just file their reports directly to an AI that tells the police which cell to shove the prisoner into — except, of course, in the name of blind justice, by this point we’ve already replaced the police with robots.

Will it play out exactly like that? Of course not.

But tell me with a straight face that, a year from now, no lawyers will be leaning heavily on AI. And tell me that, two years from now, anybody is going to be interested in legal advice from a non-AI-assisted lawyer. And that there won’t be “ask an AI lawyer” smartphone apps.

Now, expand this to every other professional academic position. Why should a company hire an architect if an AI can produce up-to-code plans? Already, you want a computer, not a radiologist, to check your X-rays for cancer. Programmers are already using AI to automate large chunks of their own jobs … what makes you think the rest of the job can’t be automated?

The world is going to be very, very, very different much sooner than any of us fully appreciate yet.

Even if large language models aren’t “really” thinking, even if they’re far from perfect, even if all the other objections apply.

I think the biggest factor people are missing … in hindsight, we can see this same radical change with machines doing “manual” labor. Today’s world is far, far different from that before the Industrial Revolution. But it took a loooong time for the early steam engines to be commercialized, and for their use to be widespread, and for each of the incremental improvements to become widely available.

In stark contrast, basically anybody already has access to ChatGPT, and all those same people in a month or three will have access to the next version; it’s like going from farmers using mules to plow one year’s fields to diesel tractors the next year to GPS-guided autopilot combines the year after — and not just one farmer, but all farmers everywhere across the globe.

We just don’t have any precedent for this scale of change. The mechanization of industry took centuries. Half a century ago, there was no such thing as desktop publishing. A quarter century ago, no smartphone. There are people today who were secretaries on track to retire about now with a gold watch; now, the mere concept of secretarial work is borderline incoherent. A decade ago, a certain company I know had an entire floor full of accountants doing accounting things; now, there’s just the top two from that time plus the front-desk receptionist — and I helped automate all of those jobs out of existence. Several years ago I was able to walk into an office supply store and walk out with a cheap CD that had PDFs that saved me from what not long before would have been at least a couple hours of billable time at a lawyer.

But all of that pales in comparison with what’s about to hit us now that we’ve hit the “turbo” button …

b&
I am not against AI, or the use of AI, nor robotics and automation. There is a shortage of people in many, many occupations, and the world is increasingly complex, so we need help from AI. However, the use case you give for cancer diagnostics (which is what I teach, by the way) involves validated AI helpers benchmarked against trained pathologists (I am in the wet-biomarker field rather than imaging). In that case, pathologists and AI can learn from each other to improve cancer diagnostics. Great. Now imagine that you remove the training of the pathologists. Are you a pathologist if you do not have the training to judge whether the AI was correct? One analysis or opinion is seldom sufficient for large decisions.

I got my first job 40 years ago by automating my own mindless job so I could do other, more valuable things for the company. The automation was a 10-line or so .BAT script.
 

BenGoren

macrumors 6502a
Jun 10, 2021
502
1,427
I am not against AI, or the use of AI, nor robotics and automation. There is a shortage of people in many, many occupations, and the world is increasingly complex, so we need help from AI. However, the use case you give for cancer diagnostics (which is what I teach, by the way) involves validated AI helpers benchmarked against trained pathologists (I am in the wet-biomarker field rather than imaging). In that case, pathologists and AI can learn from each other to improve cancer diagnostics. Great. Now imagine that you remove the training of the pathologists. Are you a pathologist if you do not have the training to judge whether the AI was correct? One analysis or opinion is seldom sufficient for large decisions.

I got my first job 40 years ago by automating my own mindless job so I could do other, more valuable things for the company. The automation was a 10-line or so .BAT script.

But that’s just it.

Sure, the AI was bootstrapped by feeding it data from human pathologists. But by now we not only know that there’s no magic sauce that the humans provide, we know that there is information in the data that the computers can suss out that humans are completely unaware of, almost certainly including features that won’t ever “make sense” to humans.

So what you do is you train the AI not on the information provided by pathologists, but on the actual raw data. You don’t just feed the tagged X-ray images to the AI; you feed it all the medical history that the hospital network has on every patient. In addition to the images, you feed it lab work, you feed it all diagnoses of every condition, you feed it blood pressure, you feed it prescription refill records … you feed it everything.

Of course, the imagery is going to provide most of the information for a diagnosis. But I guarantee you that there’s some cluster of data that’s not on anybody’s radar that also correlates with cancer, and that’s going to let the AI uncover cases that no radiologist would ever notice. And that, in turn, will correlate with some feature on a scan that the radiologist will again be unaware of, so the AI re-reviews all stored images for that feature … which leads to follow-up diagnostics, and so on.

You yourself point to this: “One analysis or opinion is seldom sufficient for large decisions.” And that’s the whole point of “big data”: it automates the process of having teams of experts compare notes.

If you want a simpler case, think of chess. I remember the days when it was taken for granted that no computer would ever beat a grandmaster at chess — and, if it ever happened, it would be a sure sign of human-level intelligence in a computer. We now know otherwise, of course. But today, the thought that you need humans to “validate” a chess computer’s strategy is laughable. The chess computers are having tournaments amongst themselves that no human will ever be able to appreciate.

With ChatGPT passing the bar exam, we’re now at the stage where chess computers were when they started winning individual games in tournaments. They still didn’t have a chance at that time of winning a tournament, but they were running with the pack. Not long after, Deep Blue beat Kasparov. I won’t be surprised if, this time next year — and certainly no more than five years from now — ChatGPT (or some other AI) is consistently cranking out legal opinions that the lawyers who grade the bar exam admit are superior to their own. Will the humans still need to “validate” those machine-generated opinions?

The same will be true of every other academic discipline, sooner rather than later.

So … yeah. Worrying about students having ChatGPT write their essays for them is much ado about nothing. The entire professional class is about to face competition from robots that will make the threat Henry Ford’s assembly line posed to manufacturing jobs seem like a giant nothingburger.

b&
 

ChrisA

macrumors G5
Jan 5, 2006
12,918
2,170
Redondo Beach, California
You are right. If you know how these big transformer models work, you know they can't possibly be "intelligent." But just maybe they can be very useful, if they could make my poor attempts at spelling and grammar a closer match to the spelling and grammar of random online text. (Disclosure: I used an online system to fix a half dozen errors in the preceding text.)

The problem is that the vast majority of people have no clue at all how something like GPT works. So, they assume it works like humans do, but maybe not quite as well.
 

BenGoren

macrumors 6502a
Jun 10, 2021
502
1,427
As to whether or not they’re “really” intelligent … Dijkstra made that irrelevance plain when he similarly questioned whether a submarine can swim.

And it overlooks the possibility that, perhaps, human writing closely parallels the process that GPT employs. After all, the simplified version of GPT is that it has “read” an awful lot of stuff and, in response to a query, it attempts to predict the most likely next word. How many times have you weighed, pondered, considered, struggled with your choice of the next word or turn of phrase to use? ChatGPT may well be much more human than you appreciate.
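That "predict the most likely next word" description can be made concrete with a toy bigram model: count which word follows which in a tiny corpus, then always emit the most frequent successor. (Real models use transformers over subword tokens and a vastly larger corpus; this is only the skeleton of the idea, with a made-up corpus.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count word -> next-word frequencies across the corpus.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy decoding: return the most frequent successor."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" twice, "mat" only once
```

Chain the prediction back into the input and you get text generation; swap the greedy pick for weighted sampling and you get the variety (and the occasional wrong answer) people see in practice.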

But, again, what matters here is the end result: we have every reason to believe that, within rounding, you don’t need to take off your shoes to count the number of years left before a computer can do every desk job that currently requires a college degree. And do the job better than any human. In any field. Whether or not, “under the hood,” the AI achieves its result the same way that humans today do is as irrelevant as the fact that there aren’t really a couple hundred flesh-and-blood, hay-eating horses galloping “under the hood” of your car.

The social upheaval that the Industrial Revolution caused will look like a raindrop in a puddle compared to the coming tsunami.

Whether that’s good or bad … I just have no clue. All I can see from here is that horizon, not far away, where machines are better at every white-collar job than the best human.

If I had to guess … initially, we all become conduits for the AI. At first, you use it every now and again to help you with a particularly tricky or tedious part of your job. Then the ratio between what you do yourself and what you delegate to the AI shrinks. Then management figures out that this is going on, and in a “cost-cutting” measure, fires 90% of the workforce, with the remaining 10% managing AIs. Then the remaining managers, suddenly overwhelmed with all this extra work, use AI to help them manage the AI … and, after one or two iterations of that, everybody is unemployed …

… but, before then, the economy is grinding to a halt with the mind-numbing unemployment. So … universal basic income? Or do we all pick strawberries and run electrical wiring in building construction until the AI figures out how to build robots to do that cheaper than humans currently do?

That’s what I mean: I just don’t see what happens when computers are better at thinking than humans.

Which they emphatically already are with respect to chess. And, considering how many law students, despite already being high-achieving scholars with years of education, flunk the bar exam on their first try, it’s safe to say that this is also the case with the law.

b&
 