I've read a few reviews, and it seems ChatGPT is mostly a novelty and is wrong a lot, and can even be offensive sometimes. It's just a toy right now.
Microsoft seem to be betting big on it. I think AI will be big; however, I don't think it threatens the iPad in any way.
Did you interact with it? If not, you should give it a go. It is significantly better than current 'assistants' like Siri. Is it correct all the time? No. Are people correct all the time?
My husband said he wrote a book using ChatGPT. I've not read the book, but it exists.
"Offensive", lol.I've read a few reviews and it seems chatgpt is mostly a novelty and is wrong a lot, and can even be offensive sometimes. Its just a toy right now.
"Offensive", lol.
What bothered me about it was that I entered a simple equation and it gave the wrong answer. I repeated with the exact same input and it gave the right answer.
In my opinion it is much more than a toy. It is a very workable tool. People use it to write essays, pieces of code, musical compositions etc. I tested it and asked it to write some pieces of code in different computer languages, which it did admirably. After I made some remarks it rewrote the code to my wishes. Try that with Siri.
Is it perfect? No. But it is definitely a next step in AI.
To stay on topic: Does it make the iPad useless? Of course not. When used with care ChatGPT can be a very useful source of information (more like a search engine). Google should be worried (and they are). The iPad (or any other tablet, phone or computer) not so much.
Yes, it has occasionally spit out some racist language.
It's trained on human-generated text, and humans tend to occasionally generate racist text, so there you go.
We're already using ChatGPT at work to come up with complex Splunk queries (yes, it knows Splunk), optimize SQL, write shell scripts, etc. You have to validate and test whatever it comes up with, but it does save a lot of time.
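To make the 'validate and test' part concrete, here is a minimal, entirely hypothetical sketch of checking a model-suggested SQL query against a tiny hand-built fixture before trusting it. The auth_events table, the sample rows, and the query itself are made up for illustration, not taken from our actual environment.

```python
import sqlite3

# Hypothetical example: a query suggested by ChatGPT that we want to sanity-check.
# The schema, data, and query are all invented for illustration.
SUGGESTED_QUERY = """
    SELECT user_id, COUNT(*) AS failed_logins
    FROM auth_events
    WHERE outcome = 'failure'
    GROUP BY user_id
    HAVING COUNT(*) >= 3
    ORDER BY failed_logins DESC;
"""

def build_fixture(conn: sqlite3.Connection) -> None:
    """Create a tiny in-memory table with known contents."""
    conn.execute("CREATE TABLE auth_events (user_id TEXT, outcome TEXT)")
    rows = [
        ("alice", "failure"), ("alice", "failure"), ("alice", "failure"),
        ("bob", "failure"), ("bob", "success"),
        ("carol", "success"),
    ]
    conn.executemany("INSERT INTO auth_events VALUES (?, ?)", rows)

def main() -> None:
    conn = sqlite3.connect(":memory:")
    build_fixture(conn)
    result = conn.execute(SUGGESTED_QUERY).fetchall()
    # We know the right answer for this fixture by hand:
    # only alice has three or more failures.
    expected = [("alice", 3)]
    assert result == expected, f"query gave {result}, expected {expected}"
    print("suggested query matches the hand-checked answer on the fixture")

if __name__ == "__main__":
    main()
```

Nothing fancy, but a check like this catches the cases where a suggested query looks plausible and is subtly wrong, before it goes anywhere important.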
Also, to some extent, I can get better answers on how-to/technical stuff from it than from asking in the forum. So technical forums like MR, and many others on Reddit, will probably be replaced soon.
MR (like any forum) is not only about technical matters. It is also a place for people to discuss opinions. Unless people become extinct, I don't see a computer AI replacing that.
I have witnessed that people in my university, and a lot of people on social media, have adopted ChatGPT into their workflow. These "co-pilots" are a crucial part of working, and they live permanently on the side. You are constantly sending it ideas and editing the draft. For that to happen, you need a keyboard, which makes the iPad without a keyboard useless for people working in 2023. Of course, people say you can use the iPad with a keyboard, but at that point you might as well use the MacBook instead.
What did you use to post this thread? Go home ChatGPT, you're drunk.
What are your thoughts on iPad's place in a world of AI butlers, like ChatGPT?
AI will have gone to the next level when you ask it a technical question and it responds with “have you tried doing a search?”
I'm wondering how long until they short-circuit it with "have you tried turning it off and back on...?"
If that is what the OP meant, the absolutist tone of the original posting did not carry that distinction, so a lot of the responses here are arguing against that direct message without qualifications.
Some people are missing the point here. The OP is not saying that ChatGPT makes the iPad completely obsolete as a device and that it doesn't have any purpose anymore; they are pointing to one specific use case where they think the iPad might be obsolete.
However, they weren't exactly elaborate about what exactly they meant.
I think we need to step back and look at this from the proper perspective. It seems that education may be coming into a crisis very quickly thanks to ChatGPT. Students already feel like they can't keep up with those who utilize AI. Maybe Apple needs to start developing better tools to help these students and their AI-enhanced workflow. Maybe these students just need to move on to different tool providers. It's hitting education now, but soon it will be upon all of us in the workforce, with the inevitability of a zombie horde approaching.
Just today I received new instructions from Uni for exams to mitigate ChatGPT: oral exams instead of written exams, and written exams without internet and with only physical textbooks as an aid (optional). We are even allowed to change the exam style mid-term, which is unheard of.
ChatGPT just bombed exam methodology back at least 30 years, as research-based exams are gone for now.
Exams, however, are easy to solve. Exams will be more expensive, but we pass the bill on to society or the students.
Learning, which is much more important, is a much worse problem, because we have no way to trace back where ChatGPT got the information from.
Who is happy with this progression? Students?
Honestly?
You’re just barely scratching the very tippy-top of the iceberg.
ChatGPT can already pass the bar exam:
Granted, it’s not at the top of the class — but that’s first-and-foremost irrelevant and secondly just a matter of not much time at all.
Let that sink in for a moment.
We have robots that can pass the bar exam.
Let’s try to follow a bit down this rabbit hole.
On the one hand, this could be seen as a wonderful thing. Consider, for example, the overwhelming burden public defenders face; they could provide a great deal more help to their clients by leaning heavily on AI.
But this just accelerates things. Prosecutors will use AI to strengthen their own cases, and we now have an AI-on-AI arms race. And the judges and juries will soon be overwhelmed by this barrage of machine-powered legal arguments and turn to their own AI to distill everything down to easily-digestible sound bites so they won’t miss their afternoon tee-time. Not long after, as a cost-cutting measure, legislatures defund the entire court system and the police just file their reports directly to an AI that tells the police which cell to shove the prisoner into — except, of course, in the name of blind justice, by this point we’ve already replaced the police with robots.
Will it play out exactly like that? Of course not.
But tell me with a straight face that, a year from now, no lawyers will be leaning heavily on AI. And tell me that, two years from now, anybody is going to be interested in legal advice from a non-AI-assisted lawyer. And that there won’t be “ask an AI lawyer” smartphone apps.
Now, expand this to every other professional academic position. Why should a company hire an architect if an AI can produce up-to-code plans? Already, you want a computer, not a radiologist, to check your X-rays for cancer. Programmers are already using AI to automate large chunks of their own jobs … what makes you think the rest of the job can’t be automated?
The world is going to be very, very, very different much sooner than any of us fully appreciate yet.
Even if large language models aren’t “really” thinking, even if they’re far from perfect, even if all the other objections apply.
I think the biggest factor people are missing … in hindsight, we can see this same radical change with machines doing “manual” labor. Today’s world is far, far different from that before the Industrial Revolution. But it took a loooong time for the early steam engines to be commercialized, and for their use to be widespread, and for each of the incremental improvements to become widely available.
In stark contrast, basically anybody already has access to ChatGPT, and all those same people in a month or three will have access to the next version; it’s like going from farmers using mules to plow one year’s fields to diesel tractors the next year to GPS-guided autopilot combines the year after — and not just one farmer, but all farmers everywhere across the globe.
We just don’t have any precedent for this scale of change. The mechanization of industry took centuries. Half a century ago, there was no such thing as desktop publishing. A quarter century ago, no smartphone. There are people today who were secretaries on track to retire about now with a gold watch; now, the mere concept of secretarial work is borderline incoherent. A decade ago, a certain company I know had an entire floor full of accountants doing accounting things; now, there’s just the top two from that time plus the front-desk receptionist — and I helped automate all of those jobs out of existence. Several years ago I was able to walk into an office supply store and walk out with a cheap CD of PDFs that saved me from what, not long before, would have been at least a couple of hours of billable time at a lawyer's office.
But all of that pales in comparison with what’s about to hit us now that we’ve hit the “turbo” button …
b&
I am not against AI, or the use of AI, nor robotics and automation. There is a lack of people in many, many occupations, and the world is increasingly complex, so we need help from AI. However, the use case you give in terms of cancer diagnostics (it is what I teach, by the way) involves validated AI helpers benchmarked against trained pathologists (I am in the wet biomarker field rather than imaging). In that case the pathologist and the AI can learn from each other to improve cancer diagnostics. Great. Now imagine that you remove the training of the pathologists. Are you a pathologist if you do not have the training to judge whether the AI was correct? One analysis or opinion is seldom sufficient for large decisions.
I got my first job 40 years ago by automating my own mindless job so I could do other more valuable things for the company. The automation was a 10 line or so .BAT script.