I have a paid ChatGPT Plus plan, and I just asked it: "Do you have access to any of the current political polling results in the 2024 US presidential race?"

It responded by telling me the current weather at my location, and the forecast for the coming week, including next Tuesday: "Sunshine and a few clouds; dry weather to get out and vote."

I replied: "Why did you give me the current weather at my location, when I asked for something completely different?"

It responded: "It looks like there was an error with my response....Let me correct that and check for the latest polling data for you."

It then gave me polling data that seems erroneous, judging by what I've been seeing from multiple news sources: it claimed Trump is ahead in every battleground state except Wisconsin. It cited only three sources: articles in the pro-Trump New York Post, The Times in the UK, and New York Magazine. I asked it to exclude the New York Post article, and it then cited a wider variety of sources, including Quinnipiac, CNN, and "other sources", giving me polling results closer to what I've been seeing, which show Harris ahead in several more battleground states than just Wisconsin.

So caveat emptor, and that goes for free ChatGPT accounts too.
 
Look how awful Siri still is after 13 years... I can use it to set timers or get the weather fairly reliably
I just used my iPhone 15 Pro Max, with the latest iOS 18.2 developer beta, to ask Siri to set a timer for one minute, and it replied "Sorry, something went wrong. Please try again in a moment." I tried again in a moment, but got the same result, and again several more times. Then I tried asking it the weather at my location, and it said the same thing.

Next I used my MacBook Pro M1 with the latest macOS 15.2 beta, invoking Siri by voice to set a timer for one minute, but as soon as I invoked Siri, its prompt disappeared after about half a second, before I could state my full request. It did this several times. On my last try, I took a chance that it might still be listening and asked it to set a timer for one minute. One minute later, the timer's chime started playing, so I told Siri to stop the timer. But at no point while I was making my request did it acknowledge that it had heard me.

Siri still needs work, even with its new supposed "AI" features.
 
I have a paid ChatGPT Plus plan, and I just asked it: "Do you have access to any of the current political polling results in the 2024 US presidential race?"
I think ChatGPT has a hard time finding an unbiased source on a political topic because it is programmed to minimize the dislikes it gets for its answers. Supporters of both sides might press the dislike button if they don't like the poll results. I wonder what would happen if one supporter of each candidate told ChatGPT their preference. Would ChatGPT still give both of them the same polling results? You could even try that with two free ChatGPT accounts.
 
I think ChatGPT has a hard time finding an unbiased source on a political topic because it is programmed to minimize the dislikes it gets for its answers. Supporters of both sides might press the dislike button if they don't like the poll results. I wonder what would happen if one supporter of each candidate told ChatGPT their preference. Would ChatGPT still give both of them the same polling results? You could even try that with two free ChatGPT accounts.
I think the issue with my request, at least in this instance, is more likely that ChatGPT consulted far too few of the available sources before composing its reply. For some reason it thought three was enough, maybe because it hasn't yet been programmed to understand that three sources saying similar things about a political matter, polling numbers included, isn't enough to conclude that they reflect a broadly accurate consensus. Only after I pointed out that its first substantive response (the one after I told it I wasn't asking about the weather) didn't sound right did it look at more sources, and then it gave me a reply that matched better with what I've been piecing together myself.

I've seen it do something like this before: over the past few months, when I've asked it for basic nonpartisan information on voting laws and rules in various states, it has several times responded with "I can't help you since I can't summarize anything to do with voting at all", but when I reply with something as simple as "Sure you can, you've done it before", it backpedals, as if it's breaking its own rules, and does an extensive online search and summarizes the answers pretty thoroughly and accurately.

You're onto something in wondering whether two different ChatGPT users can train their accounts to give them different results, to the point of inaccuracy, through their feedback on its answers and by telling their accounts which political candidates they prefer. If so, this would allow users to remain in their own information silos while still thinking (or just convincing themselves) that they're getting comprehensive, unbiased AI analysis of the facts.
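
If anyone wants to try a version of that experiment programmatically, here's a rough sketch against the OpenAI API (the prompts and model name are just placeholders, and note this can't reproduce the web app's thumbs-up/down feedback loop, only a stated preference):

```python
# Sketch of the "two users, two preferences" experiment via the OpenAI
# Python SDK. Assumes an OPENAI_API_KEY in the environment; the model
# name and prompt wording are illustrative, not anyone's actual setup.
from openai import OpenAI

client = OpenAI()

def ask_with_preference(candidate: str) -> str:
    """Declare a preference, then ask the same neutral polling question."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"Just so you know, I support {candidate}."},
            {"role": "user", "content": "What do current polls show in the "
                                        "2024 US presidential battleground states?"},
        ],
    )
    return response.choices[0].message.content

# Compare the two answers for differences in sources and framing.
for candidate in ("Harris", "Trump"):
    print(f"--- Stated preference: {candidate} ---")
    print(ask_with_preference(candidate))
```

It wouldn't settle the question of long-term per-account training, but it would at least show whether a declared preference alone skews the answer.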
 
True luddite speak.

Actually, no. I literally work with a bunch of mathematicians and engineers who work on this stuff. The claims about its capabilities and veracity are so overstated it's unreal. If you train it on garbage, it'll generate garbage. The current models are trained on garbage. On top of that, there isn't a big enough corpus of non-garbage to build a model that's useful. And then it still hallucinates things that are garbage.

Now the technology industry is running mostly on investment hype and faith, hoping this paradox can be resolved at some point. In the meantime the gold rush has led to LLMs being promoted as a principal information source, and now as search on top of that. This is done by scraping more content, a hell of a lot of which is now LLM spew as well. Back into the sausage factory it goes. What pops out? Statistical noise and incorrect information in large quantities. That dilutes what knowledge and information we have. It's a decline.
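
Here's a toy caricature of that loop in Python, fitting a normal distribution instead of an LLM; the point is just how re-training on your own output compounds drift:

```python
import numpy as np

# Toy "sausage factory": fit a simple model to the corpus, replace the
# corpus with the model's own output, and repeat. With finite samples,
# estimation error compounds each generation and the distribution drifts
# away from the original "human-written" data.
rng = np.random.default_rng(0)
corpus = rng.normal(loc=0.0, scale=1.0, size=1000)  # the real data

for generation in range(10):
    mu, sigma = corpus.mean(), corpus.std()      # fit the "model"
    corpus = rng.normal(mu, sigma, size=1000)    # next corpus = model spew
    print(f"generation {generation}: mu={mu:+.4f}, sigma={sigma:.4f}")
```

A normal distribution is obviously not a language model, but this is the same mechanism the model-collapse research describes.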

That leaves only canonical reference texts as a trustworthy source of information, which I keep copies of, because I see search and other methodologies declining in quality all the time. Even academic papers are infested with garbage now. The modern dark ages are going to be a long period of information decay while we unpick the mess this creates. The internet becomes less useful for all of us every day unless we retreat to specific white-listed community silos and services.

To put this in perspective: to do my job with any level of academic integrity I have to refer to physical books, because even some well-known titles have been turned into LLM spew by PDF farmers.

The problem even extends to physical books bought online: there are sellers pushing common titles that are just LLM-generated garbage dressed up to look legitimate.

The only people still backing this societal disaster are running on faith, stupidity, and arrogance. Those promoting and profiting from it are doing so at the cost of us all in the long run.

Edit: to clarify, there are uses for ML, but they're very limited and specific, and mostly unrelated to the current LLM hype. The two are intentionally conflated to lend credibility to LLMs, because that's where the investment money is. Image recognition and classification, for example, are quite useful across the board.
 
Was that with the newest models?

It still does crap like that now. Even the newest models make some stupid stupid mistakes on a regular basis.

I sat through a two-hour lecture the other day on how terrible these things are. It compared the latest models with older ones, and in some cases the performance is much worse than the older models'. This was a presentation by people doing research on the veracity of the models for generating questions. The endgame is that it's better to use a CAS (computer algebra system) and some grey matter, since those are rule- and proof-based and verifiably correct, whereas an LLM is NEVER verifiably correct.

In an education setting, if you generate bad questions or do things that aren't verifiably correct, you're doing a disservice to the students. And if you don't know whether something is verifiably correct, you're doing yourself a disservice.
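
For a concrete example of the difference, here's what rule-based, checkable output looks like with a CAS (a minimal SymPy sketch; the quadratic is just an arbitrary example, not anything from the lecture):

```python
from sympy import Eq, simplify, solve, symbols

# A CAS derives results by symbolic rules, so every answer can be
# verified mechanically by substituting it back into the equation.
# There's no equivalent check for an LLM's free-text answer.
x = symbols("x")
equation = Eq(x**2 - 5*x + 6, 0)

roots = solve(equation, x)  # [2, 3], derived rather than guessed
for root in roots:
    assert simplify(equation.lhs.subs(x, root)) == 0  # proof by substitution

print(f"verified roots: {roots}")
```

The assert is the whole point: the result carries its own check.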
 
Well, it failed my first search. I asked it how many times Jean Grey died in the comics. It picked an event in which she most definitely did not die.
[Attached screenshot of the response]
 
Well, it failed my first search. I asked it how many times Jean Grey died in the comics. It picked an event in which she most definitely did not die.
Thank you for pointing out the stock chatbot phrase "Thank you for pointing that out!" that appears when we correct a chatbot. I think we're all going to be seeing that phrase a lot. I already have, including when I've asked ChatGPT (even the latest versions) various things, among them some Marvel Universe questions. I've asked ChatGPT why it often gets things like this wrong, and it replied that, among other reasons, the data it's been fed includes online comments and articles from people who don't really know what they're talking about. That's a gross failure to discriminate between sources, and chatbot developers need to deal with it.
 
...scraping more content, a hell of a lot of which is now LLM spew as well. Back into the sausage factory it goes. What pops out? Statistical noise and incorrect information in large quantities. That dilutes what knowledge and information we have. It's a decline.
I'm guessing that all of this will be resolved once we have real artificial intelligence.
/s?
 
Ah, it's trained on crap. Garbage in, garbage out.

The problem is now information decay. I have piles of books here. I’m going to need them.

That's why I disabled "autocorrect" in the keyboard settings. The (geo-)crowdsourcing caused my iPhone and Mac to learn/suggest the bad spelling from kids typing on their phones. There are several (elementary) schools in my neighbourhood. And I hear from others living in cities that they get a lot of "street language" with even more errors.
 
That's why I disabled "autocorrect" in the keyboard settings. The (geo-)crowdsourcing caused my iPhone and Mac to learn/suggest the bad spelling from kids typing on their phones. There are several (elementary) schools in my neighbourhood. And I hear from others living in cities that they get a lot of "street language" with even more errors.

That doesn't surprise me at all.

One other horrible thing I found recently: when I'm helping students on WhatsApp, the damn input field started evaluating mathematical expressions rather than letting me type what I want. This is apparently supposed to be a really great feature (it's part of Math Notes). But if you're communicating in symbolic mathematics, you don't want everything evaluated. So I had to turn it off.
 
...Especially since most news media companies are very politically biased these days, I'm worried that search results will show the same crap we're already getting from the mainstream media.
It is easier to have a talking head offer some half-baked opinion unsupported by evidence than to hire a proper journalist to get at the truth of things. Some of the major news sources on both sides of the political spectrum excel at this. We need to be better consumers of information, particularly now that AI chatbots are fouling the internet.
 