Sorry, a longer post, I'm playing catch-up.
Overall, I think ChatGPT is decent at what it can do, but it's not supposed to be able to do everything, and a lot of the negativity appears to come from people not actually using it for what it's designed for.
I see a lot of people very proud of tricking it into proving it's not a human being, even though, when asked for a personal opinion, it tells you it's not a person — so why do people think it's trying to trick them?
It tells you up front that it's not great at math, just ask it (or read the intro). Proving it's not great at math isn't that big a win.
Of course, for anything important you always need to double- or triple-check its facts, but you really need to do that with Wikipedia and even proper medical information sites; there's a reason why you should usually get a second opinion from a human medical doctor when you're dealing with something serious.
Is it as smart as a 5-year-old? I guess, if the 5-year-old speaks multiple languages fluently, can translate between them, and can write high school and university exams while maintaining a pretty solid "B" in most subjects, except math (though it got a "B" in astrophysics). It has a better knowledge base than most of the people I know, on almost any topic. So if you want to chat with someone polite and well-spoken who knows more about almost anything than most people you know, why not?
Does it misunderstand the timeline because it doesn't know Avatar 2 is out? Yes. But this isn't a "gotcha": its dataset stops at 2021, so, knowing that basic fact, you can expect timeline-related questions to be answered as if it were 2021.
"Stress-testing" it and finding something you can prove is inadequate doesn't make it useless; that's like stress-testing a dozen cars and trucks by driving them into walls and saying you're never going to use them because they can be driven into walls -- by choice or by accident -- tossing aside all the things they do really, really well if you operate them safely.
I retired in June after working in a small DTP department with an open office design; during the day we'd often chat about various topics while we waited on our computers, and in the last few years we were down to a half-dozen people, and chatting with them proved entertaining and educational. I worked for a translation company, and probably worked with people from almost every country in the world at one time or another. During conversations our topics ranged far and wide, and not everyone pretended to know everything, so there'd be honest discussions and cross-learning.
After the tests I've done so far, I view ChatGPT as an intelligent, well-versed, well-spoken co-worker. I find I can ask it for general information on almost any topic and get a clear, condensed answer; if that's not enough, I can ask it to elaborate or to point me to more detailed information. I asked it what would be the best planets or moons in the Solar system for humans to terraform, and it gave me a very solid answer, adding what general methods might be used to terraform each, and so on.
Do I want to use it to replace human interaction? Heck no, but that's not what it's for, really. Still, I'd rather chat with this than most of the random folk on the internet.
I haven't seen it do anything I'd consider any worse than what I've seen thousands of people on the internet doing, and overall I think it's behaved better.
I doubt I'd ever ask it to operate anything in my home, but to me, that's not what I'd want it for.
I'd love to have it running on an iPhone or iPad next to my computer while I'm working on personal projects, with a 3D animated avatar, using Whisper technology and artificially-generated voices to allow me to talk to it while I work, possibly with different versions and "personalities" depending on the day of the week or by what I was doing.