Not really; it makes mistakes that people wouldn't make. For instance, I asked it to name the chapters of a Classical Japanese book. A person asked that question would list the chapter names they know, flag the ones they're unsure of, and admit they don't know the rest. ChatGPT very confidently listed the chapter names; some were correct, and others were totally made up. People just don't do that. They know what they know and what they don't, and they know it's good practice to admit what they don't know.
ChatGPT doesn't "know" anything. It's basically a more complicated version of autocomplete, just stringing together words that are statistically likely to follow each other. It's impressive that it can form coherent, grammatical sentences that way, but there's no guarantee that what it says is accurate. The problem is that because its output is coherent and grammatical, people assume the content is factual. Sometimes it is and sometimes it isn't, and there's no easy way to tell when something it tells you is inaccurate.
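To make the "autocomplete" point concrete, here's a toy sketch of the idea, a tiny bigram model that picks each next word purely by how often it followed the previous word in some training text. This is not ChatGPT's actual architecture (the corpus, the `generate` function, and everything else here is made up for illustration); it just shows what "stringing together statistically likely words" looks like in its simplest form, and why fluency says nothing about truth.

```python
from collections import Counter, defaultdict
import random

# Tiny "training corpus" (purely illustrative).
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """String together statistically likely next words."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to how often it was seen.
        choices, counts = zip(*options.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

The output is grammatical-looking because the statistics of the corpus are grammatical, not because the model knows anything about cats or rugs. Scale that idea up enormously and you get fluent, confident text with no built-in notion of "I don't know."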