I wish generative AI (genAI) tools were truly useful. They’re not. I keep tinkering with the programs — ChatGPT, Meta AI, Gemini, etc., etc. Mind you, they look useful if you don’t know any better. Their answers sound plausible. But if you look closer, even if you forgive them for their hallucinations — that is, lies — you’ll see all too often that the answers they give are wrong.
If you’re operating at, say, a high-school-grade report level, genAI answers are fine. (Sorry, teachers.) But if you’re digging deep into a subject, which is where I live, it’s another story.
I know more than the average large language model (LLM) about subjects such as Linux and open-source software. What genAI can tell you about those subjects might sound right, but the deeper you dive into the details, the poorer the information becomes.