- cross-posted to:
- [email protected]
A new study from the Columbia Journalism Review showed that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok, and Google’s Gemini, are simply wrong far too often.
They are less useful than a Wikipedia search and a dictionary. They can functionally replace humans in zero fields that were not already automatable by machines. They are useless in any situation that warrants any degree of caution about safety.
85-90% is a wild over-estimate, and accuracy gets significantly worse on specific tasks. Even if it were 85-90%, that is not remotely good enough for just about anything. Humans make errors too, but inconsistently, and at a rate inversely proportional to experience. Experience makes no difference to an LLM, though: it will always make errors at that same rate. The kinds of errors it makes are also not mere missteps but often pure delusion, very far from what the input was requesting. These models cannot reason. They have no rationale. They are imitation in its emptiest form. They cannot so much as provide information reliably.
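To give a sense of why even 85-90% is nowhere near good enough, here is a back-of-envelope sketch (the step counts are my own illustrative assumptions, treating the quoted figure as a per-answer accuracy with independent errors). Even a 90%-accurate model carrying a task through ten dependent steps gets the whole thing right barely a third of the time:

```python
# Rough illustration (my numbers, not from the CJR study): if a model
# gets each step of a task right with some fixed probability, the
# chance the whole task comes out right decays exponentially.
for per_step_accuracy in (0.90, 0.85):
    for steps in (1, 3, 5, 10):
        task_accuracy = per_step_accuracy ** steps
        print(f"{per_step_accuracy:.0%} per step, {steps:2d} steps "
              f"-> {task_accuracy:.1%} chance of a fully correct task")
# 90% per step drops to ~34.9% over 10 steps; 85% drops to ~19.7%.
```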
They also ruin every single industry they come into contact with, and even worse, they have utterly destroyed the usability of the internet. LLMs are a net negative for humanity in so many different ways. They deserve as much attention and investment as chatbots did back in 2005.
Their best use case is churning out an endless stream of lifeless, soulless JPEG background noise and word-salad articles, and tricking people into handing over money or ad revenue. Scamming is the only thing they are anywhere near functionally useful for.