
AI Was Supposed to Save Us Time. It Didn't

From fact-checking hallucinations to SUVs that second-guess drivers, AI often complicates life more than it simplifies it.

There are two primary perspectives on AI in the workplace. One is about job destruction: robots and AI will take our jobs and leave us all like Matt Damon in Elysium. The other camp sees AI as the engine of future prosperity, freeing us from soul-crushing tasks. Far less has been said, however, about the ways AI has made our lives more difficult.

Let's start with the simple facts. Every answer from ChatGPT, Grok, Gemini, or Perplexity is synthesized from numerous sources, a "wisdom of the crowds" approach. Notice how often the results are driven by the same dozen websites. Fact-checking is actually harder, not easier, and the answers are not always correct. In fact, according to the Columbia Journalism Review:

"Collectively, they provided incorrect answers to more than 60 percent of queries. Across different platforms, the level of inaccuracy varied, with Perplexity answering 37 percent of the queries incorrectly, while Grok 3 had a much higher error rate, answering 94 percent of the queries incorrectly."

"Most of the tools we tested presented inaccurate answers with alarming confidence, rarely using qualifying phrases such as 'it appears,' 'it's possible,' 'might,' etc., or acknowledging knowledge gaps with statements like 'I couldn't locate the exact article.' ChatGPT, for instance, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred responses, and never declined to provide an answer."

Traditional search engines returned a list of sources, and the user chose among them. Selecting reliable information was both easier and left up to the user. Generative search tools, by contrast, parse and repackage information themselves, cutting off traffic to the original sources. Their polished, conversational outputs often hide serious underlying problems with information quality, which makes vetting and media literacy much harder. The parsed information arrives as a bundle whose quality is difficult to judge; in the best-case scenario, you get a good answer.

This is especially problematic in areas like health. Perhaps the AI produces a great diagnosis, or perhaps it simply returns the average of existing content.

This is only one example of many. Our daily tasks can become more challenging when assisted by AI. Take what Renault has done with the driving modes of the new Koleos SUV. According to the carmaker:

"The AI driving mode on the Renault Koleos is an adaptive feature that automatically adjusts the vehicle's driving parameters, such as steering and engine response, to suit your current driving style and conditions. It selects the most appropriate driving mode from options including Comfort, Eco, Sport, Snow, and Off-road to optimize the driving experience based on real-time data."

The problem is that this assumes the system knows what the user actually wants or needs. In an emergency, the system could make a choice detrimental to the driver. AI is supposed to make life easier for consumers, but it has shown that's not always the case. One consequence is significant AI bias in decision-making. The tools can have adverse effects, guiding consumers away from what they actually need—and this is not necessarily the AI's fault.

As users, we often fail to describe what we really need, and the answer reflects the quality of the question. We leave too much to chance when a little extra effort up front could yield better results the first time around. Often, getting a usable answer out of an AI is harder than simply researching the topic or creating something original ourselves.

Another reality is that consumers and companies often ask AI for help and get very similar results. At best, the work is generic. AI certainly helps the creative process, but it can also make us lose valuable time double-checking and re-prompting answers.

When Good AI Makes Us Worse

Fabrizio Dell'Acqua, a postdoctoral researcher at Harvard Business School, highlights a paradox: stronger AI tools can actually weaken human performance. In his experiment, 181 professional recruiters were asked to evaluate 44 résumés, judging candidates' math ability based on hidden test scores. Some recruiters had no AI help, others had "bad" AI, and a third group had access to "good" AI.

The results were striking. Recruiters with high-quality AI leaned too heavily on it. They worked less, thought less, and blindly followed the system's recommendations—missing exceptional candidates and failing to improve over time. In contrast, those with weaker AI support were forced to stay alert. They questioned the AI's output, refined their own judgment, and actually got better at the task.

Dell'Acqua built a model to explain this trade-off: the better the AI, the less incentive humans have to think critically. He calls this "falling asleep at the wheel." The lesson is clear—AI should be used as a tool, not a crutch. Otherwise, the very systems designed to help us may erode our skills, slow our learning, and amplify costly mistakes.
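A toy simulation makes the trade-off easier to see. The numbers and the "trust rule" below are invented for illustration and are not taken from Dell'Acqua's study; the only idea borrowed from it is that people verify less when the AI seems more accurate.

# Toy model of "falling asleep at the wheel"; all parameters are hypothetical.
import random

random.seed(0)

def missed_exceptional(ai_accuracy_typical: float, n: int = 100_000) -> float:
    """Share of 'exceptional' candidates missed when trust rises with AI quality."""
    p_verify = 1.0 - ai_accuracy_typical     # assumption: better AI, less human checking
    exceptional = missed = 0
    for _ in range(n):
        if random.random() >= 0.05:          # 5% of candidates are exceptional
            continue
        exceptional += 1
        ai_right = random.random() < 0.20    # the AI rarely spots them
        human_checks = random.random() < p_verify
        if not (ai_right or human_checks):
            missed += 1
    return missed / exceptional

for acc in (0.60, 0.75, 0.90):
    print(f"typical-case AI accuracy {acc:.0%} -> "
          f"exceptional candidates missed {missed_exceptional(acc):.0%}")

In this toy setup, the better the AI performs on routine cases, the more exceptional candidates slip through, because the human stops looking. That is the mechanism the model describes, not the real experiment's numbers.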

It seems that AI can catapult us into a future where more cognitive effort will be required—not for the tasks we ask of AI, but in evaluating the results. Critical thinking and error detection may well be the new blue-collar jobs.