AI doesn't save time. It shifts work.
The expectation: AI takes over tasks and we have more free time. The reality: We work differently, not less.
The illusion of saving time
A recent study found that experienced developers actually work 19% slower with AI. AI does not change the amount of work, but rather its nature. What this means for employees.
Take a typical promise: the AI drafts in five minutes a text that would otherwise take an hour. The five minutes are real. The text is there. But then the real work begins. Are the facts correct? Does the tone match the brand? Are the formulations precise enough? Has the AI invented something that sounds plausible but is wrong?
Five minutes of generating turns into 30 minutes of checking, correcting, and adjusting. In addition, there is a problem that many underestimate: AI systems hallucinate. They invent facts that sound plausible but are wrong. References that do not exist. Statistics that no one has collected. Names of studies that never existed.
How often does this occur? That depends on the task. For general texts, the hallucination rate of current models such as GPT-5.2 is around 8 to 11%. With web search enabled, it drops to below 6%. That sounds acceptable, but when it comes to source citations, the picture is different: A study in the Journal of Medical Internet Research examined literature searches and found that GPT-4 invented 28.6% of all source citations. With this type of task, the problem persists even with newer models.
Every piece of AI-generated content needs a plausibility check. This costs time that was not needed before. The hour has not disappeared. It has shifted: from the keyboard to the head, from writing to deciding and verifying.
What the research shows
A randomized controlled study by METR examined this year how AI tools influence the productivity of experienced developers. The result is surprising: with AI support, developers took 19% longer than without it.
Even more remarkable is the perception gap: before the study, participants expected a 24% speed-up. Afterwards, they still believed they had been 20% faster. Set against the measured 19% slowdown, feeling and measurement diverge by 39 percentage points.
A study by the Asana Work Innovation Lab with over 9,000 knowledge workers confirms the pattern:
- 62% say AI output does not meet quality standards
- 55% had to completely redo work started by AI
- 65% report more coordination work due to AI
For intensive AI users with more than 20 hours per week, these figures rise to 69%, 68%, and 90% respectively.
Work does not disappear, it migrates
AI does not change the amount of work, but its nature. I observe three shifts again and again:
From execution to control. Previously, employees completed tasks themselves. Today, they direct how an AI completes them. That sounds easier, but it is not. Writing good prompts, providing context, iterating on results: this requires practice and precision.
From routine to quality control. Routine tasks are faster, but the review remains. Anyone who used to write a text themselves knew every sentence. Anyone who approves an AI text must question every sentence. Is this number correct? Does this source exist? Has the AI invented something that sounds convincing? The responsibility for the result remains with the human, and hallucinations make this responsibility more difficult.
From production to decision. AI provides options. Three text variants, five image suggestions, ten research results. Someone has to choose. This decision work is new and it is exhausting.
Three examples from everyday work
Text creation
Before: One hour of writing, thinking, formulating, discarding, restarting.
After: Five minutes of prompting, then 30 minutes of reading, evaluating, correcting, checking facts, adjusting tone, obtaining approval.
The work has shifted: from the creative process to editorial quality assurance.
Research
Before: Two hours of searching, reading, taking notes, comparing sources.
After: Ten minutes of querying the AI, then 60 minutes of evaluating results, verifying sources, tracking down hallucinations, identifying gaps, and drawing the connections yourself. Particularly tricky: the AI sometimes provides references that do not exist at all. Anyone who does not check this ends up citing phantom studies.
The work has shifted: from collecting to critical evaluation.
Visualization
Before: Hours of designing, creating variants, adjusting details.
After: Ten minutes of feeding the image generator, then one hour of viewing results, selecting the right one, formulating corrections, generating again, making the final selection.
The work has shifted: from designing to curating.
What this means for employees
Anyone who expects to do less through AI will be disappointed. Anyone who understands that the nature of work is changing can adjust to it.
New skills instead of less work. Prompting, evaluating, deciding: these are skills that many first have to develop. They do not replace the old skills; they come on top of them.
A different kind of fatigue. Routine work is monotonous but predictable. Decision work is varied but exhausting. Anyone who spends all day evaluating and selecting AI results is tired in a different way by evening than someone who has spent the day writing.
A recent study in the Annals of Neurosciences examined decision fatigue in the context of AI for the first time. The result: Long-term AI use correlates significantly with mental exhaustion and declining decision-making confidence. The assumption that AI reduces decision fatigue has not been confirmed.
The responsibility remains. If an AI text contains an error, it is not the AI's fault. The responsibility lies with whoever approved the text. The tool has changed; the accountability has not. The German Press Council clarified this last year: editorial teams bear responsibility under press ethics for all contributions, regardless of how they were created.
When AI actually saves time
There are cases in which AI brings real time savings. But they are more specific than often claimed:
For tasks with a clear right or wrong. Data formatting, code conversion, standard translations. Here, the quality check is quick because it is objectively measurable.
For high volumes with the same pattern. A hundred product descriptions following the same template. Here, the effort of crafting a good prompt pays off because it is reused many times over, as the sketch after these examples shows.
For tasks that were not done at all before. Anyone who previously did not create visualizations for customer meetings does not save time, but gains a new opportunity.
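To make the second case concrete, here is a minimal sketch of what "one prompt, reused a hundred times, with a cheap objective check" can look like. Everything in it is illustrative: the template, the word-count limits, and the `generate` placeholder that stands in for whichever text model you actually use.

```python
# Illustrative sketch: one carefully written prompt template, reused for many
# products, plus a cheap objective check. Template, limits, and the generate()
# placeholder are assumptions, not a recommendation for a specific model or API.
from dataclasses import dataclass


@dataclass
class Product:
    name: str
    features: list[str]


PROMPT_TEMPLATE = (
    "Write a product description of 80 to 120 words for '{name}'. "
    "Mention these features: {features}. Tone: factual, no superlatives."
)


def generate(prompt: str) -> str:
    """Placeholder: call whichever text model you actually use here."""
    raise NotImplementedError


def quick_check(text: str, product: Product) -> list[str]:
    """Fast, objective checks; they do not replace a human read."""
    issues = []
    words = len(text.split())
    if not 80 <= words <= 120:
        issues.append(f"length {words} words, expected 80-120")
    if product.name.lower() not in text.lower():
        issues.append("product name missing")
    for feature in product.features:
        if feature.lower() not in text.lower():
            issues.append(f"feature not mentioned: {feature}")
    return issues


def run(products: list[Product]) -> None:
    for product in products:
        prompt = PROMPT_TEMPLATE.format(
            name=product.name, features=", ".join(product.features)
        )
        text = generate(prompt)
        problems = quick_check(text, product)
        print(f"{product.name}: {'OK' if not problems else '; '.join(problems)}")


# Example call once generate() is wired up:
# run([Product("Trail Jacket X", ["waterproof", "hood", "three pockets"])])
```

The point is the structure, not the code: the prompt is written once, the checks are objective, and the human review can concentrate on the exceptions the checks flag.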
Plausibility checks as a new core competence
In its guidelines, the BSI warns: “AI systems are often generic and may also contain false but convincingly formulated information.” The University of Basel makes it clear in its guidelines: “The products of AI tools are not scientific sources.”
What does that mean in practice? Every AI output needs a check:
- Check whether named sources actually exist (a scripted sketch for this follows below)
- Compare facts with reliable primary sources
- Check for currency, because AI training data is often one to two years old
- Check plausibility against your own expertise
This review work is new. It costs time. And it requires competence that many first have to build up.
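Part of the first point, checking whether named sources exist at all, can be scripted. A minimal sketch, assuming the citations carry DOIs: it asks the public Crossref API whether each DOI is registered. The listed DOI is a placeholder.

```python
# Minimal sketch: check whether cited DOIs are registered at all.
# A resolving DOI only proves the reference exists, not that it supports
# the claim it is attached to.
import requests  # third-party package: pip install requests


def doi_exists(doi: str) -> bool:
    """Ask the public Crossref API whether a DOI is registered."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200


# Placeholder DOIs: replace with the references from the AI-generated draft.
citations = ["10.1234/placeholder-doi"]

for doi in citations:
    status = "found" if doi_exists(doi) else "NOT FOUND - possible hallucination"
    print(doi, status)
```

Even then, the script only covers existence; whether the source actually says what the text claims still needs a human read.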
Realistic expectations instead of disappointment
The question is not: How much time do I save with AI? The better question is: Which work do I want to trade for which other work?
Anyone who likes to write but does not like to check will not be happy with AI text creation. Anyone who likes to decide but does not like to produce will love AI.
AI is not an efficiency miracle. It is a tool that redistributes work. Anyone who understands this can use it realistically. Anyone who expects time savings and gets decision load will be frustrated.
The honest balance: AI changes how we work. We do not work less because of it.
Frequently asked questions
Does AI really not save time? It does, in certain cases, but less often than promised. The METR study shows that experienced developers were 19% slower with AI, even though they felt faster.
What are AI hallucinations and why are they problematic? AI systems sometimes invent facts, sources, or statistics that sound plausible but do not exist. For general texts, the error rate of current models such as GPT-5.2 is around 8 to 11%. The rate is significantly higher for source references. Therefore, all AI output needs to be checked for plausibility, especially when sources are cited.
Why does working with AI still feel exhausting? Because decision work is cognitively demanding. Research shows that long-term AI use correlates with mental exhaustion and declining confidence in one's own decisions.
How do I deal with the decision load? Limit the options. Do not let the AI deliver ten variants, but three. Define criteria beforehand according to which you select.
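One way to define criteria beforehand is a small, weighted rubric, sketched here with placeholder criteria and weights; adapt both to the task at hand.

```python
# Illustrative sketch: decide the selection criteria before looking at variants.
# Criteria, weights, and ratings are placeholders.
CRITERIA = {"factually correct": 3, "matches brand tone": 2, "right length": 1}


def score(ratings: dict[str, int]) -> int:
    """ratings: 0 (no) to 2 (fully) per criterion, weighted by importance."""
    return sum(CRITERIA[name] * value for name, value in ratings.items())


variants = {
    "variant A": {"factually correct": 2, "matches brand tone": 1, "right length": 2},
    "variant B": {"factually correct": 1, "matches brand tone": 2, "right length": 2},
}

best = max(variants, key=lambda name: score(variants[name]))
print(best, score(variants[best]))
```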
When is AI really worthwhile? With high volume, clear patterns and objectively verifiable results. Or if you want to do things that you have not done before due to lack of capacity.
Will that get better the more I use AI? Yes. With practice, prompts become more precise and quality control faster. But the work does not disappear, it only becomes more efficient.
Would you like to understand how the work in your team is actually changing through AI?
We will find out together in a free initial consultation.