According to Microsoft, employees who use AI tools like Copilot or ChatGPT risk "long-term reliance and diminished independent problem-solving," which could harm their ability to think critically at work.

Microsoft claims that using generative AI in the workplace may affect workers’ capacity for critical thought.
Researchers at Microsoft and Carnegie Mellon University have raised concerns about the effects of generative AI on our minds after surveying 319 knowledge workers to understand how the technology is affecting the workplace.
The report pointed to the "deterioration of cognitive faculties that ought to be preserved," indicating that worries about the detrimental effects are legitimate.
It cited studies on how automation affects human labor, which found that when workers are denied the chance to exercise their judgment, their cognitive abilities become "atrophied and unprepared" to handle situations outside of their daily routine.
Similar consequences have been found elsewhere: smartphones have been linked to reduced memory, and social media to shorter attention spans among users.
“Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving,” researchers reported.
According to the study, users applied critical thinking primarily to verify the quality of their work, and the more confident workers were in the generative AI tool in question, the less inclined they were to apply their own critical thinking.
“When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship,” the study revealed.
More research is needed on the topic, the researchers say, particularly as generative AI technologies continue to evolve and change how we interact with them.
They urged generative AI developers to use their own data and telemetry to learn how these tools may “evolve to better support critical thinking in different tasks.”
“Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows,” the researchers added. “To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.”
Relying too heavily on AI tools could become a problem.
All of this is concerning given that Microsoft has integrated its AI-powered Copilot capabilities across its broader software suite, which is the industry standard. Meanwhile, some employees are also bringing AI tools into their organizations without permission.
One frequently stated assumption about AI is that, in addition to reducing costs, it can eliminate repetitive chores from daily work, allowing workers to focus on more creative and less laborious endeavors.
Achieving that requires finding the right balance between fully automated, human-in-the-loop, and fully human tasks.
Stanford research has found that employees are more productive and efficient when they collaborate with an AI assistant. However, we can quickly become overly dependent on these tools, which can lead to conformity or overconfidence in the technology.