Is AI Killing Critical Thinking? Researchers Have Some Answers!
At the end of one of our recent full-day AI workshops, a senior manager said, “I’m worried that with AI doing so much, junior employees aren’t really learning how to think critically anymore. They’re just… trusting AI.”
It’s not the first time we’ve heard this. And maybe you’ve wondered the same thing: is AI killing our critical thinking?!
Microsoft recently surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT and Copilot) at least once a week (source). The findings? Really interesting, actually.
What Microsoft discovered:
1. Yes, too much trust in AI equals less critical thinking! People with less confidence in the subject tend to trust GenAI too much, think less critically, and not question its answers. Employees with more confidence, however, are more likely to question, verify, and refine AI outputs, even if that takes more effort.
2. Critical thinking isn’t disappearing but is just evolving. GenAI is changing how we think critically. Instead of spending hours gathering information, employees now focus on:
- Verifying AI’s answers,
- Integrating AI outputs to fit specific tasks,
- Overseeing AI tools to ensure quality work (what Microsoft calls “task stewardship”).
As one participant said, “Instead of Googling a lot and reading threads, now, GPT gives me answers instantly. It’s faster, but I still check if it’s right.”
3. Convenience comes at a cost. GenAI makes tasks easier, whether drafting emails, writing code, or solving problems, but here’s the catch: the more we rely on AI, the less we exercise our own problem-solving muscles.
So, what’s the play here?
As leaders, it’s on us to make sure our people don’t lose their critical thinking. We can help them build:
- Deeper subject matter expertise and confidence to challenge AI outputs.
- Skills in verification, response integration, and AI oversight.
We’d love to hear your thoughts on how you’re seeing this play out in your team.