Does Using AI Actually Make You Worse at Problem-Solving?

New research from major universities shows that using AI assistants for even brief periods can measurably reduce problem-solving ability afterward. Brain scans reveal changes in how the brain processes and recalls information.

Martin Holloway · Published 6h ago · 6 min read · Based on 10 sources

A study published today found something worth paying attention to: using AI assistants for just 10 minutes can measurably affect how well you solve problems afterward, especially once the AI stops helping. Researchers from Carnegie Mellon University, MIT, Oxford, and UCLA tracked hundreds of people through three experiments involving math problems and reading comprehension tasks.

The findings showed that when people had relied on AI chatbots like ChatGPT, they were significantly more likely to give up on problems or get answers wrong once the AI was taken away. This pattern showed up across different types of tasks, suggesting a consistent effect on how our brains approach problems rather than just interference with learning a specific skill.

What Your Brain Does Differently With AI

Parallel research from MIT's Media Lab adds a neurological angle to these findings. Nataliya Kos'myna's team used EEG brain scans — the kind that measure electrical activity across different brain regions — to look at what happens when people use ChatGPT, do traditional Google searches, or work without digital help while writing essays. The scans showed real differences in how the brain connects information and recalls it depending on which tool was in use.

The MIT study, titled "Your Brain on ChatGPT," documented what researchers call "cognitive debt." Think of it like this: when AI handles a cognitive task for you, the brain regions normally responsible for reasoning and remembering that information show less activity. It's not that the brain shuts down — it's that it relies less on its own machinery when the tool is available to do the work.

Trust, Confidence, and Knowing When to Use AI

Additional research examined how much people trust AI systems and when they choose to use them. In programming contexts, something counterintuitive emerged: people with higher confidence in AI's abilities often made worse choices about when to actually use it. They turned to the AI when they might have learned more by solving the problem themselves.

However, the pattern wasn't the same for everyone. People with stronger foundational programming skills showed better judgment about when AI assistance genuinely helped versus when it might get in the way of learning.

How Students Actually Use AI

A survey of 1,000 college students from the University of Southern California revealed a clear preference in how students use AI. Most want the AI to simply complete tasks for them, what researchers call "executive help," rather than to provide clarification, research guidance, or other forms of support that keep the student in the driver's seat. This preference aligns with the experimental findings, in which relying on AI for task completion led to dependency rather than skill development.

The educational research also connects to something researchers already know: roughly one in ten school-age children struggle with working memory — the mental capacity that lets you hold and manipulate information while solving problems. When AI assistance is introduced early in learning, these challenges may actually get worse over time.

Adoption Patterns Across Different Contexts

Research with 363 Chinese users examined what draws people to AI assistants using an established framework for technology adoption. The study identified three main factors: how pleasing the tool feels to use, how good the information is, and users' existing experience with AI. People's decisions to adopt AI were most strongly driven by perceived usefulness and ease of use — in other words, the immediate payoff. Longer-term cognitive effects rarely entered the calculation.

A smaller study with 68 language learners found similar results. AI conversation tools did help with specific language skills, but the research highlighted an important pattern: AI can either enhance human thinking or replace it, and understanding the difference matters.

The broader context here involves a historical parallel. Roughly three decades of technology adoption have shown this pattern before, most clearly when GPS navigation became widely available. Early studies showed that GPS improved how quickly people found routes, but later research found that heavy GPS users showed measurable declines in spatial reasoning and navigation ability. The current AI research suggests a comparable trade-off at a more fundamental level, affecting how we approach problems in general rather than a single skill.

What This Means for Work and Learning

The convergence of behavioral observations, brain imaging evidence, and real-world usage data points to a central tension in how organizations are rolling out AI tools. While companies are pursuing AI adoption for immediate productivity gains, this research flags something that may cost more in the long run: the potential erosion of independent problem-solving ability.

MIT's Michiel Bakker, one of the researchers involved in the multi-institutional study, suggests that widespread AI adoption might yield short-term productivity gains while eroding fundamental problem-solving skills over time. The concern is particularly acute in professional work, where the ability to think independently and adapt remains critical.

The research documents a measurable phenomenon — brief interaction with AI creating lasting changes in how people approach subsequent problems. However, the field currently lacks long-term data on whether people recover these skills once they stop using the AI, or whether specific strategies could prevent the erosion in the first place.

In my view, it's important to recognize the limitations of these studies as they exist right now. They examine relatively controlled situations with clearly defined problems and short-term use of AI. Real life is messier: people interact with AI systems continuously, across work and personal life, often simultaneously. The cumulative effects of constant AI assistance across multiple areas of life may intensify what these studies show, or may introduce entirely different patterns we haven't seen yet. That's an empirical question that requires more data.

The practical takeaway is straightforward: AI assistance tools appear to deliver immediate capability gains while potentially creating a form of dependency. How we manage that trade-off — learning to use AI as a tool for enhancement rather than replacement — may determine whether current AI strategies deliver lasting value or inadvertently weaken the cognitive skills knowledge workers actually depend on.