Solve with AI

Is ChatGPT Making C-Level Smarter or Dumber?

Introducing the S.I.M.S.™ Framework for Strategic Intelligence via Mental Simulation

Sameer Khan
Aug 02, 2025
Hey Productivity Explorer,

One question that has kept me curious as we delve deeper into solving complex problems with AI is how to distinguish signal from noise.

In this context, “noise” refers to the chatter and ongoing debate about whether using AI actually boosts human intelligence or quietly erodes it. Is the signal we’re chasing making us sharper as leaders, or introducing hidden cognitive debt? (We’ll unpack this more as we go.)

Instead of joining the endless debate, I thought: why not flip the script and examine both sides in action?

I spent my evenings over the last few weeks reading top research papers on this topic, reviewing work from arXiv, MIT, Cornell University, and other independent peer-reviewed sources (links and sources below).

Let’s break down what makes for smart versus self-defeating AI use in the real world. To make this clear and to keep it grounded in the kind of executive decision-making my readers face every week, I want to introduce you to two hypothetical, but entirely plausible, mid-market execs: Alex and Paul.

It was a Wednesday afternoon, just past 2:00 PM. Two execs were preparing for critical presentations to investors that would shape their next quarter. Their desks were cluttered with reports, half-finished coffees, and open laptops. But here’s where their similarities ended.

Paul, CEO of a 40-person SaaS startup, stared at the blinking cursor on his ChatGPT window, typing quickly: “Give me a summary of recent market trends in enterprise software.” Seconds later, he copied the polished response into his slide deck. Easy enough. "This AI thing is fantastic," he thought, confident he was ready for tomorrow's investor call.

Across town, Alex, COO at a growing 175-person logistics firm, took a different path.

Instead of straightforward prompts, she crafted a complex scenario: “You’re the most skeptical investor I could face. Aggressively challenge my growth projections, market assumptions, and logistics strategies. Push back on every weakness you find.”

What followed was a rapid-fire, intense debate. ChatGPT, in its skeptical investor role, pointed out vulnerabilities Alex hadn’t yet fully considered. Her assumptions were questioned, her strategy pressure-tested, and her confidence stretched, but critically, not broken. Each question forced her to think deeper, sharpen her insights, and refine her arguments.

After a challenging back-and-forth simulation, Alex refined her narrative, stress-tested her logic, and adjusted her strategy. By evening, her thinking was sharper, her message clearer, and her confidence stronger.
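For readers who want to script this kind of adversarial simulation rather than retype it each time, here is a minimal Python sketch. The `build_simulation_prompt` helper, the role text, and the focus areas are my own illustration (not from this post); the resulting message list follows the common system/user chat format and can be sent to whichever chat-model API you use.

```python
def build_simulation_prompt(role: str, materials: str, focus_areas: list[str]) -> list[dict]:
    """Construct a chat-message list for an adversarial role-play simulation.

    role        -- the persona the model should adopt (e.g. a skeptical investor)
    materials   -- the plan, deck summary, or projections to be attacked
    focus_areas -- the specific assumptions you want pressure-tested
    """
    focus = ", ".join(focus_areas)
    system = (
        f"You are {role}. Aggressively challenge the plan below. "
        f"Push back on every weakness you find, especially: {focus}. "
        "Ask one pointed question at a time and do not accept vague answers."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": materials},
    ]


messages = build_simulation_prompt(
    role="the most skeptical investor I could face",
    materials="Q3 plan: 40% ARR growth via two new logistics hubs in the Southeast.",
    focus_areas=["growth projections", "market assumptions", "logistics strategy"],
)
# Send `messages` to any chat-completion API; to iterate, append the model's
# objection and your rebuttal to the list and re-send the whole conversation.
```

The design point is the loop, not the API call: each round of objection and rebuttal is appended to the same message list, so the model keeps context and the pressure compounds, just as in Alex's session.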

The next morning, Paul felt uneasy midway through his presentation when an investor asked a probing question he hadn’t anticipated. He stumbled, offering vague answers, his credibility slightly dented.

Alex, however, navigated her session masterfully. Every objection had already been rehearsed, every doubt already countered. She owned the meeting.

This example is meant to help you answer the fundamental question facing all of us in the AI era:

Is ChatGPT making you smarter, or is it quietly making you dumber?

In the next section, we’ll dive into exactly why some execs use AI tools to sharpen their leadership intelligence, while others unwittingly allow the same tools to dull their cognitive edge.

Table of Contents

  1. A Tale of Two Execs: Passive Paul vs. Active Alex

  2. What the Research Actually Says About AI and Executive Intelligence

  3. Introducing the S.I.M.S.™ Framework: Strategic Intelligence via Mental Simulation

  4. Advanced Simulation Prompts for Executives

  5. Which Exec Are You Becoming?

    1. Mega prompt: Structure your Executive Intelligence Accelerator Engine.

What the Research Says: AI, Intelligence, and the “Dumbing Down” Debate

The contrast between Alex and Paul’s experiences might seem dramatic, but it’s grounded in real research.

While headlines like “Is AI Making Us Lazy?” or “ChatGPT is Dumbing Down Users” might be overly sensational, beneath the noise lies a signal: it’s not AI itself that determines intelligence or performance, but how leaders use it.

Recent research out of the MIT Media Lab explored this exact dynamic. In a study titled “Your Brain on ChatGPT”, researchers observed that when participants leaned heavily on generative AI to complete tasks, they experienced reduced neural engagement, weaker memory retention, and lower cognitive recall. The more passive the use, the worse the outcome, especially in tasks requiring analysis or creativity.

That same study also drew a sharp line: when users simply accepted AI answers without challenging them, their cognitive offloading increased, meaning they stopped forming and reinforcing their mental models. It’s like skipping the workout but expecting to build muscle.

This maps almost perfectly to what we saw earlier with Paul. By outsourcing his thinking to ChatGPT, he bypassed the effort required to internalize insights, simulate objections, and prepare for strategic depth. He had answers, but not understanding.

Now contrast that with Alex. What she did aligns closely with best practices from both academic research and field observations. A recent meta-analysis of over 8,000 participants found that while generative AI alone doesn’t outperform humans in creativity tasks, humans who use AI as an augmentation tool, actively steering, questioning, and refining, do significantly better.

The effect size was notable: a +0.27 gain in outcomes when AI was used actively and strategically, versus a –0.86 drop when users relied blindly on AI outputs.

Source: arXiv.org

Even more revealing, a 2024 study on design thinking and AI showed that people who were exposed to AI-generated examples in early ideation tended to converge faster, but their ideas lacked diversity and originality. In other words, AI guided them, but often into a creative rut.

This is what I call the convergence trap.

It’s what happens when leaders let ChatGPT complete their thinking instead of complicating it in productive ways. Passive users (like Paul) converge too quickly, missing nuance. Active users (like Alex) simulate, debate, and refactor, producing sharper decisions that hold up under pressure.

I’ve also seen this firsthand in my work with founders and operators. The ones who come out on top don’t use ChatGPT just to get answers; they use it to run mental simulations, role-play customer objections, and stress-test their GTM strategies.

They’re not offloading intelligence. They’re compounding it.

So what’s the takeaway?

AI isn’t inherently making you smarter or dumber. It’s a mirror. It reflects your intent, magnifies your method, and scales the quality of your thinking. If you’re thoughtful, rigorous, and simulation-driven, it sharpens you like a whetstone. If you’re lazy, rushed, and surface-level, it dulls your edge without you even realizing it.

In the next section, I’ll break down the exact framework I teach leaders to transform from crutch to catalyst and start using ChatGPT as the ultimate simulation engine for better decision-making.

Simulation Is the New Leverage: Introducing the S.I.M.S.™ Framework
