The Role of AI in Enhancing Analytical Thinking Skills

Published on Apr 3, 2026 · Darnell Malan

You asked AI for an analysis—why do you feel less certain afterward?

You paste a messy situation into a chat—“Should we raise prices?” “Why is churn up?”—and get a clean, confident answer in seconds. Then you try to act on it, and your certainty drops. The output feels complete, but you can’t tell which parts came from your reality and which parts were filled in to make the story hang together.

That gap shows up because the tool can sound precise while quietly guessing at missing inputs: baseline numbers, constraints, customer behavior, timing. If you don’t name those, you’re deciding to accept defaults you didn’t choose.

The fix starts before you judge the reasoning: figure out what you wish you had told it.

When a “good-sounding” answer shows up fast, what decision are you actually making?

What you wish you had told it usually becomes obvious right after you read a sharp, tidy recommendation. It lands fast—“raise prices 5%,” “cut this channel,” “switch to annual plans”—and the speed itself feels like evidence. But the real decision you’re making in that moment isn’t whether the recommendation is right. It’s whether you’re willing to adopt a set of unstated assumptions as if they were facts.

If you accept the output at face value, you’re also accepting defaults about your baseline (what’s normal), your constraint (what can’t change), and your goal (what “better” means). In a churn analysis, that could be silently treating seasonality as noise, assuming your mix shift doesn’t matter, or acting like support backlog can’t drive cancellations. Those are business choices, not wording choices.

You can’t audit everything mid-sprint. So you need a quick way to surface the missing inputs you’d want before trusting the story.

Before you trust the output, pin down the inputs you wish you had

A typical moment: you’re about to paste the recommendation into a doc or Slack, and you realize you don’t even know what data it assumed you had. That’s the cue to pause and list the inputs you’d be annoyed to discover were wrong later. Not “more context” in general—specific blanks.

Start with three buckets. Baseline: what does “normal” look like (last quarter’s churn, last year’s seasonality, typical win rate)? Constraints: what can’t move (price floors, headcount, contract terms, launch dates)? Objective: what are you optimizing for (net revenue, cash, retention, payback time)? If you can’t answer one, write it down as an explicit instruction the model isn’t allowed to guess around, like “assume churn is seasonal unless we see a cohort break.”

This does take extra minutes, and it can feel slow when you’re under pressure. But it’s faster than debating a polished story built on made-up numbers—and it sets up the next step: asking what would have to be true for the plan to work.

The moment to ask: ‘What would have to be true for this to work?’

That “what would have to be true” question matters most right when you’re tempted to treat the recommendation like an action item. In a real week, you’re juggling a backlog, a deadline, and a few partial dashboards. So instead of asking whether the output is “good,” you ask what conditions must hold for it to be a smart move.

Take “raise prices 5%.” For that to work, you need demand to be less sensitive than the model implied, your sales team to hold the line, and your churn risk to stay within a range you can absorb. If any of those aren’t true, the same change can flip from “easy win” to “quiet revenue leak.” The point isn’t to predict perfectly. It’s to name the handful of assumptions that carry the result.
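To make that concrete, here is a rough sketch in Python, with entirely made-up numbers for price, account count, and churn, of how much extra churn a 5% increase can absorb before the easy win becomes the leak:

```python
# Rough sketch with made-up numbers: how much extra churn can a 5% price
# increase absorb before it stops paying for itself? Swap in your own baseline.
current_price = 100.0   # average revenue per account per month (hypothetical)
accounts = 1_000        # paying accounts today (hypothetical)
increase = 0.05         # the proposed 5% price increase

baseline_revenue = current_price * accounts

def revenue_after_increase(extra_churn: float) -> float:
    """Monthly revenue if prices rise 5% and extra_churn of accounts leave."""
    return current_price * (1 + increase) * accounts * (1 - extra_churn)

for extra_churn in (0.00, 0.02, 0.04, 0.05, 0.06, 0.08):
    delta = revenue_after_increase(extra_churn) - baseline_revenue
    print(f"extra churn {extra_churn:.0%}: revenue change {delta:+,.0f}")

# Break-even sits at 1 - 1/1.05, roughly 4.8% extra churn: one "must be true"
# statement you can actually watch, instead of a vibe.
```

The exact threshold matters less than the fact that it exists and that you can watch it.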

Do it as a short list: 3–5 “must be true” statements, each tied to something you can check or observe. The limitation is obvious: you may not have clean data, and you may not get agreement fast. Still, this turns the next prompt into the useful one: “What evidence would change your recommendation?”

Run two competing stories instead of polishing one narrative

“What evidence would change your recommendation?” usually gets one tidy list—then you start editing it into a single clean narrative. That’s when you should force a split. Ask for two stories that could both fit the same facts: one where the recommendation works, and one where it fails for a specific reason you can name.

Example: “Raise prices 5%.” Story A: demand is steady, discounting stays flat, and you gain net revenue with manageable churn. Story B: deals slip, reps widen discounts to compensate, and you lose the accounts that subsidize your support load. Same action, different mechanism. If you can’t explain the mechanism, you’re just picking a vibe.
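If numbers help, the same kind of toy sketch can carry both stories, this time with a hypothetical discount lever added, so each story names a mechanism you could later observe instead of a mood:

```python
# Sketch of the two-story check, again with made-up numbers. The point is that
# each story rests on signals (discount rate, churn) you could actually collect.
list_price = 100.0
accounts = 1_000
baseline_discount = 0.10   # reps currently give about 10% off list (hypothetical)
baseline_revenue = list_price * (1 - baseline_discount) * accounts

def story(extra_churn: float, discount: float) -> float:
    """Net monthly revenue after a 5% list-price increase under one story."""
    return list_price * 1.05 * (1 - discount) * accounts * (1 - extra_churn)

story_a = story(extra_churn=0.02, discount=0.10)  # demand steady, discounting flat
story_b = story(extra_churn=0.03, discount=0.16)  # deals slip, reps widen discounts

print(f"Story A vs baseline: {story_a - baseline_revenue:+,.0f}")
print(f"Story B vs baseline: {story_b - baseline_revenue:+,.0f}")
# Same 5% action; the sign flips on two checkable signals: the discount rate on
# new deals and the churn in the accounts that subsidize support.
```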

This is harder than it sounds because it forces you to take your ego out of the loop: you’ll feel yourself defending the first answer. It also takes time to collect signals for both stories. Still, once you have two plausible paths, you can verify a few key facts quickly instead of arguing about the wording of one polished plan.

Quick verification habits: where to look, what to check, how to keep it lightweight

Once you have two plausible paths, the fastest move is a quick “reality scan” before you repeat the recommendation. Open the one dashboard or report you already trust (billing, product analytics, CRM), and check only the numbers that would flip the choice: the baseline, the trend break, and the segment doing the damage. If the model claims “churn is up,” verify whether it’s logo churn or revenue churn, and whether it’s concentrated in a plan, cohort, or channel.
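If your billing data exports cleanly, that check can be a few lines. The sketch below assumes a hypothetical CSV with plan, mrr, and churned columns; the file name and column names are placeholders, so adapt them to whatever your tool actually exports:

```python
# Minimal sketch of the "which churn, and where" check. Assumes a CSV export of
# this quarter's accounts with a plan label, monthly recurring revenue (mrr),
# and a True/False churned flag -- all hypothetical names.
import pandas as pd

accounts = pd.read_csv("accounts_this_quarter.csv")

# Logo churn: share of accounts that cancelled. Revenue churn: share of MRR lost.
logo_churn = accounts["churned"].mean()
revenue_churn = accounts.loc[accounts["churned"], "mrr"].sum() / accounts["mrr"].sum()
print(f"logo churn {logo_churn:.1%}, revenue churn {revenue_churn:.1%}")

# Is the damage concentrated? Break both numbers out by plan (or cohort, channel).
by_plan = accounts.groupby("plan").apply(
    lambda g: pd.Series({
        "logo_churn": g["churned"].mean(),
        "revenue_churn": g.loc[g["churned"], "mrr"].sum() / g["mrr"].sum(),
    })
)
print(by_plan.sort_values("revenue_churn", ascending=False))
```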

Then do one outside check. Look for a primary source you can link: a pricing page, a contract clause, a release note date, a support backlog graph. If the output relies on market facts (competitor pricing, regulation, benchmarks), don’t trust a single citation—spot-check two independent sources or assume it’s uncertain.

Keep it lightweight by time-boxing: 10 minutes, three checks, write down what you found and what you couldn’t confirm. The real cost is that you’ll sometimes pause a “good” idea—but you’ll stop shipping decisions built on guessed inputs, which is usually the expensive mistake.

Leaving the chat with stronger thinking: a repeatable ‘closeout’ for real work

Those 10 minutes only pay off if you leave the chat with something you can reuse. Before you paste anything into a doc, do a quick closeout: write the decision in one sentence, then list (1) the 2–3 assumptions that carry it, (2) the three numbers or facts you checked, and (3) what you didn’t verify yet. If you’re handing it off, add “Here’s what would change my mind,” so the next person knows where to push.
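If it helps to have something concrete to copy, the closeout can be as small as a handful of fields pasted next to the decision. The shape below is one possibility, filled in with the hypothetical pricing example from earlier, not a standard:

```python
# One possible shape for the closeout note; every field value here is a
# hypothetical example, not a recommendation.
closeout = {
    "decision": "Raise the Pro plan list price 5% starting next quarter.",
    "assumptions_that_carry_it": [
        "Extra churn from the increase stays under roughly 5%.",
        "Reps do not widen discounts to offset the new list price.",
    ],
    "checked": [
        "Last quarter's revenue churn by plan (billing dashboard)",
        "Average discount on new deals this quarter (CRM report)",
        "Competitor's public pricing page",
    ],
    "not_verified_yet": [
        "Price sensitivity of the smallest customer cohort",
    ],
    "what_would_change_my_mind": "A cohort break in churn within 60 days of the change.",
}

for field, value in closeout.items():
    print(field, "->", value)
```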

This isn’t free. It adds a few minutes and can feel like paperwork when the team wants speed. But it turns AI output into an audit trail you can defend—and a prompt template you can run again the next time the same kind of question shows up.
