How to Think in Bets: The Annie Duke Framework That Beats Most MBA Strategy Classes

Annie Duke played professional poker for 18 years, winning more than $4 million in tournament earnings, before retiring to write and consult. Her book Thinking in Bets (2018) condenses what she learned about decision-making under uncertainty in a domain where you're forced, every hand, to distinguish between the quality of a decision and the quality of its outcome. The core insight — that outcomes are noisy signals of decision quality, and that judging decisions purely by their outcomes is one of the most expensive cognitive mistakes business executives make — is so simple it's easy to dismiss. It's also, in my experience, the single most important framework mid-career operators underappreciate.

The Central Claim

Every significant decision is, in Duke's framing, a bet. You're choosing an action in the presence of uncertainty, with the expectation that some future states of the world will favor that action and other future states won't. The bet has an expected value (given your estimate of probabilities), a variance (individual outcomes can land far from that average), and a realised result (what actually happened, which is a single draw from the distribution).

The mistake: conflating the quality of a bet with the quality of its outcome. A +EV bet can lose. A -EV bet can win. If you evaluate your decisions only by results, in a world with real randomness, you'll conclude the wrong things — and you'll keep making the same kinds of errors because your feedback signal is corrupted.
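The point that a +EV bet can lose is easy to verify numerically. The following is a minimal sketch (the 60/40 odds and ±1 payoffs are illustrative, not from the book): a bet with a clearly positive expected value still ends whole sequences in the red a meaningful fraction of the time.

```python
import random

random.seed(0)

def simulate(p_win=0.60, payoff_win=1.0, payoff_loss=-1.0, n=10):
    """Play n independent bets; return the total realised result."""
    return sum(payoff_win if random.random() < p_win else payoff_loss
               for _ in range(n))

# Expected value per bet: 0.60 * (+1) + 0.40 * (-1) = +0.20, clearly +EV.
ev_per_bet = 0.60 * 1.0 + 0.40 * (-1.0)

# Fraction of 10-bet sequences that nonetheless finish negative.
trials = 10_000
losing_runs = sum(simulate() < 0 for _ in range(trials))
print(f"EV per bet: {ev_per_bet:+.2f}")
print(f"10-bet sequences ending in the red: {losing_runs / trials:.0%}")
```

Roughly one sequence in six loses money despite the favourable odds. Judge the bettor on any one of those sequences and you'd draw exactly the wrong conclusion.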

In poker, the randomness is visible and legible. You can compute expected values. You can see directly that a hand which should have won 90% of the time will still lose 10% of the time, and attributing that specific loss to "bad play" would be wrong. In business, the randomness is harder to see, and the attribution errors are constant. The executive who made a good strategic call that happened to coincide with an unlucky macro shock gets blamed. The one who made a reckless bet that happened to hit gets promoted. The two signals — decision quality and outcome — are confused in the organisational memory, and the wrong lessons get drawn.

The Three Core Practices

1. Before the decision, state the probabilities out loud

When you're about to make a decision, articulate — in your head or in writing — your estimate of the probability of various outcomes. Not "I think this will work." "I think there's a 65% chance this works, a 25% chance it's inconclusive, and a 10% chance it's a clear failure."

The act of producing specific probabilities does three things. It surfaces hidden assumptions — you can't say 65% without implicitly weighing evidence. It creates a basis for later calibration — when the outcome lands, you can compare it against your prediction and update your model. And it forces honesty — it's much harder to convince yourself a shaky bet is "likely" when you have to put a number on "likely."

Duke suggests even simple three-point estimates are useful. Most people, if they produce numbers at all, produce them at single-point resolution ("I'm 70% confident"). A three-point estimate — "my main scenario is 60%, my upside is 25%, my downside is 15%" — maps better to how uncertain situations actually play out.
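A three-point estimate only pays off if you score it against the outcome later. One standard scoring rule for this is the Brier score; the sketch below (the journal entry and its numbers are hypothetical) shows how a recorded forecast can be graded once the result is known.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    """A three-point estimate, recorded before the decision is made."""
    label: str
    p_success: float
    p_inconclusive: float
    p_failure: float

    def brier(self, outcome: str) -> float:
        """Multiclass Brier score: 0 is perfect; lower is better calibrated."""
        probs = {"success": self.p_success,
                 "inconclusive": self.p_inconclusive,
                 "failure": self.p_failure}
        return sum((p - (1.0 if k == outcome else 0.0)) ** 2
                   for k, p in probs.items())

# Hypothetical journal entry: prediction logged up front, outcome graded later.
f = Forecast("launch in region X", p_success=0.65,
             p_inconclusive=0.25, p_failure=0.10)
print(f"Brier score if it succeeds: {f.brier('success'):.3f}")
print(f"Brier score if it fails:    {f.brier('failure'):.3f}")
```

The asymmetry is the feedback signal: a confident forecast that misses is penalised far more heavily than one that lands, and the running average over many journal entries is a direct measure of your calibration.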

2. After the outcome, grade the decision, not the result

The harder half of the practice. When the outcome lands, don't ask "was the outcome good?" Ask "was the decision sound given what I knew at the time?" These are often different questions.

Judging a decision purely by its outcome is what Duke calls "resulting," and a good decision with a bad outcome is where resulting does the most damage. Maybe the dice rolled against you. Maybe an unknown unknown showed up. Maybe the 20% downside case happened to be realised. If the decision was sound — reasoning solid, probabilities sensible, action consistent with the probabilities — the decision wasn't wrong. The world was unlucky. You don't change your decision process in response.

A bad decision with a good outcome is the other trap. You got away with it. The temptation is to congratulate yourself, which reinforces the bad decision and increases the chance you repeat it. The right move is to acknowledge that you were lucky and identify what, specifically, would have made the decision sound — and to incorporate that into future decisions even though this one worked out.

This is harder to do honestly than it sounds. Our brains are designed to protect us from the discomfort of being wrong. The decision journal (covered in a companion piece) is the most reliable mechanical aid — writing down the reasoning upfront, reviewing it alongside the outcome, extracting the lesson from both pieces.

3. Build a "truth-seeking" group around yourself

The third practice is social. Duke's argument: individual thinking, even with good frameworks, runs into a specific cognitive limitation — you are the worst judge of your own biases. The fix is a small group of peers committed to giving each other honest feedback, challenging reasoning rather than affirming conclusions.

Duke calls this a "truth-seeking group," the frame borrowed from her poker study sessions with other professionals. The rules, as she describes them: genuine commitment to accuracy over comfort, willingness to disagree, specific structured practices for doing so (e.g., stating probabilities, articulating reasoning, identifying what would change your mind).

For most business contexts, this translates to: a small number of peers, at a similar seniority level, with whom you can honestly discuss decisions in progress. Not your team (power dynamics corrupt the honesty). Not your boss (more power dynamics). Peers. And not every colleague qualifies — you need people specifically willing to push back when they see flawed reasoning, which is not the same as saying every colleague who's smart.

In practice, most senior operators have a handful of people who fill this role — a peer at another company, a former mentor who's now a friend, a spouse who works in adjacent fields. The specific identity matters less than the specific relationship: honest, substantive, engaged with the actual reasoning.

The Applications That Matter Most

Duke's framework applies everywhere, but some applications have higher marginal value than others.

Hiring decisions

Executive hiring has perhaps the worst combination of high consequence and high noise in professional life. The outcome is critical (the wrong hire can set back a department by a year). The signal is weak (interviews correlate poorly with performance). The probabilities are hard to estimate (base rates for executive hiring success are in the 50-65% range depending on the study). Thinking in bets forces explicit estimation of probabilities — "I'm 60% confident this hire will work" — and post-hoc calibration when it does or doesn't. Over a career, this calibration can measurably improve hiring quality, as opposed to the standard practice of "trust your gut," which keeps producing those same 50-65% success rates.
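The career-long calibration loop can be made concrete with a few lines of bookkeeping. The sketch below uses an invented hiring journal (the probabilities and outcomes are illustrative): group recorded confidence levels into buckets, then compare each stated probability with the observed hit rate.

```python
from collections import defaultdict

# Hypothetical hiring journal: (stated probability the hire works out, did it?)
journal = [
    (0.60, True), (0.60, False), (0.60, True), (0.60, True),
    (0.80, True), (0.80, True), (0.80, False), (0.80, True),
    (0.50, False), (0.50, True), (0.50, False), (0.50, False),
]

# Bucket outcomes by the confidence stated before each hire.
buckets = defaultdict(list)
for p, worked in journal:
    buckets[p].append(worked)

# Calibration check: did the "80% confident" hires work out ~80% of the time?
for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {hit_rate:.0%} over {len(outcomes)} hires")
```

Persistent gaps between a column's stated and observed rates are the actionable signal: systematically observing 40% where you stated 80% says your "80% confident" is really a coin flip, and your future estimates should shift accordingly.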

Strategic bets

Entering a new market, launching a new product, making an acquisition. Each of these is a high-variance bet with substantial uncertainty. The framework asks: what's my probability of success, on what timeframe, with what specific evidence would I update? The discipline produces clearer strategies because it forces you to separate "this is a good bet at these probabilities" from "this is definitely going to work." Many strategic decisions are best understood as good bets rather than sure things, and the organisational language often obscures this.

Personal financial decisions

Most personal investing decisions — to take the new job, sell the house, start the business — carry meaningful variance. Thinking in bets treats them honestly. "I think there's a 60% chance this works out well over five years, a 25% chance it's a wash, and a 15% chance it's a real setback." Better than the usual "I think this will be great." It preserves your ability to distinguish, later, between decisions that were well-made but unlucky and decisions that were overconfident and exposed.

The Cultural Resistance

The framework faces specific resistance in corporate cultures. The language of probability can sound weak — "I'm 65% confident" reads, in rooms accustomed to certainty, like hedging. CEOs who state probabilities rather than conclusions get pushed back on. Boards want decisions, not odds.

The workaround is not to hide the probabilistic thinking but to translate it into action-oriented language while preserving the honesty. "We're going to commit to this market entry on the basis that it's more likely than not to work within 18 months, and here's what we'll watch for in the first six months that would change our view" is the same as a probability estimate, stated in a register that organisations can hear.

The specific skill: speak in actionable commitments, but think in probabilities privately and in your decision journal. The decision journal entries are probabilistic. The room-facing statements are resolved into action. Both pieces are needed; confusing them in either direction — publicly hedging every claim, or privately fooling yourself into false certainty — produces worse outcomes.

The Honest Limit

Thinking in bets works best in domains with observable outcomes and enough decisions to produce a statistically meaningful track record. It works less well in genuinely one-shot high-stakes decisions where you can't build a calibration sample — picking a spouse, choosing where to live for the next 20 years. In those domains, the probabilistic framing still has some value as a thinking tool, but it can't produce the feedback loop that generates better calibration over time.

For the vast majority of decisions in a senior career, though — thousands of hiring calls, strategic bets, negotiation moves, product decisions — the framework is the single highest-leverage mental upgrade available. The cost is the discomfort of stating probabilities explicitly. The return is, over a decade, genuinely better judgment than peers who continue to think in certainties and learn from outcomes alone. Most MBA programmes don't teach this, for reasons I've never fully understood. The return on spending 60 pages with Duke's book is higher than the return on most semesters of business school.