Decision Journals: The Boring Habit That Makes Smart People Measurably Better

Michael Mauboussin, the strategist now at Morgan Stanley, has written about decision journals in multiple papers and talks over the last fifteen years. The research he cites, mostly from Philip Tetlock's work on forecasting and from studies of expert judgment, converges on a pattern that almost nobody in business actually practises: keeping a written record of decisions — the decision, the reasoning, the expected outcome, and the actual outcome — measurably improves judgment over time. Not by a small margin. By a large one. And the habit is cheap, requires about 10 minutes per significant decision, and has been available to anyone since the invention of writing.

Almost nobody does it. Including, in most cases, the people who know they should. Including, if I'm honest, me for the first seven years of my career. The reason isn't lack of discipline. It's that the benefits are invisible in the short term and the costs are immediate. A month of decision journaling produces almost no noticeable improvement. A year of it, compared with no journal, produces a subtle but measurable shift in how you make decisions. Five years produces a material advantage in judgment that people around you can't articulate but can feel.

Why Journaling Decisions Actually Works

The mechanism is specific and has nothing to do with motivation or personality. It's about separating decision quality from outcome quality — a distinction that sounds academic and turns out to be the whole game.

Annie Duke, in Thinking in Bets (2018), makes the point more cleanly than anyone: a good decision can produce a bad outcome, and a bad decision can produce a good outcome, because the world contains randomness. If you judge decisions only by their outcomes, you'll reward luck and punish skill roughly in proportion to how noisy the domain is. In noisy domains — most of business, much of investing, almost all of career choice — outcome-based learning is actively misleading. You'll conclude the wrong things about what works.

The decision journal forces you to evaluate your reasoning separately from the outcome. Did I identify the relevant variables? Did I weight them sensibly? Did the thing I expected to happen actually happen, or did something I didn't anticipate flip the outcome? Over time, this separation produces genuine calibration — you start to know when your reasoning was sound even though the result was bad, and (more importantly) when your reasoning was weak even though the result was lucky.

The Minimum Viable Journal

Most people who try decision journals build elaborate templates with 15 fields and quit inside a month. The overhead kills the habit before the benefit compounds. A minimum-viable template has four fields, fits on a single page, and takes 10 minutes per entry.

  1. What's the decision? One sentence. Specific enough that you'll know, reading it in a year, what you were actually deciding.
  2. What's my reasoning? Three to five bullet points. The main considerations. What I'm betting on. Which assumptions are load-bearing.
  3. What do I expect to happen? Concrete predictions with rough probabilities. "60% chance this product line breaks even by month 18. 30% chance it's clearly working by month 9. 10% chance we kill it by month 12."
  4. What would make me realise I was wrong, and by when? The falsifiability test. Specific evidence, specific timeframe.

That's the full template. Keep it in a plain text file, a Notion page, an Obsidian note, whatever you use for notes. One entry per significant decision. "Significant" is roughly: a decision that affects things for more than three months, involves a meaningful amount of money or effort, or could materially affect your career or a critical relationship. Not every decision. Maybe one a week, on average.
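If you keep the journal as plain text, the template is simple enough to script. A minimal sketch in Python — the field names and output format here are mine, not any standard:

```python
from dataclasses import dataclass, field
from datetime import date

# The four fields of the minimum viable template, as a data structure.
@dataclass
class Entry:
    decision: str                   # 1. one-sentence decision
    reasoning: list[str]            # 2. three to five bullet points
    expectations: dict[str, float]  # 3. outcome -> rough probability
    wrong_if: str                   # 4. falsifier, with a deadline
    logged: date = field(default_factory=date.today)

    def render(self) -> str:
        """Render one journal entry as plain text, one page or less."""
        lines = [f"{self.logged}  {self.decision}"]
        lines += [f"  - {r}" for r in self.reasoning]
        lines += [f"  - {p:.0%} chance: {o}" for o, p in self.expectations.items()]
        lines.append(f"  Wrong if: {self.wrong_if}")
        return "\n".join(lines)
```

Appending `render()` output to a single text file is all the tooling the habit needs.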

The Review That Makes the Whole Thing Work

The journal entries are not the point. The review is the point. Without systematic review, you're just writing in a diary.

The review cadence that works for me: quarterly. Every three months, I open the journal and look at every entry from six to twelve months back. For each, I ask: did what I expected happen? Was my reasoning sound, regardless of outcome? What did I miss? What pattern is showing up in my misjudgments?

The first few reviews are painful. You'll find you were systematically overconfident on specific categories of decision. You'll find you consistently underestimated how long things would take. You'll find you overrated certain kinds of evidence and underrated others. These findings are the entire value of the exercise. The review is where the self-knowledge gets extracted.

Tetlock's research on superforecasters — the top 2% of forecasters in a massive US intelligence community study — showed that the single most reliable predictor of forecaster improvement was systematic review of past predictions with honest self-assessment of errors. Not intelligence. Not training. Not domain knowledge. The review habit. The superforecasters had, on average, less domain expertise than professional analysts, but they reviewed their work more ruthlessly. Over time, the review gap compounded into a measurable capability gap.

The Specific Biases the Journal Exposes

A few patterns that show up reliably for people who keep journals long enough to see patterns.

Overconfidence in specific domains

Most people are well-calibrated on general-knowledge questions and poorly calibrated on questions within their own expertise. The journal reveals this. You'll be right about 80% of the time on the 50% predictions and 55% right on the 90% predictions. The gap is your overconfidence signature — and it's usually concentrated in the domains where you consider yourself expert. Humbling and useful.
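The signature is easy to compute once the journal holds enough graded predictions: group them by the probability you stated, then compare the stated probability with the observed hit rate. A sketch, with made-up data mirroring the numbers above:

```python
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_probability, came_true) pairs.
    Returns observed hit rate per stated-probability bucket."""
    buckets = defaultdict(list)
    for p, hit in predictions:
        buckets[p].append(hit)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Hypothetical journal data: 10 predictions stated at 50%, 20 at 90%.
preds = ([(0.5, True)] * 8 + [(0.5, False)] * 2
         + [(0.9, True)] * 11 + [(0.9, False)] * 9)
calibration(preds)   # {0.5: 0.8, 0.9: 0.55}
```

The gap between each key and its value is the overconfidence signature the journal makes visible.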

Planning fallacy

Everything takes longer than you expect. Every project. Every negotiation. Every hire. The journal shows this as a systematic bias — your time predictions are short, on average, by 40 to 80%. Knowing this doesn't completely fix it (the fallacy persists even after you know about it), but it lets you apply a correction. "I think this will take three months" becomes, in practice, "plan for five."
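The correction itself is one line of arithmetic over past entries: the mean ratio of actual duration to estimated duration. A sketch, with hypothetical project data:

```python
def correction_factor(history):
    """history: list of (estimated_duration, actual_duration) pairs
    from past journal entries. Returns the mean actual/estimated ratio."""
    ratios = [actual / est for est, actual in history]
    return sum(ratios) / len(ratios)

# Hypothetical past projects, durations in months: (estimated, actual).
past = [(3, 5), (2, 3.5), (6, 9)]
f = correction_factor(past)
# f is about 1.64 here, so "three months" becomes roughly five planned.
```

Multiplying fresh estimates by the factor doesn't cure the fallacy, but it prices it in.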

Narrative coherence beats probability

People consistently over-weight decisions that come with a clean story and under-weight decisions that are probabilistically better but narratively awkward. The journal reveals this because you can see, in retrospect, that the story-heavy decisions often went worse than the unflattering-but-higher-EV alternatives would have. Kahneman's work on narrative thinking specifically warns about this; the journal lets you detect it in your own decisions.

Underweighting base rates

The inside view — "this specific hire will work because she's great" — consistently beats the outside view — "40% of executive hires fail regardless of individual excellence" — in live decisions. The journal reveals the pattern: your 70%-confidence hires work out at about 55%. The gap is the base-rate-ignorance tax. Once you can see it, you start triangulating inside-view judgments against outside-view base rates, and your calibration improves.
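The simplest mechanical version of that triangulation is a weighted blend of the two views. This is a sketch, not a published method; the weight is an assumption you'd tune against your own journal's track record:

```python
def triangulate(inside_view, base_rate, weight=0.5):
    """Shrink an inside-view probability toward the outside-view base rate.
    weight = how much trust to place in your inside view (an assumption;
    calibrate it against past journal entries)."""
    return weight * inside_view + (1 - weight) * base_rate

# Gut says 70% this hire works; ~40% of executive hires fail, so the
# base rate of success is ~60%. The blend lands in between.
triangulate(0.70, 0.60)   # 0.65
```

If your journal shows your 70%-confidence calls landing at 55%, that's an argument for a lower weight.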

The Objection Most People Have

"I'll never stick with this." Mostly true, at first. The habit takes about six months to become automatic. The reason most people abandon it is that they start with too much structure, too many fields, too formal a template. The simpler the entry, the higher the probability you'll actually write one.

The second objection: "The entries are uncomfortable to revisit." True. Looking at your past self's overconfidence and errors is specifically uncomfortable. That discomfort is part of the mechanism — it's what produces the recalibration. If the reviews were pleasant, they wouldn't work.

The third objection: "My decisions are too varied to see patterns." Usually wrong. Keep the journal for a year. You'll see patterns. They may not be the patterns you expected — they're often subtler — but they're there.

What the Journal Doesn't Do

Worth being clear about the limits. The journal won't make you smarter. It won't fix structural problems in your business or your career. It won't prevent you from making big mistakes — it will, at best, let you recognise the shape of your mistakes faster than you otherwise would.

The journal also works less well for short-feedback domains where you'd naturally get calibration without writing anything down. A trader making 50 trades a week with clear P&L doesn't need a journal to know her calibration — the market is telling her daily. A manager making two strategic bets a year, where each bet plays out over 18 months, is in a long-feedback domain where natural calibration is impossible. The journal matters most in long-feedback domains, and business is full of them.

The Thing Nobody Notices Until Year Three

The deepest benefit of a decision journal is unexpected and takes years to surface. After enough time keeping one, you start making decisions differently — not because you're consulting the journal, but because the habit of articulating reasoning upfront has shifted how you think. You're forced to be specific about what you're betting on. You notice, in the moment, when you're relying on story rather than probability. You ask yourself, preemptively, what would make you wrong — because you know you'll have to write it down afterwards anyway.

The journal, at that point, has become a kind of thinking scaffold rather than just a record. The entries get shorter because the reasoning happens faster. The decisions get subtly better because the upstream habits have improved. This is the version of the practice that compounds. Most people quit before reaching it, which is why it remains one of the quieter but real sources of long-run judgment advantage — it's available to everyone, requires almost no talent, and almost nobody sticks with it long enough to get the full return.