Explicitly Include "Gather More Information" In Your Expected Value Calculations

[Epistemic status: providing suggestions for how to think about something (or how to justify how you already think about something) based on my own experience.]

When deciding between various options under uncertainty, one attractive framework is to calculate the expected value of each option, then choose the option with the highest expected value.

For example, suppose you were choosing between options A, B and C. You would look at the range of outcomes possible under option A, multiply the probability of each outcome by the value of that outcome, and sum these products to get a single number. After doing the same for options B and C, your choice is clear.
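
In symbols: if option A's possible outcomes have probabilities $p_i$ and values $v_i$, that single number is

$$\mathrm{EV}(A) = \sum_i p_i \, v_i$$

and you pick whichever of $\mathrm{EV}(A)$, $\mathrm{EV}(B)$, $\mathrm{EV}(C)$ comes out largest.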

I find this framework helpful. Even though I'm never sure what the numbers should be exactly, sometimes rough numbers can show that of two worries which had been taking up about equal space in my mind, one of them is 100x more important than the other one*. Or perhaps one option is strictly superior/inferior to another option, and so can be cut altogether. I never crunch actual numbers with a calculator when doing this (perhaps I should) but I do visualize the summands of the mathematical expression in my head, absorbing others or being absorbed according to their magnitude. In addition to helping me make better decisions, I use the expected value framework as a ritual to give myself permission to move forward with decisions I've already made.

The expected value framework is my friend; however, unmodified expected value is more like a friend who's always interesting to be around than a friend I would move in with. It can be a rushed and blinkered method of reasoning. For example, I'm looking for jobs and an exciting opportunity has come up. It's risky but high-value. There's also a safer but lower-value job. Expected value would say that you should estimate the probability of success and the value of success in each case, multiply them together, and choose the one with the higher number. Visualize the mathematical expressions in your head. Two possible options, a clear path forward: let's dive in and start estimating probabilities and values.
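
To make that concrete with numbers that are purely made up: say the exciting job has a 30% chance of working out at a value of 100 (in whatever units you like), while the safe job is a 90% bet on a value of 25. Then

$$\mathrm{EV}(\text{risky}) = 0.3 \times 100 = 30, \qquad \mathrm{EV}(\text{safe}) = 0.9 \times 25 = 22.5,$$

and the naive calculation favors the risky job.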

Only, hold on, why are there just two possible options? What about "wait for something even better" or "learn more to reduce uncertainty"? One of these may well be the best thing to do, but I find that I don't think of them when trying to use the expected value framework. Maybe it's because numerically comparing a meta-option like "learn more" with a concrete option like "Job A" feels wrong: it feels like they ought to be on different levels. Maybe it's that the archetypal examples of expected value reasoning I have in my head are drawn from games of chance, where the probabilities and payoffs are already known.

This missing piece almost never actually leads to me making decisions too quickly. Rather, it leads to me ignoring the "rational" expected value conclusion because it seems too hasty, then worrying that this means I'm being irrational.

Hence, my suggestion for improving expected-value reasoning: always explicitly consider the "learn more, then re-evaluate" option. Put it right next to the other terms in the calculation.

$$\max\big(\mathrm{EV}(\text{Take Job A}),\ \mathrm{EV}(\text{Take Job B}),\ \mathrm{EV}(\text{Learn More})\big)$$

As an action you can take in the world, it is no less of an option than the others. Trying to calculate its expected value will lead you to ask important questions like

  • "what is the cost of delay?"
  • "what is the cost of additional deliberation?"
  • "how much more useful information can I expect to get?"
  • "how would I actually go about getting that information?"

And if your brain works like mine, considering "learn more" as a first-class option will give your decision to do it the "approved by rationality" stamp, so you can feel more comfortable making it.

* Note that this gets more complicated if you don't care about the nth unit of value just as much as the 1st; i.e. if your utility is not linear in value. I've noticed that there is almost nothing in my personal life for which I have utility linear in value. So I don't use this framework for those decisions.