Todd Simkin on how to calibrate and improve your probabilistic assessments

Shane Parrish interviewed Todd Simkin of Susquehanna International Group for his podcast, The Knowledge Project.

Simkin talked about common mistakes we make when making decisions under uncertainty. One is to be excessively afraid of being wrong:

Simkin: Nobody ever goes out and buys a book of Sudoku puzzles that are already completed. But there are other contexts where people say, “I’m afraid that if I don’t know the answer here, and I don’t know who to ask for the answer, I’m going to look foolish by trying to figure it out on my own. I’m going to make mistakes as I try to go through this.”

And in the world [of trading] that I live in, for better or worse, we know that we’re going to make mistakes all the time, that we’re effectively just running experiments and getting feedback, and then updating our approach and running a new experiment the next day. Experiments fail. [In fact] that’s one of the ways that they eventually succeed: you’ve found enough ways that they fail.

Another common mistake is failing to incorporate the necessary nuance and gradations of probability into our thinking:

Simkin: The other thing that happens when people want answers is they want certainty. [But] 20% chances happen, and we don’t think through things probabilistically, right? The world is just not set up for probabilistic thinking.

I think it was Phil Tetlock whom I heard say, “We talk about things with probability numbers, but what we really have are three probability states: it certainly won’t happen, it certainly will happen, or it might happen.” And those are the only three [states] that we can process.

In order to think probabilistically, we need to constantly compare our guesses with the actual outcomes, and then update our beliefs accordingly:

Simkin: It’s really hard to say anybody’s wrong about a single probability prediction, unless they said that something was 0% or 100%.

It’d be great if you could look back at all the times that you said something had a 20% chance of happening, and see how many times it [did happen]. If it only happened 10% of the time, you’re making a mistake. It’s certainly a mistake if it happened 70% of the time, and you said it was 20%. But importantly, you might be making the mistake of being too conservative in your estimate, not discriminating enough in using the probability distribution available to you.
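To make that concrete, here is a minimal Python sketch (not from the podcast) of the calibration check Simkin describes: group your past predictions by the probability you stated, then compare each stated probability with the frequency at which those events actually happened. The track record below is invented for illustration.

```python
from collections import defaultdict

def calibration_report(predictions):
    """Group (stated probability, outcome) pairs by stated probability
    and compare each stated probability with the observed frequency."""
    buckets = defaultdict(list)
    for prob, happened in predictions:
        buckets[prob].append(happened)
    for prob in sorted(buckets):
        outcomes = buckets[prob]
        observed = sum(outcomes) / len(outcomes)  # fraction that came true
        print(f"said {prob:.0%} -> happened {observed:.0%} (n={len(outcomes)})")

# Hypothetical track record: (stated probability, did the event happen?)
history = [(0.2, False), (0.2, False), (0.2, True), (0.2, False), (0.2, True),
           (0.7, True), (0.7, True), (0.7, False), (0.7, True)]
calibration_report(history)
# said 20% -> happened 40% (n=5)
# said 70% -> happened 75% (n=4)
```

A well-calibrated 20% bucket should come out near 20%; landing at 10% or at 70% is miscalibration in exactly the two directions Simkin describes.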

He talked about two examples of people who have been able to think probabilistically about the world:

Simkin: Phil Tetlock has built an entire solution around this, which is to look for superforecasters. The way he measures the performance of the superforecasters is by giving them lots of things to predict, to come up with probability assessments on, [and also] to be able to change their probability assessments as the information set changes, which is a really important thing in decision making. [It] avoids the binary “will or won’t,” and leads to much better nuance in the middle and to really understanding the probabilities. When you have lots of measurements, then you have feedback that you can use to improve the quality of your predictions.

Weather forecasters are really good. I mean, just stupid good. If you were to plot how often it rains when they say there’s a 30% chance of rain, it’s like 30% of the time. It’s not 25% or 35%; that would be a pretty bad weather forecast. But that’s because they get daily feedback, right? They’re making predictions all the time, and they always know whether or not their prediction came true.

And sometimes they say there’s a 30% chance of rain and it rains, and that doesn’t mean they were wrong. It does mean that the people they talk to the next day are going to say they were wrong. They’re gonna say, “You know, I was listening to Andrew Freiden, my local news weather forecaster, and he said there was a 30% chance of rain. So I didn’t bring my umbrella, and now here I am, all wet. Thanks, Andrew,” right? But I know that if I listen to the local forecast over and over and over again, that prediction is going to be pretty accurate.
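Tetlock’s forecasting tournaments score this kind of track record with the Brier score: the mean squared difference between the stated probability and the 0-or-1 outcome, where lower is better. The weather-forecaster plot Simkin describes is the same calibration check as the sketch above, drawn as a curve. Here is a minimal Brier-score sketch, with invented numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    A perfect record scores 0.0; always saying 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A nuanced forecaster vs. an always-certain one, on the same made-up outcomes
outcomes = [1, 0, 1, 1, 0]
print(brier_score([0.8, 0.3, 0.7, 0.9, 0.2], outcomes))  # 0.054
print(brier_score([1.0, 1.0, 1.0, 1.0, 1.0], outcomes))  # 0.4
```

The rule punishes confident misses most heavily, which anticipates Simkin’s point below about experts who make extreme forecasts.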

That is not to say that thinking probabilistically is easy to apply in all circumstances. Of course, it has shortcomings:

Simkin: If you predict many different things — particularly if they’re uncorrelated, so you have a lot of repetitions, where you’re going to get feedback — then you’re going to have learning.

[But] if the only thing that you predict are very hard-to-predict outcomes, and you predict them very rarely, you’re not going to be able to calibrate very well at all.

And the places where people are terribly inaccurate are places where they’re making predictions about things for which they either haven’t had the opportunity for a lot of feedback, or where nobody’s going to hold them accountable for being wrong.

And the worst forecasts are often from people who are experts and make extreme forecasts. When they’re right, they’re going to be really right, because they said something extreme. And that means that they’re going to get on the news program. They’re going to be interviewed on the cable program that night, because they were the ones who said, “There’s a 100% chance that Chelsea wins the Premier League.” Then the people who have the more nuanced view say, “Well, this is what I think the probability is. But if things shake out this way, then the probability is going to shift.” Those people don’t get the spotlight.


My bullet-point summary:

- Fear of being wrong keeps people from treating decisions as experiments; in trading, mistakes are expected and used as feedback to update the next experiment.
- People want certainty, but (per Tetlock) we naturally process only three probability states: certainly won’t happen, might happen, certainly will happen.
- A single probability prediction is almost never provably wrong; calibration only shows up across many predictions, e.g. events you called at 20% should happen about 20% of the time.
- Superforecasters and weather forecasters are well calibrated because they predict often, update as information changes, and get fast, unambiguous feedback.
- Calibration breaks down where feedback is rare or nobody is held accountable; experts making extreme forecasts get the spotlight, while nuanced forecasters don’t.

Metadata

Category  Risk & Uncertainty
Tags  Shane Parrish · Todd Simkin · Probability
Source  Todd Simkin – Making Better Decisions [The Knowledge Project Ep. #119]