« All Episodes

Seek Context to Offset Prediction Errors

Published 2/1/2022

When we have context, we predict more accurately. When we translate numbers into more meaningful concepts, we create context where previously things were fuzzy. This makes our predictions more meaningful and grounded.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)
Prediction is hard, and it's especially hard when you're using precise mechanisms to explain your prediction. We're going to start out today's episode with a little game that shows the difficulty in precise prediction. My name is Jonathan Cutrell, and you're listening to Developer Tea. My goal on this show is to help driven developers like you find clarity, perspective, and purpose in your careers. When we estimate, we are predicting. Even when we try to estimate with a lot of padding, we are still using the same precise mechanisms that we use when we aren't adding any padding. This is difficult because we are trying to distill a bunch of information down into a single vector of information. That information includes a bunch of uncertainty; it includes our ability to commit to something, maybe the team's ability to commit to something; it might include some kind of historical reference and even a trajectory. It very often includes things that we hope for, but it's also influenced by things that we think others hope for. It's influenced by what we think people want to hear. Even when we have disclaimers in front of these requests for estimation, these social factors don't just magically disappear. We still have pressure, and we still have a lot of competing factors that make estimation and prediction hard. This is true about our work, and it's true in almost every scenario where we're predicting things. The same pressures, all of the various things that go into a prediction, all of the randomness, and a variety of factors that are very difficult to weigh likely go into virtually every meaningful prediction we try to make. Let's walk through a simple example of why it is difficult, not only to predict something, but also to feel good about a prediction.
Imagine you were to hear that a project was going to take about four and a half months, and then from someone else you heard that the project was going to take about seven months. In our heads, the difference between these numbers is fairly small. But the actual difference, if we were to look at it, is that four and a half is only about 64% of seven. It doesn't feel like there's that much of a disparity between the numbers. Part of the reason for this is that our brains are just not very good at computing differences between numbers. This is especially true if we start getting into much higher numbers, or into very low numbers, which have the same problem. The high number of digits breaks the way that we think. When we have these comparisons between numbers, it's hard to grasp exactly what the differences are. In the absence of the two numbers, in other words, let's say that you only had an estimate of seven months, or you only had an estimate of four and a half months, you're likely to think about those numbers in a very similar way, even though there's a large disparity between them. When we're doing these kinds of gut estimations, or even when we do an estimation that is driven by consensus, a 10, 15, or 20% error just on perception is easy to come by. In other words, this is not an error based on unexpected factors, but an error between what you're saying and what you mean. This is especially true when we start getting into exponential or multiplied errors. For example, let's imagine that you have $500 and your friend has $5,000, and you each spend $3 a day. You would run out of money in about five and a half months, but they would run out of money in about four and a half years. As an engineer, you probably say, well, of course, this is just math; of course, this is the way things work.
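The arithmetic behind both of these comparisons is easy to verify. A quick sketch, using the hypothetical figures from the example:

```python
# Relative size of two estimates: 4.5 months vs. 7 months.
print(f"4.5 is {4.5 / 7:.0%} of 7")  # about 64%

# Runway at a fixed burn rate: $500 vs. $5,000, each spent at $3/day.
def runway_days(balance: float, daily_spend: float) -> float:
    """Days until the balance runs out at a constant daily spend."""
    return balance / daily_spend

print(f"$500 lasts about {runway_days(500, 3) / 30:.1f} months")   # ~5.6 months
print(f"$5,000 lasts about {runway_days(5000, 3) / 365:.1f} years")  # ~4.6 years
```

The point isn't the computation itself, which is trivial, but that the intuitive "feel" of these quantities diverges so far from the computed ratios.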
But intuitively, when we think about the differences between these numbers, how things add up and how things multiply, it's hard to conceptualize the difference between them, and specifically the relative difference. As a quick example, if you were to think of the numbers 14 and 16, you probably think they're very close together. And if you were to try to guess, just intuitively, what the percentage difference is between 14 and 16, it's likely that you wouldn't guess that 14 is only 87.5% of 16. Again, we have to tread carefully here, because everybody thinks about numbers differently, but that's kind of the point. There are going to be some people who listen to this podcast and think, well, I thought they were actually further apart than that. And some people will listen and think, well, that's exactly what I thought they were, because they did the quick math, or because they've worked with numbers in that range pretty often. But the simple truth remains that what we believe to be a concrete and easily represented concept, these numbers or estimations, along with all the different factors and the way that we weigh and communicate them, is not concrete at all, and our perception changes what the numbers mean. We can easily snap to close values: for example, if one person estimated 14 points and another person estimated 16 points, or whatever system it is that you use, the order of magnitude of those two estimations is very close. But again, if you were to retrospectively look at, say, a 10% error between your estimations and your actual work completed, the gap between 14 and 16 is greater than a 10% error. So what can be done? The truth is that estimation, prediction, all of this has to be done with the meaning attached. This is especially true on our software teams.
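The 14-versus-16 intuition can be checked the same way; the 10% threshold here is just the hypothetical retrospective error rate from the discussion:

```python
# How far apart are the estimates 14 and 16, really?
low, high = 14, 16

print(f"{low} is {low / high:.1%} of {high}")  # 87.5%

relative_gap = (high - low) / high
print(f"relative gap: {relative_gap:.1%}")  # 12.5%
print(f"exceeds a 10% error budget: {relative_gap > 0.10}")  # True
```

Two "adjacent" point values on a typical estimation scale already blow past a 10% error budget, which is exactly the perception gap the episode describes.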
It's especially true when we're trying to predict what's going to happen with our jobs or with our finances: we need to think in terms that convey some context. And we would probably benefit from doing a good amount of interpretation, of expansion on these numbers, of showing that context as often as we can, in the plainest terms that we can find. Instead of relying on a single-vector metric for any particular thing, we should be thinking about a picture. We should be thinking about a more human way of describing the numbers that we're using in our discussions. Thanks so much for listening to today's episode of Developer Tea. If you enjoyed this discussion, the best way you can give back to the show is to leave a rating or a review in whatever system you use for listening to podcasts. Of course, iTunes is one of the most important ones. Also, if you want to continue this discussion, head over to the Developer Tea Discord community. This is a totally free community for you to join. We're never going to charge money for it or try to monetize it in any way. To join, just go to developertea.com/discord. The more engineers we get in there talking about these kinds of topics, the better the discussion will be. Thanks so much for listening, and until next time, enjoy your tea.