
Avoid Unnecessary Prediction

Published 10/15/2021

When you have to predict, your prediction is likely to contain error. Sometimes that error is easy to handle; other times, its effects can be catastrophic.

The next time you have the urge to predict the future, ask yourself: what benefit do you gain from predicting this now? Can you avoid unnecessary prediction altogether by taking a different route?

๐Ÿ™ Today's Episode is Brought To you by: Compiler

Compiler is a brand new podcast from Red Hat where the hosts answer the most complicated questions about our work. Demystifying the tech industry, one question at a time! Find it wherever you download podcasts, or on the official website.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)
When I ask you to think about the future, to predict it in some way, particularly your future, what are you likely to predict? For the most part, humans are, as we know, incredibly optimistic. This isn't always true; sometimes we catastrophize and imagine things turning out much worse, the worst possible outcome, or one of the worst possible outcomes. Very rarely would we predict something realistic. Very rarely do we predict something that actually happens. If you were to try to imagine right now, for example, what kind of music you're going to like in ten years, it's likely that you would pick something that you currently like. But if you were to go back and look at your musical taste ten or twenty years ago, you probably wouldn't have expected it to change as much as it has.

Of course, we have talked about this kind of thing on the show before. We especially talk about our errors in prediction in relation to software estimation: predicting how much energy a particular task is going to take, how much something is going to take to build. This is a constant question that we face as software engineers: how long would it take, or how much would it cost, to build X, Y, or Z? But there is a litany of other errors that we make when we try to predict things. The reason for this is pretty obvious: there is so much that we can't calculate. And even if we could calculate it, there are also random factors. Imagine that we knew every probability perfectly; imagine that we knew, for example, that a probability was ninety-nine out of a hundred, right? A ninety-nine percent chance that X, Y, or Z is going to happen. Most of the time we can safely predict something that falls in line with that ninety-nine percent, but even when we have that perfect stat in front of us, even when we have a perfect piece of data that tells us an incidence rate, we still can't predict when that one percent is going to happen. So not only do we have an incalculable amount of data in front of us when we're making any kind of prediction, but even when we have all of the available data, which is virtually impossible to achieve, we can still make a prediction that is errant. In other words, something that is very unlikely to happen, something we wouldn't predict, could still happen. So we have both systematic errors in prediction and occasional errors in our predictions. In other words, on some occasions, even if we had perfect data, we would predict wrongly, simply because that's how incidence works. That's how chance works.

With all of this said, we still have to spend a lot of our time making predictions. Basically any choice that we make is likely some kind of prediction. When we choose to do something that leads us down path A versus path B, we are ostensibly making a prediction that path A is better for us in some way. It achieves our goals. Maybe it's an easier path. Maybe it gives us more options. Whatever you're optimizing for, your choice of path A versus path B is made based on some kind of predicted likelihood of a positive outcome. As humans, we have to rely on a constant refinement of our decision-making process because we are so bad at prediction. So if we are so bad at prediction, even though we have to do it a lot, what else can we do about this? Of course, there are the systems we put in place, whether it's a handful of heuristics or an assisted decision-making process using some kind of algorithm; there are a lot of ways to make better decisions.
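To put a number on that incidence-rate point, here is a minimal sketch in Python. The ninety-nine percent success rate is just the hypothetical figure from above: even when the rate is known exactly, you can't tell ahead of time which attempt will be the rare failure.

```python
import random

# Hypothetical illustration of the point above: we "know" the incidence rate
# perfectly (a 99% chance of success), yet we still can't predict *which*
# attempt will be the one-in-a-hundred failure.
SUCCESS_RATE = 0.99

failures = [attempt for attempt in range(1, 101)
            if random.random() > SUCCESS_RATE]

# Run this a few times: the count hovers around one, but the position of the
# failing attempt changes every run; that's occasional error, not systematic.
print(f"Failed attempts this run: {failures}")
```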
But I want to talk about one way that maybe you're not thinking about, and hopefully it will come to mind when you make your next prediction. But first, we're going to talk about today's sponsor. Developer Tea is grateful for the support of Compiler. Compiler is a brand new podcast answering the hardest, most complicated questions about the tech industry. And one of the most important things about a good podcast is that you have fun while you do it. I had a chance to talk about exactly that with Brent and Angela. I asked them what one of their most enjoyable moments was when recording this season. I think one of my favorite episodes and one of my favorite moments is we have this episode all about superstitions, tech superstitions, and trying to figure out what they are and if they actually work and then how they operate in our lives. Of course, we went to the people who encounter superstitions probably the most, which is people who work in tech support. So they actually end up becoming our experts and the people we interview, and they tell all of these stories about the strange things that they themselves do, or that other people do, in order to try to get their machines to work. You've got to hold it at particular angles in the sun. You've got to hit the side of it three times. Right, not four or two; that won't work. I was just dying that entire episode. It is just so deeply, deeply, deeply funny to me. Thanks so much to Compiler for their support of Developer Tea. You can find Compiler wherever you find podcasts.

Now you might be thinking, well, so what? So what if we're bad at prediction? We don't really have a way around it. And if everybody's bad at prediction, then it kind of evens out. Whoever's the best at something that we're all pretty bad at may win the game, so to speak, or might succeed at whatever they're trying to do as long as they're persistent. And this intuition isn't necessarily wrong. But I want to talk for a moment, and this is going to reference our last episode; if you haven't listened to it, it's about giving randomness a chance. The important thing about giving randomness a chance is giving up this tendency to predict. So I'm going to talk for a moment about the effect of this prediction, what it can lead you towards. And then I want to give you a prescription, a challenge, for the next time you have to make a prediction.

If you're making a prediction, if you're making a decision based on some prediction, then it's very hard, very difficult, to go against your original prediction. In other words, it's very hard to make a prediction, choose a direction, and then quickly adjust and accept that your direction was wrong. And this buy-in of choosing a direction gets even harder the further out you predict. Here is a very practical example: you're making very long roadmaps, especially ones with dependency chains that stretch out multiple months, and you've put a lot of work into this. You take a lot of time to try to read all of the data that you do have, taking into account all of the things that we should take into account, like user feedback, and you try to predict further out into the future. Let's say that it's a six-month roadmap, you're in month three, and it's very clear that your predictions were wrong. What do you do? Of course, it's easy to answer that we can throw away the predictions and start over.
But when you're actually in this particular scenario, when you're facing the idea of going against what you've already done, writing off these last three months, you have so many different things working against you. We'll list a couple of them. First, accepting that you failed in your prediction is a socially difficult thing to do. Even though failure in prediction is so common, it is still very difficult to accept, even for the most humble engineer or the most humble product manager. Another thing that's working against you is your desire to be continuous, in other words, to have continuity with your previously stated belief. In this case, you've stated the belief for three months that the direction was proper, that you had a good prediction for the future. And now, suddenly, you're making a change, and this change creates some kind of dissonance: mental dissonance within yourself, maybe dissonance with other people. And this is difficult to accept. The third one, and there are more, but we'll stop at this third one, is the sunk cost fallacy. We've already invested these three months into this thing, and so if you were to change directions now, if you were to discard the last three months of this six-month plan, it seems like you're wasting three months' worth of work.

So there are plenty of things working against you. And when you make a prediction like this, especially if you make many chained predictions, each less likely to happen than the last, you create a very difficult scenario to escape. Knowing that your predictions are likely to be wrong, at least partially wrong, wouldn't it make sense to slow down on the number of predictions that we're making? Of course, if you've done anything related to agile development, you know that this is one of the core tenets: we don't try to create plans that cascade for long periods of time, and we don't try to stretch out our roadmaps for an incredible length of time. And yet so many agile teams still do this; so many software development teams, despite understanding the pitfalls of all of this prediction, will still engage in it. We might change the labels that we're using. We might use a handful of mechanisms to trick ourselves into thinking that we're not trying to make predictions, but we are in fact still predicting.

So here's what I want you to do, and I'll talk about a practical example in a second. The next time you have the inclination to make a decision for the future, in other words, a decision based on what you believe will happen in the future, no matter how confident you are about it, and especially if that prediction implies some kind of work, meaning you're going to make a decision to do something about this prediction, I want you to ask yourself: is it necessary to predict this right now? This is very counterintuitive to what we believe we should do. We think that the further out we can start predicting, the more likely we are to succeed. In other words, a two-month roadmap doesn't seem nearly as effective, to our brains at least, as a two-year roadmap. We imagine that the two-year roadmap gives us much longer to prepare and to plan resources, et cetera. So I want you to ask: what is the difference between predicting this now versus predicting it in the future, or not predicting it at all?
A perfect example of this, in a very practical scenario that happens all the time, is when you're building, let's say, an API, a public API, maybe just a web endpoint, and you start trying to decide beforehand what kinds of fields your users are going to want for a given resource. In other words, you're predicting what your users will want. Let's say that you start out with some basic fields in which you have very high confidence. Maybe they want the title and they want the body. Maybe they want the slug, right, if it's a typical API for blog posts, for example. Now you sit down with your product team, you're talking about how people are likely to use this API, and you think, oh, maybe people will want tags. And so you add that field into the response. And then you think, well, if they want tags, maybe they will also want to nest the description of those tags. And so you nest the description of the tags as well. And what you end up doing is building software for no one. Now, that's not to say that your guesses are necessarily wrong. Instead, it's to say that you're building something that may never get used, or may need significant changes in order to be useful. For example, you might end up finding out that people want to lazy-load the tags rather than including them in the blog resource itself. Now, don't focus on the technical aspects of what we're saying here. There are a lot of ways to solve what I'm describing differently; that's not the point. Instead, the point is that when you're trying to predict what your software will be in the future, what the demand on your software will be in the future, you should avoid trying to predict this and instead build it once you need it. In fact, there's even an acronym that basically underscores this principle: YAGNI, "you aren't gonna need it." (A minimal sketch of this idea appears after the transcript.)

So again, this goes back to the base idea that humans are not very good at predicting. And this isn't just something to throw around whenever you're talking about estimation, to get out of estimating your next feature. Instead, it's something that we should take very seriously, both when we're building products and in our personal lives. Do we know what's going to happen in the future? The answer is almost always no. But do we know what's happening now? The closer to now that we can build our software, the closer to now that we can make decisions, the more likely we are to benefit from those predictions.

Thanks so much for listening to today's episode of Developer Tea. Thank you again to Compiler. You can find the most recent episode of Compiler on whatever podcasting app you're using. If you're using one to listen to this podcast, for example, check out the latest episode, "Can Superstitions Solve Technical Problems?" It's a funny episode, and it's also informative. You can find it, once again, wherever you listen to podcasts. Thanks so much for listening to this episode. If you enjoyed this one, you'll probably enjoy the over 1,000 other episodes that we've released of this show. Subscribe in whatever podcasting app you're currently using so you don't miss out on future episodes. And if you want to have more of these types of conversations, you can chat with me and other engineers who listen to this podcast; just head over to developertea.com/discord. And of course, the Discord community is and will remain totally free. Thanks so much for listening, and until next time, enjoy your tea.
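As a small footnote to the blog-post API example in the transcript above, here is a minimal sketch of the "build it once you need it" idea. The function and field names are hypothetical, chosen only to mirror the example; the episode doesn't reference any specific implementation.

```python
# Hypothetical sketch of the blog-post API example from the transcript above (YAGNI).
# Ship only the fields you have evidence users need today; resist predicting tags,
# nested tag descriptions, or lazy-loading schemes until a real consumer asks.

def blog_post_response(post: dict) -> dict:
    """Serialize a blog post with only the fields we're confident about right now."""
    return {
        "title": post["title"],
        "slug": post["slug"],
        "body": post["body"],
        # "tags": [...]                                 # predicted, not requested: leave it out
        # "tags": [{"name": ..., "description": ...}]   # even more speculative nesting
    }

if __name__ == "__main__":
    example = {
        "title": "Avoid Unnecessary Prediction",
        "slug": "avoid-unnecessary-prediction",
        "body": "...",
    }
    print(blog_post_response(example))
```

The commented-out lines mark the speculative fields: they stay out of the response until a real consumer actually asks for them.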