
Ambiguous Target Decisions and Noise

Published 4/6/2022

In today's episode, we discuss two types of decision evaluations and give a brief explanation of noise in decision-making.

You're making judgment calls every day - are they good? How do you know?

๐Ÿ™ Today's Episode is Brought To you by: LaunchDarkly

This episode is brought to you by LaunchDarkly. LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Innovate faster, deploy fearlessly, and make each release a masterpiece. Get started at LaunchDarkly.com today!
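If the feature-flag workflow described above is unfamiliar, here is a minimal, generic sketch of the idea in Python. This is not LaunchDarkly's SDK; the flag store, flag name, and checkout functions are invented purely for illustration.

```python
# A minimal, generic sketch of flag-gated code paths (not the LaunchDarkly SDK).
# In a real system the flag values would come from a flag service and update in
# real time; here they are just a hard-coded dictionary.
FLAGS = {
    "new-checkout-flow": False,  # code is deployed, but the feature isn't released yet
}

def is_enabled(flag_name: str, default: bool = False) -> bool:
    """Look up a flag value, falling back to a safe default if it's unknown."""
    return FLAGS.get(flag_name, default)

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout for {len(cart)} items"

def new_checkout(cart: list) -> str:
    return f"new checkout for {len(cart)} items"

def checkout(cart: list) -> str:
    # The new code path ships to production but only runs when the flag is on.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["tea", "mug"]))  # -> "legacy checkout for 2 items"
```

Flipping the flag to True releases the new path without another deploy, which is the workflow the sponsor copy is describing.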

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)
What does it mean to make a good decision? How do we know? How do we know when we've made a good decision? There are two basic categories of ways of understanding, in retrospect, how good of a decision you've made. The first is obvious: the judgment call that you make has a clear outcome. You make some kind of judgment and then you measure your judgment against the outcome. Now, to be clear, you need a sufficient number of similar predictions and outcomes to determine whether your judgment call is actually a good one. In other words, if you say that there's a 30% chance of rain (so you don't think it will rain, but it might) and then it rains, was that a good judgment call or a bad judgment call? This is a difficult thing to answer with just this one sample. In other words, you need a lot of opportunities to make these kinds of predictions and then weigh the actual occurrences against whatever your predictions are. How often are you correct? Well, what about situations where judgment is less clear? This is a different category, and I'm going to talk about that right after we talk about today's sponsor.

Today's episode is supported by LaunchDarkly. LaunchDarkly is feature management for the modern enterprise, fundamentally changing how you deliver software, and here is how it works. LaunchDarkly enables development and operations teams to deploy their code at any time, even if the feature isn't ready to be released to users, even if you are still testing things. You want to test it in production, you want to test it outside of production, it doesn't really matter. You can wrap your code with feature flags, giving you the safety to test your features wherever you want to and test your infrastructure in your production environments without impacting the wrong end users. When you're ready to release more widely, update the flag status and the changes are made instantaneously by their real-time streaming architecture. This is how you actually launch your features. Innovate faster, deploy fearlessly, and make each release a masterpiece with LaunchDarkly. Head over to launchdarkly.com to get started today.

There are two kinds of evaluation methods for understanding whether you've made a good decision. The first one is obvious: you can measure the accuracy of your judgments, the accuracy of your decisions, the outputs of your decisions. Some things are harder to measure than others. Certainly the quality of these determinations becomes easier to judge when you have more discrete categories rather than percentage categories. In our previous example: is it going to rain? There's a 30% chance of rain. Or, to push the example further, if you always answered that there's a 50% chance of rain (it may or it may not), then you're always "correct." So the evaluation is much higher quality when you have more discrete predictions, like a 0% or a 100% chance of rain. For software engineers, or for managers who are hiring software engineers, these kinds of predictions might be more along the lines of: is this person going to be able to successfully work well on our team (if you're in a hiring position), or is this feature going to convert more users (possibly a product manager's question)? These kinds of judgments are a little bit more in line with this categorical outlook, where we can say that this has relatively succeeded or this has not succeeded. But the other kind of judgments, the other kind of decisions where we can look back and try to understand if we made a good decision or not, do not have a measurable target.
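To illustrate the point above about needing many predictions before you can evaluate probabilistic judgments: one simple approach is to group forecasts by their stated probability and compare each group's predicted chance with the observed frequency. This sketch is not from the episode; the forecasts and outcomes are invented for illustration.

```python
from collections import defaultdict

def calibration_report(predictions, outcomes):
    """Compare predicted probabilities with observed frequencies.

    predictions: probabilities in [0, 1] (e.g. "30% chance of rain" -> 0.3)
    outcomes:    1 if the event happened, 0 if it did not
    """
    buckets = defaultdict(list)
    for p, happened in zip(predictions, outcomes):
        # Group forecasts into 10% buckets so each bucket has enough samples.
        buckets[round(p, 1)].append(happened)

    report = {}
    for bucket, events in sorted(buckets.items()):
        report[bucket] = {
            "forecasts": len(events),
            "observed_rate": round(sum(events) / len(events), 2),
        }
    return report

# A well-calibrated forecaster's 30% bucket should see rain roughly 30% of the time.
predictions = [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
               0.9, 0.9, 0.9, 0.9, 0.9]
outcomes    = [0, 0, 1, 0, 0, 0, 1, 0, 1, 0,
               1, 1, 1, 0, 1]
print(calibration_report(predictions, outcomes))
# {0.3: {'forecasts': 10, 'observed_rate': 0.3}, 0.9: {'forecasts': 5, 'observed_rate': 0.8}}
```

A single rainy day tells you nothing about the quality of a 30% forecast; only the aggregate comparison does.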
In other words, if somebody asks you which team is going to win the game, the target is knowable. Once the game is over, a team has won, and your guess of which team would win is either correct or incorrect. You can measure against a known, specific target. But many of our decisions don't have a known or specific target. For example, if you are once again in the position of hiring someone, you may be wondering: how much should I offer this person in order to provide them a fair salary? Now, there's a lot of information that you can use, and in fact this is how we do judge the quality of this decision. Maybe you use market information, maybe you use their past experiences to inform where they land on some kind of spectrum of experience. Maybe you compare against other engineers within the company who are already working at that same capacity. You use information to make this decision.

But who is to say what ultimately is fair or unfair? It's very possible, and this happens all the time, that you offer, let's say, a salary that's well under what the market would suggest is a fair salary, and the engineer accepts the offer. And simultaneously, it also happens that sometimes you're going to offer well over what the market has outlined for you to offer, and the engineer rejects the offer. In both of these scenarios, how can you determine if your offer was fair? It's tempting to believe that the acceptance of the offer reflects whether or not it's fair. But the truth is that's a substitute. We're substituting the acceptance for fairness. And this relies on the assumption that the engineer that you're making the offer to is acting rationally and has some information that you don't have, with which they can judge the fairness of an offer given to them. But what exactly is fair? This isn't a clear target. In other words, there's no person, no specific standard, by which you can, logically speaking, absolutely determine whether an offer was fair. Now, I want to be really clear about what I'm saying here. That doesn't mean that there is no such thing as a fair offer. Instead, what it means is that we can't use pure logic to decide whether an offer was fair or not. Instead, we can look at the quality of the decision, the quality of the offer, based on the information that you used. We can come up with what we believe is a reasonable definition of fairness and reduce the variation in the offer that we would give. In other words, if you have the right kinds of information that you're using in order to make this offer, then theoretically another person who isn't you, who uses the same information, should arrive at an offer very close to what you have given.

This reduction in variability is a reduction in something that Daniel Kahneman, in his new book, calls noise. We've talked a lot about Daniel Kahneman before; in particular, we've discussed his most well-known book, Thinking, Fast and Slow. This new book, or at least relatively new book, discusses noise. This is the variation in a given judgment. This kind of variation is different from bias in that the variation doesn't really go in a specific direction. In other words, if you have a bias, you might see consistent overpayment or consistent underpayment within your company or toward a particular group of people, with an understanding that, relative to some ostensibly fair number, your company tends to overpay, or your company tends to overpay a particular group of people. Maybe you have a bias towards a group of people. Noise, on the other hand, would present as variation around a point.
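To make the bias-versus-noise distinction concrete, here is a small sketch (again, not from the episode; the salary numbers, the "fair market" reference values, and the manager names are invented). Bias shows up as a consistent offset from the reference in one direction; noise shows up as spread between judges looking at the same candidate.

```python
from statistics import mean, pstdev

# Hypothetical offers (in thousands) made by three hiring managers for the same
# five candidates; fair_market is an assumed reference point for illustration.
offers_by_manager = {
    "manager_a": [118, 126, 131, 140, 122],
    "manager_b": [102, 108, 115, 121, 105],
    "manager_c": [110, 118, 124, 130, 114],
}
fair_market = [110, 118, 124, 130, 114]

# Bias: a consistent offset in one direction relative to the reference number.
for manager, offers in offers_by_manager.items():
    offset = mean(o - f for o, f in zip(offers, fair_market))
    print(f"{manager}: average offset from market = {offset:+.1f}k")

# Noise: for the same candidate, different judges produce different numbers,
# even if the offers look "fair on average" across the whole system.
for i in range(len(fair_market)):
    offers_for_candidate = [offers[i] for offers in offers_by_manager.values()]
    spread = pstdev(offers_for_candidate)
    print(f"candidate {i + 1}: spread across managers = {spread:.1f}k")
```

In this made-up data, manager_a is biased high and manager_b is biased low, while the per-candidate spread is the noise: the same candidate gets meaningfully different offers depending on who happens to make the call.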
So, in other words, on average you might say that your offers are fair, but your individual offers vary: let's say you as a hiring manager tend to offer well above the average, and another hiring manager offers well below the average. So for the same person, you might have two different offers depending on who's producing the offer. This kind of variation is called noise.

We talk about these two kinds of judgment because, most often, we can observe error in the first kind of judgment, and it's much harder to observe error in the second kind of judgment. This is partially because we believe that for the second kind of judgment, the kind that doesn't really have a specific target, we as humans are responsible for some kind of intuitive guessing, that kind of gut feeling, using our personal judgment in order to make decisions. Surprisingly, people tend to act as if there is an ideal target in these otherwise ambiguous judgment calls. We believe that, for example, there is a specific number that would be the optimal, fair offer to provide to a given candidate.

We're likely to talk about noise in upcoming episodes. This is certainly something that's on my mind, but for now, as you encounter these different kinds of judgment calls, I want you to ask yourself: do I have a way to measure the outcome of this judgment? How am I measuring the outcome of this judgment? Am I accidentally and erroneously substituting a different outcome and calling it whatever this thing is? In the case of hiring, am I substituting the individual accepting the offer for whether or not the offer was actually a good one? And then secondly, when you do realize that you have situations where you don't have a specific, measurable target to determine how good of a judgment call you're making, I want you to ask yourself: what information am I using? What kind of decision process am I employing in order to make this decision? In future episodes, we might talk about employing a better decision process. That's an entire science on its own. We've talked about it quite a bit on this show before, and there's plenty more to talk about.

For now, thanks so much for listening. And a huge thank you again to today's sponsor, LaunchDarkly. If you want to get started with enterprise-grade feature flags that will let you release your code in peace and test your code in production, go and check it out. Head over to launchdarkly.com to get started with that today. If you enjoyed this discussion, I'd first encourage you to pick up Daniel Kahneman's book, Noise. It will help you make better decisions. Secondly, join the Developer Tea Discord. Head over to developertea.com/discord. I'm going to give you a little offer here: if you will commit to reading the book, if you'll commit to reading either Noise or Thinking, Fast and Slow (this is a long-standing offer that I've had), and if you come and join the Discord and prove that you're actually reading the book, then I will buy that book for you. All you've got to do is join the Discord, come and let me know that you're committing to reading these books, and I will buy the book for you. Again, the Discord is totally free. There are no strings attached to this offer, other than you have to actually read the book. So join the Discord if you are interested: developertea.com/discord. Thanks so much for listening, and until next time, enjoy your tea.