
A Statistical Case for Iteration Over Perfect Preparation

Published 10/18/2021

You are much more likely to succeed with iteration than you are with a perfect first attempt.

This is purely a function of probability, and is even more supported by the idea of progressive improvement.

When you can iterate, it is a better route to higher confidence of success than perfect preparation.

๐Ÿ™ Today's Episode is Brought To you by: Auth0

Auth0 is here to solve your login problems, for good. Auth0 provides simple, secure, and adaptable login for applications and businesses, freeing you up to focus on the problems you are best suited to solve in your product.

You can implement Auth0 in your application in as little as 5 minutes. Head over to auth0.com to get started today!

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content head over to iTunes and leave a review! It helps other developers discover the show and keep us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)

In today's episode, I want to give you a case for trying something quickly and trying again, rather than painstakingly taking all the time that you can muster to make a perfect attempt. My name is Jonathan Cutrell. You're listening to Developer Tea. My goal on this show is to help driven developers like you find clarity, perspective, and purpose in their careers. Sometimes math, sometimes statistics, are simply not intuitive. This is definitely true when things are nonlinear. We've talked a lot on this show about the fallacy, the natural belief, that everything follows a linear path, and how that's absolutely not the case. It's easy to see why we think this way. Most of the things that we interact with on a physical level are indeed linear. We count in a linear fashion. A lot of the things that we rely on in our lives are linear. But a lot of the systems and processes in our lives are nonlinear, and that's what we're talking about in today's episode: a nonlinear system. Maybe it's not a cognitive bias or a fallacy in the formal sense, but it is an error in thinking. Because of this error, we apply linear intuitions to nonlinear systems, and we end up with a wrong view of what it means to make an attempt. Let me paint the picture of how we mostly think about what it means to try hard. Let's say that we have a task in front of us, and an important part of the work, we imagine, is doing some pretty in-depth research, figuring out all of the different parts of this task. How is this going to affect our users? How will it affect our product? How will other teams integrate with this feature that we're getting ready to build? There's a lot that we need to learn.
And our intuitive approach is to learn everything we can to increase the likelihood of success on our first shot. Let me say that again. We imagine that our job is to go from, let's say, a 50-50 chance of success, if you were to do no refinement, no research, nothing beyond your intuitive shot, and for the time that you spend researching, you can slowly increase your chances of succeeding on that first shot. But of course, as you round off over 80 or 85%, your increments begin to slow down. This is very much related to a concept that we talked about in a recent episode, where we talked about having backups as being, statistically, a more robust option than having a single, very highly reliable server, for example. Having multiple relatively reliable servers actually yields, statistically, a better outcome. And if you listened to that episode, you might see where this is headed. The amount of time that you spend learning, doing research, and trying to predict what the proper feature set or the right implementation plan should be is incrementally adding to your chances of success. But very rarely do we look at these tasks and consider what the downside of failure would actually be. What is the cost of failure? Now, in some cases, we really do only have one shot at doing something. But this is very rarely actually true, especially in software engineering. It might be true if you have one down to play at the end of a Super Bowl game; the stakes are very high for that specific play. But in almost every other scenario that I can imagine, especially in our day-to-day jobs, it's unlikely that we truly face a high-stakes, single-iteration shot at doing something the right way.
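The redundancy comparison referenced here can be sketched in a few lines. This is an illustrative sketch, not code from the episode, and the specific uptime figures (99.9% for the single server, 90% for each backup) are assumed for the example:

```python
# Probability that at least one of n independently failing servers is up,
# given each server's individual uptime probability.
def availability(per_server_uptime: float, n: int) -> float:
    return 1 - (1 - per_server_uptime) ** n

# One highly reliable server vs. three merely decent ones
# (both uptime figures here are hypothetical):
single = availability(0.999, 1)
triple = availability(0.90, 3)
print(f"single 99.9% server: {single:.4f}")
print(f"three 90% servers:   {triple:.4f}")
```

Three independently failing 90% servers match the availability of a single 99.9% server, which is the statistical point about backups.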
We're going to take a quick sponsor break, and then we're going to come back and talk about how this really plays out. What does it mean to iterate? And why is it probably a better route, a more valuable route, than trying to do all of your research upfront? Today's episode is sponsored by Auth0. Here's the thing: you don't want to spend all of your time on the login. In fact, you'd rather spend almost all of the time that you were spending on the login on your core product, something that differentiates you from other people. Maybe you don't have the expertise you need to make your login experience seamless, or maybe you're concerned about security and the constantly evolving nature of login. This build-versus-buy decision on login is really a no-brainer, especially as you start to scale your application. It's even better if you can solve this before you adopt a bunch of technical debt or hidden security issues with a homegrown login solution. This happens all the time, even when you think you've got your bases covered. Auth0 is here to solve your login problems for good. Auth0 provides simple, secure, and adaptable login for applications and businesses, freeing you up to focus on the problems you're best suited to solve in your business. Auth0 supports virtually every style of login you could want.
Social login, multi-factor authentication, single sign-on, passwordless, and much more. You can implement Auth0 in your application in as little as five minutes by heading over to Auth0.com. That's A-U-T-H-0, the number zero, dot com. Thanks again to Auth0 for their support of Developer Tea. Let's imagine that you have figured out a process for building a feature, and you have a pretty good idea that you have about an 80% likelihood of succeeding. Now, I don't know how you would determine that 80%, but let's just imagine that you feel like you have most of the information that you need. So, after evaluating the low downside to failure, you think you can take a first shot and have an 80% chance of success on that first shot. Now, if you're actually right, if four out of five times the way that you've chosen to implement this feature is going to be successful, I want you to intuitively guess, maybe pause the podcast after I say this, how many attempts, how many iterations using this 80%-successful approach would it take to achieve a 99% likelihood of success? The answer is easily found with a little bit of math. The likelihood of failure is just the inverse of the likelihood of success, so you have 0.2, or 20%. Multiply 20% by 20%, and then multiply that once again. In other words, 0.2 times 0.2 times 0.2, which is 0.2 to the third power, is a 0.8% chance of failure. What this represents is the idea that, given three attempts, each with an 80% probability of going well, there is only a 0.8% chance that all of them go wrong. In other words, there is a 99.2% chance that at least one of them goes right.
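The arithmetic in this passage can be checked in a couple of lines. This is just a sketch of the episode's own numbers:

```python
p_success = 0.8   # chance a single attempt succeeds
attempts = 3

# All attempts must fail independently for the overall effort to fail.
p_all_fail = (1 - p_success) ** attempts   # 0.2 ** 3 = 0.008, i.e. 0.8%
p_at_least_one = 1 - p_all_fail            # 0.992, i.e. 99.2%

print(f"chance all three fail:     {p_all_fail:.3f}")
print(f"chance at least one works: {p_at_least_one:.3f}")
```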
Now, this seems crazy, because it doesn't seem like there would be enough opportunity for one of these to succeed, it's 80% after all, but this is the way the math works. Compare the amount of investment that you think you would have to make to ensure success on a single first shot to that 99.2% level of confidence from three quick attempts. What's striking here is that the effect of stacking these attempts has a profound influence on your likelihood of success. Even things that have only a 60% likelihood of succeeding quickly yield a likely success. It's already technically likely at 60%, it's more likely than not, but you can have high confidence after a very low number of iterations. Now, again, I want you to ask yourself: how many iterations would it take for a 60% likelihood, which seems very low, to yield a positive outcome? The answer is about five attempts at 60%; that yields just under 99%. A sixth attempt would take you to a 99.6% likelihood of at least one of those attempts succeeding. Now, why are we doing so much statistics on this episode? What we're painting here is a very basic statistical argument for iteration.
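The five-versus-six attempt figures for the 60% case can be reproduced with a small helper. A sketch, not code from the episode:

```python
def chance_of_success(p: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    return 1 - (1 - p) ** attempts

# At 60% per attempt: five attempts land just under 99%,
# and a sixth pushes past 99.5%.
for n in (1, 5, 6):
    print(f"{n} attempt(s): {chance_of_success(0.6, n):.4f}")
```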
But what we're leaving out is that, most of the time, the primary argument for iteration is actually that you're going to improve your process. In other words, you may start at 60%, but because you have a feedback loop, it's very likely that you're going to learn by way of that iteration. Maybe you have a 60% chance and you actually succeed on that first shot. But if you do fail, the learning gained is going to be significant and very important, once again, at a purely statistical level. So let's say we start at 60%, and our first failure teaches us enough to improve by 10 percentage points; then on each following iteration, we learn half as much as the last one. In other words, we go from 60% to 70%, then 70% to 75%, then 75% to 77.5%, et cetera. How many of these learning iterations would it take to get past that 99% certainty of a likely success? It turns out this only takes four iterations: 60%, 70%, 75%, and 77.5%. So this is striking, and it's important to grasp the weight of this theory. That is, the more attempts that we have, the more the statistics will play to our advantage. Now, of course, this relies on you having a greater than 50% chance of success. But even when you don't have a greater than 50% chance, the iterative process, as long as you're paying attention and doing it in a proper way where you have a feedback loop, where you're actually trying to improve and learn from whatever caused the last failure, means that even if you start below 50%, you can still improve and quickly gain a high level of confidence that you're going to succeed.
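The learning-loop version, where each attempt improves on the last with a halving gain, can be sketched the same way. The numbers follow the episode; the helper names are my own:

```python
def learning_probs(start: float, first_gain: float, attempts: int) -> list:
    """Per-attempt success probabilities; each gain is half the previous one."""
    probs, p, gain = [], start, first_gain
    for _ in range(attempts):
        probs.append(p)
        p += gain
        gain /= 2
    return probs

def overall_success(probs: list) -> float:
    """Chance that at least one attempt in the sequence succeeds."""
    fail = 1.0
    for p in probs:
        fail *= 1 - p
    return 1 - fail

probs = learning_probs(0.60, 0.10, 4)   # [0.60, 0.70, 0.75, 0.775]
print(f"overall: {overall_success(probs):.5f}")  # past the 99% mark in four tries
```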
So this is a very reliable recipe for repeated success, and a much better approach than trying to get all of your research, all of your ducks in a row, every bit of information you can, before even taking your first shot. Thanks so much for listening to today's episode of Developer Tea. Thank you to today's sponsor, Auth0, for their support of this episode. Head over to auth0.com to get started today. Thanks so much for listening to this show. This podcast exists for the audience. It exists for you. That sounds pretty cliche, but it's true. We wouldn't be around if you weren't listening, and it's so important for us to connect with you. There are so many ways you can do that, but one of the most important ways is the Developer Tea Discord. You can talk about the ideas that we've been discussing on this show, ask questions, and let me know where I've made errors. I especially get concerned that I'm making errors in these statistics episodes. Go and join the Developer Tea Discord. Head over to developertea.com/discord to get started with that today. It's always been free, and it will continue to remain free forever. It's a community where we're trying to provide a space to talk about these subjects, and it's working out really well. Thanks so much for listening. And until next time, enjoy your tea. See you soon.