
How We Construct Software - Part Three (Decision Variance)

Published 2/13/2019

Giving someone a broad software problem is a little like asking them to plant a tree. In today's episode, we talk about how different mental effects can cause variance in decision-making for complex and compound decisions.

Thanks to today's sponsor: Sentry

Sentry tells you about errors in your code before your customers have a chance to encounter them.

Not only do we tell you about them, we also give you all the details you’ll need to be able to fix them. You’ll see exactly how many users have been impacted by a bug, the stack trace, the commit that the error was released as part of, the engineer who wrote the line of code that is currently busted, and a lot more.
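As a rough illustration, wiring Sentry into a Python app looks something like the sketch below. This is a minimal example using the sentry-sdk package; the DSN and release values are placeholders, not real project keys:

```python
# Minimal sketch of Sentry error reporting in Python, assuming the
# official sentry-sdk package. The DSN and release are placeholders.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    release="myapp@1.4.2",  # lets Sentry tie errors to the release that shipped them
)

def busted_line():
    return 1 / 0  # raises ZeroDivisionError

try:
    busted_line()
except ZeroDivisionError as exc:
    # Unhandled exceptions are reported automatically once init() runs;
    # handled exceptions can be sent explicitly like this.
    sentry_sdk.capture_exception(exc)
```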

Give it a try for yourself at Sentry.io

Get in touch

If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know that you found this episode valuable, I encourage you to join the conversation or start your own on our community platform, Spectrum.chat/specfm/developer-tea

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

🍵 Subscribe to the Tea Break Challenge

This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/

Transcript (Generated by OpenAI Whisper)
In the last few episodes, we've been discussing the construction of software. How it happens, not in its final form in code, but rather well before that: how we develop beliefs and models for the world, how we derive so much of our action by asking implicit questions or answering explicit questions, and even how we trick ourselves by answering questions that aren't being asked. In today's episode, we're going to continue this discussion on how we develop software, and we're going to dive a little bit further down towards the actual writing of the software, and specifically the design of the software itself. On a day-to-day basis, how are we choosing how we are going to accomplish whatever we need to accomplish in the software that we're writing? That's what we're talking about in today's episode.

My name is Jonathan Cutrell and you're listening to Developer Tea. This has been a series on the construction of software. My goal in this show is to help driven developers like you connect to your career purpose and help you do better work so you can have a positive influence on the people around you.

We like to believe that if you were to hand the same software problem to three developers at a given company, there wouldn't be a lot of variance between how those three developers solve that problem. This may be more true if the problem is extremely narrowly scoped, and if the problem isn't actually framed as a broad problem but instead as a list of specific, leading problems, the kind that result in a specific set of features. But this is very unlikely to take place. The truth is, given sufficiently complex problems, individuals are going to solve them in completely different ways.

Part of the reason for this, and perhaps part of the reason for everything we're talking about in this series, is the models that we discussed in the last episode. If you didn't listen to that episode, I encourage you to go back and listen to it, but also do a little bit of research on mental models and beliefs and how we form our beliefs. That's the backdrop for all of these discussions. It should be noted that if you zoom in a little bit, even if we had similar models and beliefs in the way that we view the software and the way that we view the world, we may still come out with different answers to the same problem. This is often because the problem is articulated in a broad way, and the specifics of that problem can be expressed in very different ways.

You can think about it kind of like this. A problem that you receive for a given software project is kind of like being told to go and plant a tree. How do you plant a tree, exactly? And what tree do you choose? How long should you spend planting the tree? Can you plant multiple trees so that if one doesn't grow properly, you have a fallback tree? What kind of soil should you be using? And are you actually just cultivating what nature is already doing, because nature also plants trees? At what stage are you planting the tree? Is it at the stage of the seed or the seedling, or perhaps even bigger? These are just some of the questions for a fairly simple problem like planting a tree, and the answers to these questions could lead you down very different paths. Now imagine taking a much more complex problem set and walking down the line of reasoning for all of the features that would express the solution to that problem. Of course, going from one expression of a solution to another is going to result in variance.
This happens even when the same person is solving the same kind of problem two times in a row. We're going to take a quick sponsor break, and then we're going to come back and talk about some of these specific ways that create variance in the way that we make decisions, even if it's the same person making the same kind of decision from one day to the next.

Today's episode is sponsored by Sentry. Your code is broken, and Sentry is going to help you fix it. Relying on customers to report errors in your code is kind of like treating customers like an off-site QA team, and you're not even paying them for that. In fact, they're most likely to leave your product altogether without ever reporting any of those problems in the first place. Ideally, we could solve this ahead of time. We could solve it with great testing, with a really good QA process. But there's no way we're going to cover every scenario. Our tests are not going to be complete, and we're not even going to be able to, for example, simulate the right kinds of load on our application. We can't simulate the real thing entirely, and we can't predict all the things that people are going to do with our application either. And until we can predict the future, responding to real events is one of the best strategies for dealing with bugs. You shouldn't just have one weapon in your arsenal in the fight against these bugs. You have to approach it from many different angles, and Sentry provides you with an excellent angle to approach it from. Sentry helps you catch bugs before your users see them. You'll get immediate alerts in whatever alert channels you're already using, like, for example, Slack or push notifications. And you can also get more information about that error, for example, the full stack trace and the commit that is responsible for that error, so you can fix it quickly. Go get started at sentry.io. Thank you again to Sentry for sponsoring today's episode of Developer Tea.

So we're going to talk about some of the ways that our decisions have variance. Even a given person's decisions are going to change from day to day. Now, one of the most obvious reasons for this is that we're constantly experiencing new things, and so we're constantly learning. We adjust and we grow and we gain new skills. And so whatever we would have done a year ago, hopefully we're going to do something better now. This kind of upward slope in terms of your skill level is something that the industry is fairly respectful of. We have jobs that follow that upward slope, and as you gain experience, you're expected to also gain extra skills. So this is a relatively predictable direction in how you may make decisions: hopefully you make better decisions as you gain experience.

But there are other kinds of variance that are less predictable and sometimes less desirable. For example, say you have recently had a conversation where someone you respected claimed that a particular industry practice was bad. Regardless of their reasoning, if you respect this person, if you hold them in relatively high regard, then this opinion that they hold is going to have an effect on you, especially in close proximity to that discussion. And so when you return to whatever work you're doing after that kind of discussion, you are likely to break away from what you may have even had an affinity for previously. You're likely to push against that specific practice.
So in that scenario, you have a lot of volatility. You can have swinging opinions that change based on conversations that you have.

Another effect, one that is longer-term or closer to permanent, that we can observe both as developers and just as humans is the confirmation bias. There are a lot of other effects and biases, and this is a very well-studied phenomenon in psychology. But the basic idea is that if you already have a belief, and especially if you have reinforced that belief, and if you have committed to that belief in a somewhat public setting, for example, amongst your coworkers, it is very difficult for you to change that belief and then act on that change and go back on those public commitments. So if you, for example, believed very strongly in one particular direction or paradigm for solving a given problem, and then you get new information, maybe the problem shifts a little bit, maybe you didn't have all the information up front, or maybe your perspective shifts a little bit. Then you have a new way of thinking about the problem, and your old belief is less in line with what the evidence is showing you. You are likely to do two things. One, reject that evidence and hold on to that old belief, even though you can cognitively recognize that it's probably not the best belief to hang on to. And two, seek out people or evidence that support your previously held belief. This can obviously have major impacts on software development timelines. It can have major impacts on how well you and your team work together. And of course, it's going to have major impacts on your ability to actually solve the problems that are in front of you.

The third phenomenon that I want to discuss, and there are plenty more, so it's very important that you continuously try to learn more about how we make decisions as developers, is called the possibility effect. The basic idea of the possibility effect is that if something is possible, even if it is extremely improbable, we still see the difference between impossible and possible, no matter how unlikely, as a major difference.

The possibility effect is relevant for a number of reasons. Specifically, I want to point out one example, and that's optimization. As developers, we are very drawn to the concept of optimizing our code, and this isn't necessarily a bad thing. We are told to learn about how algorithms perform, for example; we are told to understand how to create an adequately optimized program. And this happens in a bunch of different stages. But very often, developers apply this concept of optimization almost as if it's a moral rule. And so what we end up with is a number of developers working on optimizing code that either doesn't need to be optimized, or they're optimizing the wrong parts of the code and could stand to gain much better optimization in other parts of the code.

There are a couple of reasons this is related to the possibility effect. One, if a developer sees that there is a route to optimization, very often they are tempted to take that route, even if there are better ways they could be spending their time, or if it compromises the readability of that same piece of code. The other reason is that developers are often looking at numbers to determine the success of their optimizations.
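To make that concrete, here's a minimal Python sketch of the kind of before-and-after measurement being described. The slow_parse and fast_parse functions are hypothetical stand-ins for any optimization, not anything from the episode:

```python
# Minimal sketch: measure an "optimization" and put the numbers in the
# context of how often the code actually runs.
import timeit

def slow_parse(lines):
    out = []
    for line in lines:
        out.append(line.strip().split(","))
    return out

def fast_parse(lines):
    # the "optimized" version: a list comprehension doing the same work
    return [line.strip().split(",") for line in lines]

lines = ["a,b,c\n"] * 10_000

# average seconds per call over 100 runs
before = timeit.timeit(lambda: slow_parse(lines), number=100) / 100
after = timeit.timeit(lambda: fast_parse(lines), number=100) / 100

calls_per_day = 1  # how often does this code actually run?
print(f"before: {before * 1000:.1f} ms, after: {after * 1000:.1f} ms")
print(f"saved per day: {(before - after) * calls_per_day * 1000:.1f} ms")
```

The calls_per_day line is the point: a speedup only matters in proportion to how often the code runs, which is exactly the context the raw numbers leave out.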
So if we go from, let's say, 20 milliseconds to 11 milliseconds, then this seems like a major jump in performance. The problem is that we are zoomed in on these numbers, and we're only looking at the numbers within their own context. We need to understand what the optimum number for this piece of code actually is. Rather than just saying we know we can make it faster, and faster is unilaterally good, we should instead ask: is this code that's going to run once, for example, where optimizing it any further is a waste of energy and resources?

There are a variety of biases dealing with calculations and numbers that are worth looking at. And it's not just biases; it's also these kinds of psychological effects and phenomena that cause us to see numbers in distorted ways. I encourage you to go and Google this, read a little bit about it, because you're going to run into many situations where you're dealing with numbers as a developer, and getting a handle on how to see those numbers more clearly is going to help you in the long run.

Thank you so much for listening to today's episode of Developer Tea. I hope you're enjoying the series on how we construct software: the mental processes and the models, and the questions and decisions that we face as developers. I encourage you to continue doing some more digging on the topics that we bring up in these episodes. These are some of my favorite topics that we talk about on the show, and I know that they are going to be valuable to you in your career as well.

Thank you again to today's sponsor, Sentry. To get started finding bugs before your users see them, head over to Sentry.io to sign up today.

If you haven't signed up for the Tea Break Challenge, I encourage you to head over to teabreakchallenge.com and sign up today. The Tea Break Challenge is a daily soft-skills exercise delivered to your email. Go and check it out at teabreakchallenge.com.

If you haven't seen the other shows on spec.fm, the Spec network was created for designers and developers like you who are looking to level up in their careers. Go and check it out at spec.fm. Thank you so much for listening, and until next time, enjoy your tea.