We write tests for our code because testing gives us another angle to understand the code, and it ends up being more efficient for most types of projects. It also has an effect on the way we actually write the software, but having tests and rules can also backfire. That's what we're talking about in today's episode of Developer Tea.
Sentry tells you about errors in your code before your customers have a chance to encounter them.
Not only do we tell you about them, we also give you all the details you’ll need to be able to fix them. You’ll see exactly how many users have been impacted by a bug, the stack trace, the commit that the error was released as part of, the engineer who wrote the line of code that is currently busted, and a lot more.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.
Transcript (Generated by OpenAI Whisper)
We write tests for our code because it gives us another angle to understand the code. It gives us a different type of confidence than if we were to manually test. And ultimately, it ends up being more efficient for most types of projects. It also has an effect on the way that we actually write the software. Typically, good testing kind of naturally encourages better software design. Your code being testable kind of lends itself to you designing better software. But there can be a backfire of having tests. There can be a backfire of having linting rules, or rules of any kind. We're going to talk about this in today's episode. My name is Jonathan Cutrell. You're listening to Developer Tea, and my goal on the show is to help driven developers connect better to their career purpose and do better work so you can have a positive influence on the people around you. We've all probably experienced this phenomenon in our work and in our personal lives. The concept is very simple. When we write tests, or when we wear a seatbelt, when we do anything as a precaution to avoid some kind of danger, we have a new way of looking at the situation. Our perspective shifts a little bit. We don't naturally add these things to a long list of protections against some danger. Our more natural response is actually kind of the opposite. We tend to be riskier in other areas. This phenomenon is called risk compensation. It's also known as the Peltzman effect. The basic idea is that when you engage in some behavior to avoid risk, to protect yourself, you're likely to engage in some other behavior that is risky, that you otherwise may not have engaged in. The idea is that you're kind of balancing out your risk. For example, a person may be more likely to speed when they're wearing their seatbelt than when they're not wearing their seatbelt. This is not entirely dissimilar to the concept of moral licensing. The concept is simple. You do one good thing to earn the opportunity to do one bad thing.
Now, good and bad are very loose terms, but you can imagine that you eat a salad for lunch, and so you feel that eating pizza and ice cream for dinner is okay, because you've bought yourself that license. This is similar because you're creating this compensation effect where you behave in a particularly good way, and then you perceive that you're ahead of the ball, so you can slack off a little bit and balance things out. The intuition here is not totally off. If you want to have an average level of risk mitigation, then doing something that mitigates your risk to a significant degree kind of gives you the opportunity to do something that is risky, and your risk ends up being average. The intuition is not totally wrong in the sense that if you were to take all of your actions and combine them, then your risk mitigation may be quite similar to if you had never taken the risk-mitigating action in the first place, and therefore you never engaged in the risky behavior either. So how does this play out with code? How does it play out in our jobs as developers? Of course, it's going to affect you, like everyone else, in your personal life, and we've all done something like this, where you make a good decision that's positive for maybe your physical or your mental health, and then you follow that up with a not-so-good decision. And it's important to understand that this is a fairly normal way of behaving. We're not going to behave the same all the time. We're going to have different types of decisions that we make, and it's very hard to escape all of these kinds of biases, all of these effects. So don't be too hard on yourself. Recognize that you're likely to make that decision. It doesn't mean that it's wise or that it's a good idea, but instead it means that when you do end up behaving in this particular way, don't judge yourself too harshly. This is a fairly natural and normal response. So that's kind of the first disclaimer.
At work, this can get us into really strange situations where we've created a lot of boundaries or guidelines. For example, you may have a style guide or a linting practice, or you may hold to a specific standard for your testing, your test coverage. And you end up creating these setups, and when you're actually following all of the rules, you may be a little less careful with the code that you write. You may take a little less precaution when, for example, deploying that code to a live environment. And so it's important to understand that when you create these systems that mitigate risk for you, you may end up having a false sense of confidence that you can take your risky behavior and escape the consequences. So how can we avoid this effect? Or can we avoid the effect, and if we can't, then what can we do to work against it? That's what we're going to talk about right after we talk about today's sponsor, Sentry. This is a particularly appropriate sponsor for today, because we've been talking about risk mitigation and testing, and it's very important to realize that you have to have a multifaceted strategy for dealing with errors in your code. Now let's be very clear. Error-free code is nearly impossible, because people are writing this code, and especially if it's changing over time, every time you change that code, you're opening yourself up to introducing new errors. Now, in a perfect world, we could test for all of these cases, right? We could cover every use case and make sure that our code is absolutely airtight. But we can't cover every scenario, because humans are pretty bad at writing tests: not just because we are naturally lazy, and not just because we can't think of every scenario, but also because we can't anticipate the weird ways that people are going to use our software.
So that doesn't mean that you don't write tests. Instead, it means you can add to the ways that you detect bugs in your software: not just with tests, not just with manual QA, but also with Sentry. Sentry tells you about errors that happen in your application before your users see them. This allows you to address the errors before they cause a major business problem. Sentry also provides you all of the information that you need to solve the error. You get the full stack trace; you even get information about the developer who wrote the code that led to that error, so you can go and discuss with them how you may be able to mitigate the problem. Go and check it out: head over to sentry.io to get started today. That's sentry.io. Thank you again to Sentry for sponsoring today's episode of Developer Tea. So when we're writing our tests (and not just when writing our tests; remember, this applies to engaging in any particular activity that gives us extra confidence, that gives us a greater sense that our code is safe), we need to explicitly understand that this one behavior is only one part of a greater strategy. This one behavior does not mean that the next risky behavior that we engage in won't go poorly. In fact, each behavior that you engage in has its own consequences. So very often, those tests that you write have very little to do with the risky behavior that you might engage in, perhaps on another project altogether. Our brains are not extremely sophisticated when it comes to parsing when we should apply a bias or not. This is why we can carry a mood from one situation to another, why we can project the frustrations that we have in our personal life, or the frustrations that we have in our work life, onto the other. So this presents kind of a dangerous situation, where the testing and confidence that you have in one project may give you a false sense of risk mitigation on another project. This can happen in our personal life as well.
Say, for example, that you are a manager, and you know that having a really positive one-on-one, or having a really good review from the people that you manage, is a good mark, right? It's a high indicator that your risk is low for any kind of negative event to occur. So you may be tempted to, for example, skip the next one-on-one. This isn't because you're trying to act to the detriment of the people that you manage. Instead, this is your brain playing kind of a trick on you, keeping this moving average of just how good things are going, of just where your risk level is. So in order to combat this Peltzman effect, this risk compensation behavior that we engage in, there are a few things that we need to be aware of. First of all, the existence of the bias in the first place. Now, this doesn't necessarily stand on its own as a way of dealing with the bias. There's something called the bias bias: when we think that knowing about a bias helps us avoid it, and that's not always true. But if we do know that engaging in a behavior that gives us a sense of risk mitigation will likely lead us to believe that we can engage in other types of behavior, then we can kind of design those risk mitigation behaviors and watch for the resulting backlash, the resulting negative behaviors that may occur. So a simple exercise here is to write down a list of "I will stills." I'll give an example: "I'm going to cover this particular feature in integration tests, but I will still manually check that this feature works." By coming up with a list of "I will stills," or even a single "I will still," you're cognitively shortcutting that desire, that immediate reaction, to jump over the other things that would take your effort. Your brain is trying to reduce the amount of energy that it spends. Going and testing something with an automated test may give your brain this sense that you have a license to replace another type of test. Another example of an "I will still":
No matter what my performance review says, as a manager, I will continue to conduct one-on-ones at the same interval as before. This also protects against the opposite effect, which is that when we see something that seems to indicate a higher level of risk, we may amp up other behaviors that decrease our risk. The reason this can become problematic is the same imbalance reason that the Peltzman effect can become problematic, and that is going to the extremes on either end. If you go to the extreme of trying to mitigate risk by having one-on-ones every other day, now that's an excessive situation. You start having these other types of trade-offs, and even though your risk of, for example, a lack of communication goes down, other types of risk may go up. This is especially important to think about. This list of "I will stills" is particularly important to think about when you are creating a common practice that your teams share. For example (and we're actually doing this right now at Clearbit), if you implement shared practices for linting your code, this is a perfect time to identify that these practices are not taking the place of other good practices. Enforcing one policy, enforcing some kind of shared practice, can backfire and have a net negative effect. This isn't common, though, so you should still continue to develop shared practices. You should still continue to, for example, have a shared way of linting your code, most likely, because the benefits usually outweigh the costs. They usually outweigh those risks. In order to gain the maximum benefit out of any of these kinds of changes that you make, any of these improvements and mitigation behaviors that you engage in, it's important to start with that "I will still" list in mind. Thank you so much for listening to today's episode of Developer Tea. Thank you again to Sentry for sponsoring today's episode.
One of the ways that you can mitigate risk on your projects is by setting up Sentry so you can see errors before your users do. That saves you time, money, and a lot of frustration. Head over to sentry.io to get started today. Developer Tea is a part of the Spec network. If you haven't checked out the Spec network, and you're a developer or designer or somebody who's interested in creating digital products, even if it's not your career, I encourage you to go and check it out: Spec.fm. This is the place for designers and developers to level up. Today's episode was edited and produced by Sarah Jackson. Sarah does such an excellent job. Thank you so much for listening, and until next time, enjoy your tea.