Your System is Perfectly Designed for Your Current Outcomes
Published 7/3/2025
This episode introduces the potentially controversial principle that your system is perfectly designed for its current outcomes, urging listeners to embrace greater responsibility for systemic issues. It explores how to redefine system boundaries to holistically integrate all influencing factors, like talent and organisational processes, ensuring that interventions are effective and targeted.
- Uncover the principle that your system is perfectly designed for the results you are getting, prompting a re-evaluation of what constitutes a "good" system when outcomes are undesirable.
- Learn why arbitrary system boundaries often lead to critical factors, such as talent, being excluded, and how to consider a system's full scope regardless of traditional lines of responsibility.
- Discover how incorporating talent and other seemingly external factors into your system design can lead to more efficient and effective solutions, rather than simply patching symptoms.
- Explore the distinction between judging decisions by their outcomes (resulting) and designing systems that proactively reduce uncertainty and improve the likelihood of success.
- Understand that system thinking extends beyond technical architecture to encompass processes, policies, culture, and interpersonal dynamics, which collectively influence organisational outcomes.
📮 Ask a Question
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
📮 Join the Discord
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!
🧡 Leave a Review
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
In today's episode, we're going to talk about a potentially controversial principle that I want to share with you. And we're going to frame it specifically for you as you grow in your career. As you become a more senior engineer, a staff-level engineer, a director, or an IC manager, it doesn't really matter; this is still going to hold true. And it's a little bit controversial because it requires that you take more responsibility for what's happening. We'll talk about why that's the case. We're going to talk about building systems that work. Now, specifically, when we say systems, in this case we're not necessarily talking about technical architecture. You can apply some of these principles to technical architecture, but that's not really the meat of what we're talking about in this episode. Instead, we're talking about the kinds of events and policies that you have in your organization, the multitude of potential reasons why something is behaving the way it is: why some people are behaving the way they are, why some resources are being funneled the way they are. There are so many possible system effects for you to pay attention to. So when we say system, in this case, we're talking about systems design and systems thinking rather than architectural, technical systems. These are processes, for example, that your team is following. These are hiring processes, cultural effects, infrastructure, interpersonal effects. There's a whole variety of things that you might include when we talk about systems. So let's make this more concrete with an example. Let's say you're in an organization and you want to examine your quality control system. And specifically, you want to be able to catch bugs before they get released to production.
It's a great goal to have, right? We don't want to release bad software. Instead, we want to catch bugs and deal with them before we release, upstream. We want a process in place, a good system for finding and squashing bugs ahead of time, preventing them from going to production. And let's imagine that you've figured out what you think is the perfect system for doing this. Maybe there's some kind of review required, perhaps for particularly sensitive things, or for new code. You're going to require some coverage, some kind of automated testing, integration testing. You're going to do all of the industry-standard things. So you write this perfect system and set it into motion. And then things are not working. So you have a meeting: you, your boss, and the whole QA guild, or whatever group is responsible for trying to make this happen; a bunch of senior engineers, maybe, in the room. And the outcome that you come away with is that the system is designed perfectly fine. The problem is the talent. We're missing talent. The quality of the reviews we're putting through is not very good, and so they're not catching the bugs, because the reviewers don't have enough experience. And so you all walk away accepting the idea that you've developed and designed a good system, but there's some other problem, something that you've got to deal with in order for the system to work. Here is the principle that I want you to take away, and why this thinking is unequivocally wrong. You're incorrect about your system being good. Your system is perfectly designed for the results that you are getting. Your system, as it is now, is perfectly designed for the results you are getting now. What does that mean? What it really means is that your system can't have arbitrary boundaries.
We choose to think about systems with boundaries. But when we're asking whether our quality control system is working, if we say that the system is good while the thing we care about is failing, then our boundary for what counts as a good system is missing a critical factor. We are struggling to incorporate talent into our system. We imagine, for example, that we have these discrete systems: our QA system, and our talent management or recruiting system. The truth is that if your talent has any systematic impact on your ability to catch bugs, which clearly it does, then your quality assurance system should take talent into account. This is a very simple concept, but so often we, I might even say, delude ourselves into believing that our systems are discrete from each other. Why do we do this? Probably the most likely reason is that it allows us to assign responsibility more cleanly. So in this case, take that QA guild, likely a bunch of senior engineers. Senior engineers are, broadly speaking, not usually responsible for the talent development of more junior engineers. So if you're going to give some kind of system design task to your senior engineers, like "go figure out our QA process," and you tell them to develop the system in the best way possible, they're going to look at the things that they are responsible for, that they own, that they have some agency over, and they're going to consider their system to end at that boundary. So what do we do about that? First of all, we need to evaluate the system as if responsibility were not a factor in our system design. We should consider the system regardless of responsibility lines, domain lines, or whatever those somewhat arbitrary divisions are. Why do those divisions exist? Because we want to make sure that we limit the scope of responsibility for a given person.
Otherwise, if we're all responsible for everything, then none of us are responsible for anything. So in this case, what is the diagnostic that we would use? First, we need to correct our thinking about systems. Our system is perfectly designed for the outcomes, the output, that we are getting. Now I want to add a little bit of a caveat here, because you've heard us talk about resulting a lot on this show. If you're not familiar with what resulting is, it's judging the quality of a decision based on the outcome of that decision. This seems intuitively correct, but it's wrong. Statistically, it's most often right in practice, but it's not technically always correct. Why? Because the quality of your decision doesn't know about the outcome. You're trying to make a decision in order to optimize for an outcome, but you don't have certainty about whether your decision will achieve that outcome. So, in other words, let's say you had information that if you were to go route A, you'd have a 60% chance of a good outcome, while route B has a 40% chance of a good outcome. Any good, reasonable thinker is going to choose route A, all things being equal, right? It's a higher chance of success. But if a bad outcome occurs on path A, which, by the way, will happen 40% of the time, we have a tendency to judge ourselves negatively, as if we made the wrong decision. Because of the outcome, because of uncertainty, because of something that we didn't know, we judge ourselves negatively. So very often, what we will do is try to develop systems that reduce that uncertainty, so that the decisions we're making have a higher likelihood of adhering to what we're trying to get. So let's go back to our example. We have
a talent pool of engineers, and we want a solid ability to identify bugs in 100% of cases. Now, if you've been doing this career for very long, you know that's not possible. Even the best engineers, the world's best engineers, have encountered bugs that were quite literally impossible to predict. So we want to develop our systems in such a way that we're catching as many bugs as possible. How do we do that? Well, we've created all of these protocols: our testing, our review, all these validations. There are a lot of things we can do to reduce the risk, but we won't ever get to 0% risk. However, one of the things that might increase the risk is if our talent pool is limited, if we have limited experience on the team, especially among those who are reviewing code and looking for regressions. So what do we do about that? There are a lot of different strategies; we're not going to go into every one. You may, for example, require two reviews instead of one. You may require review by someone who has a certain amount of experience, or who has expertise in this particular area. You may require more rigorous testing. Or, and this is really where it gets interesting, maybe you change your hiring procedures. We're talking about avoiding resulting and developing systems that reduce the likelihood that the decisions we're making will produce a bad outcome. So if the system that we have to catch bugs is failing, and we've diagnosed the reason it's failing as something dealing with talent, then we need to develop our understanding of our system with talent in mind. Now this may include training, right?
It might include a different kind of hiring procedure. Maybe we start doing some more focused discussions in our teams about testing, sharing some common knowledge and pathways about testing. Maybe we re-emphasize the importance of quality, or create incentives for people to deliver higher quality. There are a lot of different things that you could do, but if you're ignoring the talent aspect because it's not part of the system, then you could have an irreducible risk, or at least a very expensive risk, one that's hard to reduce without looking at that subsystem of talent. So instead, we approach this from many varied vantage points, and we collaborate on our systems so that we're not drawing arbitrary lines of responsibility, creating subsystems that don't actually have the effects that we want. Imagine that we are adding more review steps to accommodate the fact that we can't change our talent. That's not as efficient an intervention in the system as going and actually fixing the talent pool: fixing that recruiting process, fixing whatever the thing is that's keeping the talent pool from improving. You could also look at it from another angle. You could say, well, we need to reduce the complexity of our systems so that the talent we have can work effectively against them. Once again, you need to be able to introduce this talent aspect into your systems thinking in order to make that decision in the first place. Thank you.
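The route A / route B reasoning from the episode can be made concrete with a quick simulation. This is a minimal sketch (the `simulate` helper, trial count, and seed are illustrative, not anything from the show): over many decisions, the 60% route clearly wins, yet any single choice of route A still fails a large minority of the time, which is exactly why judging one decision by one outcome (resulting) misleads.

```python
import random

def simulate(p_success: float, trials: int = 100_000, seed: int = 42) -> float:
    """Estimate the long-run success rate of repeatedly taking one route."""
    rng = random.Random(seed)
    wins = sum(rng.random() < p_success for _ in range(trials))
    return wins / trials

# Route A succeeds 60% of the time, route B 40% of the time.
rate_a = simulate(0.60)
rate_b = simulate(0.40)

# Over many decisions, choosing A is unambiguously the better policy...
assert rate_a > rate_b
# ...yet roughly 4 in 10 individual A-decisions still end badly,
# and none of those bad outcomes mean the decision was wrong.
print(f"route A: {rate_a:.2%} good outcomes, route B: {rate_b:.2%}")
```

The point of the simulation is that decision quality lives in the probabilities known at decision time, not in any single realized outcome.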
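One way to picture "incorporating talent into the QA system," as the episode suggests, is a review policy whose rules explicitly depend on the talent pool. Everything below is hypothetical (the `Reviewer` fields, the five-year seniority threshold, and the `approvals_required` rule are invented for illustration); it simply shows a policy that compensates with an extra approval when no senior or domain-expert reviewer is available, rather than pretending talent is outside the system's boundary.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    years_experience: float
    domain_expert: bool  # e.g. owns or deeply knows the subsystem being changed

def approvals_required(change_is_sensitive: bool, reviewers: list[Reviewer]) -> int:
    """Hypothetical policy that folds the talent pool into the QA rules.

    Sensitive changes need two approvals by default; if no available
    reviewer is senior (5+ years, an arbitrary threshold here) or a
    domain expert, require one extra approval to offset the risk.
    """
    base = 2 if change_is_sensitive else 1
    has_senior = any(r.years_experience >= 5 or r.domain_expert for r in reviewers)
    return base if has_senior else base + 1

team = [Reviewer("a", 1.0, False), Reviewer("b", 2.0, False)]
assert approvals_required(True, team) == 3   # junior-only pool: extra review
team.append(Reviewer("c", 8.0, True))
assert approvals_required(True, team) == 2   # senior available: base policy
```

As the episode notes, adding review steps like this is a patch on the symptom; the same talent-aware framing is what justifies the deeper fixes (training, hiring, or simplifying the system itself).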