
3 Cognitive Pitfalls of Mental Models

Published 12/23/2021

Mental models are very useful, and we tap into them even without knowing it. But just like any cognitive tool, our brain can play tricks on us when using mental models. In this episode we'll talk about three ways we can go wrong with mental models.

๐Ÿ™ Today's Episode is Brought To you by: LaunchDarkly

This episode is brought to you by LaunchDarkly. LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Innovate faster, deploy fearlessly, and make each release a masterpiece. Get started at LaunchDarkly.com today!

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

💬 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)

We love to talk about mental models on this podcast, and other podcasts have also picked up on this idea. It's not a new idea, but it certainly is useful, and it's particularly useful for software engineers. But there's a catch. Sometimes mental models, like almost any other tool, can go wrong. They can trick you. And this isn't some insidious trick; this is well-studied and well-established science of the brain. We don't understand the brain completely, but we do know some of the behaviors that we have as human beings. In today's episode, I want to share three ways that using mental models could go wrong. My name is Jonathan Cutrell, and you're listening to Developer Tea. My goal on this show is to help driven developers like you find clarity, perspective, and purpose in their careers. Mental models. What is a mental model? The basic idea of a mental model is to take the system that you apply to one domain and apply the same kind of system to another domain. We use a mental model to understand; it's a kind of cross-referenced understanding from one domain to another. For example, in software engineering, we use object-oriented thinking. We imagine that our code is representative of something else: one paradigm might represent a physical object, another paradigm might represent a structure, another might represent a stream of information. The truth is, our code is actually none of these things. Our code is represented as bits on a drive of some kind, and it's executed by other code. But in the way that we construct it, we have meta-information that we use so that we can build our code in ways that other humans can read and use, and in ways that we can change and extend. All of this is done by using mental models. And the vast majority of the time that we use these, we don't even realize we're doing it. We don't realize that we're using mental models in our daily lives all the time.
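To make the "same code, different mental model" point concrete, here is a minimal sketch (the names and data are invented for illustration): the same log entries viewed through an object-oriented model and through a stream-of-information model.

```python
# Model 1: a physical-object mental model -- the log is a "thing" with state.
class LogFile:
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)


# Model 2: a stream-of-information mental model -- the log is values flowing past.
def log_stream(entries):
    for entry in entries:
        yield entry.upper()


log = LogFile()
log.append("server started")
log.append("request handled")

# Neither model is what the bits "really are"; each just shapes how we reason.
print(list(log_stream(log.entries)))  # ['SERVER STARTED', 'REQUEST HANDLED']
```

Both views describe the same underlying data; the model we pick changes which operations feel natural.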
We're very efficient at this as human beings. And not only are we using these models, we are composing them. We're grabbing from our past experiences, for example, to model our future experiences. A model is a kind of representation of what you might expect given certain circumstances. It's important to understand that mental models are always, always, invariably incomplete. And this is important as we talk about how these things can steer us wrong. I want to give you three ways that mental models can steer you wrong. But now that we have a picture of what this tool is, you can start to recognize that this really is beyond just a tool. This is a fundamental function of our brains. We use models to try to understand things. Right? Our brain is building a model of various types of objects from the time that we're born. If we can see an object or hear something, our brain is cataloging that information, creating models so that we can say: this sounds like, or this looks like, or this feels like X, Y, or Z. You can see how this is not only useful to understand, but also useful to do intentionally. Rather than just going through our lives every day, unintentionally composing and pulling from the models we have naturally stored in our brains, we can intentionally pull models to understand complex scenarios. But when we do this intentionally, and sometimes even when we're doing it unintentionally, but especially when we're doing it as an explicit practice of applying a model that we know about, some parts of that can go wrong. And that's what we're talking about today. The first thing that can go wrong with your models is an overfit. You're probably familiar with this term if you've ever done any kind of forecasting work or any kind of machine learning modeling.
Overfitting means that your model, in some instance, too closely represents the thing that you're trying to model. How can this be possible? It seems like it would be a great thing to have a model that perfectly fits the thing we're trying to model. But the truth is that each instance of that thing varies. Let's use a very simple example that has no actual application in real life. Let's say you're modeling a dog. An overfit model for a dog might include the exact height or color of the dog; instead of including a range, it might be explicit about those values. Now this causes obvious problems. If you encounter a dog that is one inch taller than the dog you based your model on, then your overfit model is going to classify that second dog as not a dog. And this matters not just in machine learning. Of course it applies there, but it also applies in our daily modeling and our problem-solving modeling. When we see a problem and we've created a model in our minds — for example, let's say we're trying to solve a performance issue, and we take an experience that we had with a previous performance issue and try to extrapolate every single detail about that issue onto all performance issues — now our model of performance problems is going to be overfit. Now here's what's interesting. That model might be so fit to a specific problem that you gain an extraordinary amount of confidence. In fact, it becomes overconfidence, which makes you more attached to the model. If the model is overfit, we're likely to imagine that all of the parts of that model are going to extend to all parts of the problem that we're trying to solve as well.
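The dog example above can be sketched in a few lines of hypothetical code. The attribute values here are invented; the point is only the contrast between a model that memorizes one example and one that generalizes.

```python
# An "overfit" mental model: memorizes one observed dog's exact attributes.
def is_dog_overfit(animal):
    return animal["height_in"] == 22 and animal["color"] == "brown"


# A looser, generalizable model: a plausible height range, color ignored.
def is_dog_general(animal):
    return 4 <= animal["height_in"] <= 44


rex = {"height_in": 22, "color": "brown"}   # the dog the model was "fit" to
fido = {"height_in": 23, "color": "black"}  # one inch taller

# The overfit model misclassifies fido; the general model accepts both.
assert is_dog_overfit(rex) and not is_dog_overfit(fido)
assert is_dog_general(rex) and is_dog_general(fido)
```

The same failure shows up with the performance-issue model in the episode: encode every detail of one incident, and the next incident won't match.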
So for example, let's say one of the criteria in your model for a performance issue is that you're losing revenue. Well, it's very possible that you have a performance issue where you don't lose revenue, but you've created this overextension, based on the overfit model, that all performance issues come with lost revenue. And that's not necessarily true. There's another, subtler point here about overfitting models, specifically with relation to metaphorical models, where you apply a metaphor of some system as a model to drive your principles in a different system entirely. Going back to our previous idea of object-oriented programming and the object model, we might imagine, for example, that objects grow old over time and begin to decay. Well, this may not line up with our software objects, because they don't have the same boundaries or physical constraints that physical objects have. And so if we overextend our metaphorical model, then we might make decisions that are irrelevant to the reality of the problem that we're trying to solve. We're going to take a quick sponsor break, and then I'm going to come back and talk to you about the two other pitfalls of mental models. This episode is brought to you by LaunchDarkly. LaunchDarkly is feature management for the modern enterprise, fundamentally changing how you deliver your product.
LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Wrapping code with feature flags gives you the safety to test new features and infrastructure in your production environments without impacting the wrong end users. When you're ready to release more widely, you can update the flag, and the changes are made instantaneously thanks to their real-time streaming architecture. Here's the reality about feature flags.
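The wrap-code-with-a-flag pattern described in the sponsor read looks roughly like the following hand-rolled sketch. This is not the actual LaunchDarkly SDK; the flag name, audiences, and functions here are all invented to show the shape of the technique.

```python
# In-memory flag store (a real system would fetch and stream flag state).
FLAGS = {
    "new-checkout": {"enabled": True, "audience": {"qa-team"}},
}


def flag_enabled(flag_key, user_group, flags=FLAGS):
    """A flag is on for a user only if it exists, is enabled, and targets them."""
    flag = flags.get(flag_key)
    return bool(flag and flag["enabled"] and user_group in flag["audience"])


# Deployed code path: the new feature ships "dark", gated behind the flag.
def checkout(user_group):
    if flag_enabled("new-checkout", user_group):
        return "new checkout flow"
    return "old checkout flow"


assert checkout("qa-team") == "new checkout flow"  # QA sees the new feature
assert checkout("public") == "old checkout flow"   # everyone else does not
```

Releasing more widely is then a data change (adding audiences or flipping `enabled`), not a redeploy, which is the core idea behind testing in production safely.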
There have been so many projects that I've worked on where it was either cost prohibitive or nearly impossible to actually replicate what was happening on production, whether because you can't really replicate the stresses that are put on the production environment, or because you can't replicate the data since it's sensitive. There are a lot of tricks you might be able to pull to make your staging look like production. But at the end of the day, there's going to be something different happening in production than in your staging. You can almost never replicate those one-to-one. And so especially for features that you're trying to release to a partial audience, or maybe you're just trying to QA in a production environment without actually releasing to the public, you don't have to do crazy weird-hours releases where somebody might see the release if they log on at a particular time, with your QA team staying up all hours of the night to finish testing. That stuff is over. With LaunchDarkly, you can release to just your QA folks, or to a beta testing audience, or to the wider public with a single flag change. Go and check it out. Head over to launchdarkly.com to get started for free today. That's launchdarkly.com. Thanks again to LaunchDarkly for their support of Developer Tea. We're talking about the potential downsides or pitfalls of using mental models: ways that they can bite you, or more specifically, ways that we misuse or misunderstand how models are supposed to be used. We talked about the first one, having an overfit model. The second one is resistance to change: resistance to changing or updating models as time makes them obsolete. And this one, by the way, is related to the first one.
We had the overfit at the beginning that made us resistant to changing our model, which is a common theme here: we need to be open to changing and adjusting what models we're using. It's very possible that at some point in time you picked a good model. It wasn't overfit; it was generalizable to multiple problem areas you're solving, and it was working well and performing well. And then something changes. This is happening with the global pandemic, for example. The environment is always changing, so our behaviors, or health officials' recommendations, are going to change. And we don't necessarily intuitively guess that the behaviors we had yesterday or last week or last month are no longer good behaviors. In other words, the model we used to determine what was good behavior in a pandemic a month ago is out of date. The change in the environment has made that model obsolete. And the same is true in our professional work and in our personal lives. As time goes on and the environment changes (it doesn't even require a lot of time), if something that you are not in control of is changing around you, then it's a temptation to keep using the same model you used yesterday, a model that's obsolete now, because the model was good. Right? So this is our brain playing tricks on us again, making us believe that the model we had before is going to continue being good indefinitely, regardless of the situation it's applied under. And this is a trap we can fall into precisely because mental models are so useful. The moment we get the reward of applying a useful model, especially for the tenth time, it makes us feel like it will keep working. And that makes it very difficult to reverse course.
It makes it difficult to say: this model once served us well, but now we have to retire it. We have to move on to a new model. The third and final pitfall that you might find as you're trying to apply mental models is that composition of models can be difficult, especially when the models don't necessarily align. We've already said that every model you use is incomplete, and this is by definition. If the model were complete, it would actually be the problem itself. A model is a representation of some of the principles of the problem, and the problem itself might vary, or rather it will vary, from the model by some amount. If the problem varies drastically from the model, then the model is not a very good fit for the problem. So we're looking for models that are incomplete on purpose. This, again, avoids that overfitting issue. But say we find a model that is relatively good, and then we find another model that explains a different part of the problem. Let's say model A explains the first 60 to 70% of the behavior that we're seeing, and model B explains the last 20 or 30%. But when we look at model A and model B next to each other, they collide. They don't agree with each other. How can this be possible? Well, again, this is because our models are not complete representations. A perfect example of this is that we might have archetypes of customers for a given product. The idea that we have multiple types of people that we build a product for may intuitively be difficult to accept. We might have in our minds a single archetype for a person that might use our product. And that is a type of model.
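The colliding-but-both-useful situation can be sketched concretely. In this hypothetical example (all names and numbers invented), two customer archetype models disagree with each other, yet each correctly explains a different slice of the user base, so keeping both is more useful than forcing one.

```python
# Two incomplete "customer archetype" models that disagree on specifics.
power_user = {"sessions_per_week": 20, "team_size": 1}   # archetype A
team_admin = {"sessions_per_week": 3, "team_size": 40}   # archetype B


def matches(user, archetype, tolerance=0.5):
    """Loose fit: every attribute within ±50% of the archetype's value."""
    return all(
        abs(user[key] - value) <= tolerance * value
        for key, value in archetype.items()
    )


alice = {"sessions_per_week": 18, "team_size": 1}   # heavy solo user
bob = {"sessions_per_week": 3, "team_size": 35}     # light-use admin

# Each model explains one user that the other model rejects outright,
# so the two models "collide" while both remaining applicable.
assert matches(alice, power_user) and not matches(alice, team_admin)
assert matches(bob, team_admin) and not matches(bob, power_user)
```

Neither archetype is wrong; they're incomplete in different directions, which is exactly what makes composing them awkward.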
This archetype is a kind of model, a customer model. But the customer model may not be complete. And so what we might be tempted to do, instead of trying to compose these conflicting models, customer type A and customer type B, is to create a middle-ground model: something that is more lossy, more incomplete, than model A or B, and that tries to strike a balance between them. The problem is that somewhere in the middle could be a non-customer. Somebody who averages these attributes out might be an incorrect estimation of a customer altogether. And so when you are picking your models intentionally, keep in mind that it's okay for two models to be applicable to your scenario even if those models collide, even if they disagree with each other, even if the concept of model A doesn't line up with the concept of model B. Ultimately, mental models are still a fundamental tool. I don't even have to tell you to keep using them, because you naturally will. And you can learn more models, apply more cross-domain thinking, and benefit from it. I highly recommend that you do. But as you continue learning and applying models, keep in mind that some of the same pitfalls we find in other techniques and tools are going to show up here as well. Just because we're applying a model doesn't mean that we've sheltered ourselves from those problems. Instead, we should stay vigilant and willing to accept new models and try things out. We should be willing to experiment and consider a model only as good as it is helpful. Thanks so much for listening to today's episode of Developer Tea. Thank you again to today's sponsor, LaunchDarkly. You can get started with enterprise-grade feature flagging, and you can avoid accidentally launching all of the code that you've been working on to a larger audience than you intended. Head over to launchdarkly.com to get started today.
Hey, if you are enjoying Developer Tea, or if you've enjoyed Developer Tea for years and you've held out and haven't joined our Discord community, now is a great time to do that. Really, anytime is a great time to do that. We're not going to pressure you into joining this community, and there are no plans to ever charge for it. Not just no plans: there's never going to be a charge to join this community. Everyone is welcome. If you're listening to this show, head over to developertea.com/discord to get started today. Thanks so much for listening. And until next time, enjoy your tea.