
3 Cognitive Pitfalls of Mental Models

Published 12/23/2021

Mental models are very useful, and we tap into them even without knowing it. But just like any cognitive tool, our brain can play tricks on us when using mental models. In this episode we'll talk about three ways we can go wrong with mental models.

๐Ÿ™ Today's Episode is Brought To you by: LaunchDarkly

This episode is brought to you by LaunchDarkly. LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Innovate faster, deploy fearlessly, and make each release a masterpiece. Get started at LaunchDarkly.com today!

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)
We love to talk about mental models on this podcast, and other podcasts have picked up on this idea too. It's not a new idea, but it's certainly useful, and it's particularly useful for software engineers. But there's a catch. Sometimes mental models, like almost any other tool, can go wrong. They can trick you. And this isn't some insidious trick; this is all well-studied and well-established science of the brain. We don't understand the brain completely, but we do know some of the behaviors that we have as human beings. In today's episode, I want to share three ways that using mental models could go wrong.

My name is Jonathan Cutrell. You're listening to Developer Tea. My goal on this show is to help driven developers like you find clarity, perspective, and purpose in your careers.

What is a mental model? The basic idea of a mental model is to take the system that you apply to one domain and apply the same kind of system to another domain. We use a mental model to build a kind of cross-referenced understanding from one domain to another. For example, in software engineering, we use object-oriented thinking. We imagine that our code represents something else: one paradigm might represent a physical object, another paradigm might represent a stream of information. The truth is, our code is actually none of these things. Our code is represented as bits on a drive of some kind, and it's executed by other code. But we construct it with meta-information so that other humans can read and use that code, and so that we can change it and extend it. All of this is done by using mental models.

The vast majority of the time that we use these, we don't even realize we're doing it. We don't realize that we're using mental models in our daily lives all the time. We're very efficient at this as human beings. And not only are we using these models, we are composing them. We're grabbing from our past experiences, for example, to model our future experiences. A model is a kind of representation of what you might expect given certain circumstances. It's important to understand that mental models are always, invariably incomplete, and this matters as we talk about how these things can steer us wrong.

I want to give you three ways that mental models can steer you wrong. But now that we have a picture of what this tool is, you can start to recognize that it's really beyond just a tool; it's a fundamental function of our brains. We use models to try to understand things. Our brain is building models of various types of objects from the time that we're born. If we see an object or hear something, our brain is cataloging that information, creating models so that we can say, okay, this sounds like, this looks like, this feels like x, y, or z.

You can see how this is not only useful to understand, but also useful to do intentionally, rather than just going through our lives every day unintentionally composing and pulling from the models that are naturally stored in our brains. We can intentionally pull models to understand complex scenarios. But when we do this intentionally, and sometimes even when we're doing it unintentionally, but especially when we're applying a model that we know about as an explicit practice, some parts of that can go wrong. That's what we're talking about today.
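Tying back to the object-versus-stream idea from the introduction, here is a minimal editorial sketch (not something discussed verbatim on the show) of how the same log data can be modeled as an object you hold or as a stream flowing past you. The class and function names are hypothetical.

    from dataclasses import dataclass, field
    from typing import Iterable, Iterator

    # Mental model 1: the log is a "physical object" we hold and mutate.
    @dataclass
    class LogBook:
        entries: list[str] = field(default_factory=list)

        def add(self, entry: str) -> None:
            self.entries.append(entry)

        def errors(self) -> list[str]:
            return [e for e in self.entries if e.startswith("ERROR")]

    # Mental model 2: the log is a stream of information flowing past us.
    def error_stream(lines: Iterable[str]) -> Iterator[str]:
        for line in lines:
            if line.startswith("ERROR"):
                yield line

    # Both are models; neither is what the bits on disk "really" are.

Neither version is more "true" than the other; each model simply makes different changes and extensions easier to reason about.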
The first thing that can go wrong with your models is an overfit. You're probably familiar with this term if you've ever done any kind of forecasting work or machine learning modeling. Overfitting means that your model, in some instance, too closely represents the thing that you're trying to model. How can that be a problem? It seems like it would be a great thing to have a model that perfectly fits the thing we're trying to model. But the truth is that each instance of that thing varies. Let's use a very simple example that has no actual application in real life: let's say you're modeling a dog. An overfit model for a dog might include the height or the color of the dog, and instead of including a range, it might specify exact values. This causes obvious problems. If you encounter a dog that is one inch taller than the dog you based your model on, your overfit model is going to classify that second dog as not a dog.

Now, this matters not just in machine learning, though of course it applies there. It matters in our day-to-day modeling and our problem-solving modeling. Let's say we're trying to solve a performance issue, and we take an experience that we had with a previous performance issue and try to extrapolate every single detail about that issue onto all performance issues. Now our model of performance problems is overfit. And here's what's interesting: that model might be so fit to a specific problem that you gain an extraordinary amount of confidence. In fact, it becomes overconfidence, which makes you more attached to the model. You're likely to imagine that all of the parts of that overfit model will extend to all parts of the problem you're trying to solve. So, for example, let's say one of the criteria in your model for a performance issue is that you're losing revenue. It's very possible that you have a performance issue where you don't lose revenue, but you've created an overextension, based on the overfit model, that all performance issues come with lost revenue, and that's not necessarily true.

There's another, more subtle point here about overfitting models, specifically with metaphorical models, where you apply a metaphor of some system as a model to drive your principles in a different system entirely. Moving back to our previous idea of object-oriented programming and the object model, we might imagine, for example, that objects grow old over time and begin to decay. This may not line up with our software objects, because they don't have the same boundaries or physical constraints that physical objects have. If we overextend our metaphorical model, we might make decisions that are irrelevant to the reality of the problem we're trying to solve.
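Here's a minimal sketch of that dog example (an editorial illustration; the specific thresholds are made up). The overfit check memorizes one dog's exact attributes, while the looser model accepts a range:

    # Overfit model: memorizes one specific dog's attributes exactly.
    def is_dog_overfit(height_inches: float, color: str) -> bool:
        return height_inches == 24.0 and color == "brown"

    # Looser model: allows a range of heights and ignores color entirely.
    def is_dog_general(height_inches: float, color: str) -> bool:
        return 4.0 <= height_inches <= 44.0

    print(is_dog_overfit(25.0, "brown"))   # False: one inch taller, rejected
    print(is_dog_general(25.0, "brown"))   # True: still comfortably a dog

The looser model is deliberately incomplete; that incompleteness is what lets it generalize to dogs it has never seen.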
We're going to take a quick sponsor break, and then I'm going to come back and talk to you about the two other potential pitfalls of mental models.

This episode is brought to you by LaunchDarkly. LaunchDarkly is feature management for the modern enterprise, fundamentally changing how you deliver software. Here's how it works. LaunchDarkly enables development and operations teams to deploy code at any time, even if a feature isn't ready to be released to users. Wrapping code with feature flags gives you the safety to test new features and infrastructure in your production environments without impacting the wrong end users. When you're ready to release more widely, you can update the flag and the changes are made instantaneously, thanks to their real-time streaming architecture.

Here's the reality about feature flags. There have been so many projects I've worked on where it was either cost-prohibitive or nearly impossible to actually replicate what was happening in production, whether because you can't really replicate the stresses that are put on the production environment or because you can't replicate the data, since it's sensitive data. There are a lot of tricks you might be able to pull to make your staging look like production, but at the end of the day, there's going to be something different happening in production than in your staging environment. You can almost never replicate those one-to-one. That's especially true for features you're developing that you want to release to a partial audience, or that you just want to QA in a production environment without actually releasing them to your production audience. You don't have to do crazy weird-hours releases, where somebody might see the release if they log on at a particular time and your QA team has to stay up all hours of the night to finish testing. That stuff is over. With LaunchDarkly, you can release to just your QA folks, or to a beta testing audience, or to the wider public, with a single flag change. Go and check it out. Head over to launchdarkly.com. You can get started for free today. That's launchdarkly.com. Thanks again to LaunchDarkly for their support of Developer Tea.
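As a rough illustration of the release pattern described in that sponsor read, here is a minimal, generic feature-flag sketch. This is not LaunchDarkly's SDK; the flag store, segment names, and function names are all hypothetical, purely to show how a single flag change can widen an audience from QA to beta to everyone.

    # A deliberately simple in-memory flag store; real systems stream flag
    # updates from a service so changes take effect without a redeploy.
    FLAGS = {
        "new-checkout": {"enabled_segments": {"qa"}},  # widen to {"qa", "beta"} or {"everyone"}
    }

    def flag_enabled(flag_name: str, user_segment: str) -> bool:
        segments = FLAGS.get(flag_name, {}).get("enabled_segments", set())
        return "everyone" in segments or user_segment in segments

    def checkout(user_segment: str) -> str:
        # The new code path ships to production but stays dark until the flag opens up.
        if flag_enabled("new-checkout", user_segment):
            return "new checkout flow"
        return "old checkout flow"

    print(checkout("qa"))      # new checkout flow
    print(checkout("public"))  # old checkout flow, until the flag is widened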
We're talking about the potential downsides or pitfalls of using mental models: ways that they can bite you, or more specifically, ways that we misuse or misunderstand how models are supposed to be used. We talked about the first one, having an overfit model.

The second one is resistance to change. This one, by the way, is related to the first one: resistance to changing or updating models as time makes them obsolete. The overfit we had in the beginning made us resistant to changing the model, which is a common theme here; we need to be open to changing and adjusting which models we're using. So the second pitfall is resistance to changing or updating the models we're using as time makes them obsolete. It's very possible that you applied a good model. At the point in time when you picked it, it wasn't overfit; it was generalizable to multiple problem areas you were solving, and it was working well and performing well. Then something changes. This is happening with the global pandemic, for example. The environment is always changing, and our behaviors, or health officials' recommendations, change with it. We don't necessarily intuitively guess that the behaviors we had yesterday, or last week, or last month are no longer good behaviors. In other words, the model we used a month ago to determine what good behavior in a pandemic looks like is out of date. The change in the environment has made that model obsolete. The same is true in our professional work and in our personal lives as time goes on.

As the environment changes (and it doesn't necessarily even require a lot of time), if something that you are not in control of is changing around you, then the same model that you used yesterday, or at a previous time, may be obsolete. Now, there's a temptation here, because the model was good. This is our brain playing tricks on us again, making us believe that the model we had before is going to continue being good indefinitely, regardless of the situation the model is applied under. This is a trap we can fall into because mental models are so useful: the moment we get the reward of applying a useful model, or doing that for the tenth time, it becomes very difficult to reverse. It's hard to say, this model once served us well, but now we have to retire it and move on to a new model.

The third and final pitfall you might find as you're trying to apply mental models is that composition of models can be difficult, especially when the models don't align. We've already said that every model you use is incomplete, and this is by definition: if the model were complete, it would actually become the problem itself. The model is a representation of some of the principles of the problem, and the problem itself will vary from the model by some amount. If the problem varies drastically from the model, then the model is not a very good fit for the problem. We're looking for models that are incomplete on purpose; this, again, avoids the overfitting issue. But say we find a model that is relatively good, and then we find another model that explains a different part of the problem. Let's say model A explains the first 60 to 70% of the behavior that we're seeing, and model B explains the last 20 or 30%. If we look at model A and model B next to each other, they collide. They don't agree with each other. How can this be possible? Again, it's because our models are not complete representations.

A perfect example of this is the archetypes of customers we might have for a given product. We might be prone to reject the second or third archetype we're seeing in our data, based on our confidence that the first archetype we've selected represents more of the target audience than it actually does. Let me say this again: the idea that we build a product for multiple types of people may intuitively be difficult to accept. We might have in our minds a single archetype for the person who might use our product. That archetype is a kind of model, a customer model, but the customer model may not be complete. What we might be tempted to do, instead of trying to compose these conflicting models, customer type A and customer type B, is to create a middle-ground model, something that is more lossy, more incomplete, than model A or model B, and that tries to strike a balance between them. The problem is that somewhere in the middle could be a non-customer; somebody who averages those traits out might not be an accurate picture of any customer at all. When you are picking your models intentionally, keep in mind that it's okay for two models to be applicable to your scenario even if those models collide, even if they disagree with each other, even if the concept of model A doesn't line up with the concept of model B.
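Here's a small worked example of that middle-ground trap (an editorial illustration with made-up numbers): averaging two real customer archetypes can produce a profile that matches neither.

    # Two hypothetical customer archetypes, described by a few traits.
    archetype_a = {"team_size": 2,   "budget_per_month": 20,   "wants_self_serve": 1.0}
    archetype_b = {"team_size": 400, "budget_per_month": 5000, "wants_self_serve": 0.0}

    # The tempting "middle ground" model: average the two archetypes together.
    blended = {k: (archetype_a[k] + archetype_b[k]) / 2 for k in archetype_a}
    print(blended)
    # {'team_size': 201.0, 'budget_per_month': 2510.0, 'wants_self_serve': 0.5}
    # A 201-person team with a mid-range budget that half-wants self-serve may
    # describe nobody in your actual market: a non-customer.

Keeping both archetypes as separate, conflicting models preserves more information than collapsing them into one blended model.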
Ultimately, mental models are still a fundamental tool, and I don't even have to tell you to keep using them, because you naturally will. But you can learn more models, apply more cross-domain thinking, and benefit from it, and I highly recommend that you do. As you continue learning and applying models, keep in mind that some of the same pitfalls we find in other techniques and tools are going to show up here as well. Just because we're applying a model doesn't mean we've sheltered ourselves from those problems. Instead, we should stay vigilant, be willing to accept new models and try things out, be willing to experiment, and consider a model only as useful as it is helpful.

Thanks so much for listening to today's episode of Developer Tea. Thank you again to today's sponsor, LaunchDarkly. You can get started with enterprise-grade feature flagging, and you can avoid accidentally launching all of the code you've been working on to a larger audience than you intended. Head over to launchdarkly.com to get started today.

If you're enjoying Developer Tea, or if you've enjoyed Developer Tea for years and you've held out and haven't joined our Discord community, now is a great time to do that. Really, any time is a great time to do that. We're not going to pressure you into joining this community, and there are no plans, ever, to charge for it; not just no plans, there's never going to be a charge to join this community. Everyone who listens to this show is welcome. Go to developertea.com/discord to get started today. Thanks so much for listening, and until next time, enjoy your tea.