Mental Models w/ Gabriel Weinberg, CEO of DuckDuckGo (part 2)
Published 5/31/2019
Today's guest, Gabriel Weinberg, the CEO of DuckDuckGo, uses mental models to help steer the company. What we're talking about today with Gabriel are mental models for building a team and a business.
In part 2 of this interview, we dive deeper into Gabriel's mental models, specifically for engineers. His book, Super Thinking, on which we base the discussion, can be found here: Super Thinking.
Get in touch
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know you found this episode valuable, I encourage you to join the conversation, or start your own, on our community platform: Spectrum.chat/specfm/developer-tea
🧡 Leave a Review
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
🍵 Subscribe to the Tea Break Challenge
This is a daily challenge designed to help you become more self-aware and be a better developer so you can have a positive impact on the people around you. Check it out and give it a try at https://www.teabreakchallenge.com/.
🙏 Thanks to today's sponsor: GitPrime
Our sponsor GitPrime published a free book, 20 Patterns to Watch for Engineering Teams, based on data from thousands of enterprise engineering teams. It's an excellent field guide to help you debug your development process with data.
Go to GitPrime.com/20Patterns to download the book and get a printed copy mailed to you - for free. Check it out at GitPrime.com/20Patterns.
Transcript (Generated by OpenAI Whisper)
In today's episode, we continue our discussion about mental models with Gabriel Weinberg. Gabriel runs DuckDuckGo as the CEO, and he's written a book about mental models called Super Thinking. It's a collection of 300 mental models, and we've been going through some of them on this episode. If you haven't listened to part one, I encourage you to go back and listen to it. It gives you a primer on what mental models are, and then we start going through some that are useful to engineers. We'll continue that discussion, and we'll have a few other questions for Gabriel in today's episode. My name is Jonathan Cutrell. You're listening to Developer Tea, and my goal on the show is to help driven developers like you find clarity, perspective, and purpose in your career. Now let's get straight into the interview with Gabriel Weinberg. I'd love to know, do you have a way of testing your mappings? When you come across a situation and you're trying to map a decision onto a model, how do you validate that mapping? Yeah, I do, but we'll see if you like the answer or not. Because of the biases that we've talked about, and there are many, including the one you just mentioned, a predilection for a certain way of looking at things, I think it's very hard to do yourself. And so the main reliance that I use, and have also tried to operationalize at DuckDuckGo, is to have multiple people involved. A lot of the meetings that we're talking about are actually collaborative meetings, where someone has written down what they think is the right thing to do, which may include literally writing down some named models as part of the thinking, and then other people question those assumptions. We've taken it so far that, as you mentioned, we validate direction. We have three values at DuckDuckGo. One of them is question assumptions, and one of them is validate direction.
And so they're totally built into our processes, and we encourage people to question other people's assumptions. That can be challenging at times, but that's the way I've found to make things work. Now, in my life at DuckDuckGo, it's all the time. In my personal life, that's generally my wife questioning my assumptions. But I do think you generally need somebody else. I think it's very hard to do alone. I totally agree with that. That's actually something we've talked about quite a bit on the show. It's one of the things that Ray Dalio talks about in his book Principles: having people who are believable in subjects. You have a preconceived notion, and you check that notion against the people who are most believable in that particular category. In addition to that, I think it's really critical that we check these ideas against a diverse group of people. And when I say diverse, I don't just mean people of different backgrounds. I also mean people with diverse experiences and diverse perspectives. The reasoning for that is, if you have a lot in common with another person, you don't just share surface-level things, like liking the same music or hanging out at the same places; you may also have the same kinds of perspectives, and those will shape your biases. So you end up making similar decisions and similar judgment calls. And if you have a bunch of people in the same room who look the same, act the same, and have had similar experiences in life, then they're probably also going to have similar decision making. Yeah, absolutely. I'd put another layer on that as well. I totally agree with that, and we have a core objective to hire a diverse team in that diversity-of-thought way. But one other thing I've realized is that even if you have a diverse team, if everyone is...
Say you have a company objective or a big project that a bunch of developers are working on, and they've been working on it for a while. They can all get into the same mindset, even though they're not the same person, and convince themselves that a given decision was the right one. You often need someone outside that group to be the question-assumptions person. So we try to do this in a number of ways. One thing is that all the objectives in the company report out weekly on what's going on, and we do that at the project level too. Anyone can follow any project and objective, and people outside a project or objective are encouraged to ask what might be considered stupid questions, or just share other thoughts that they have. Often it's those outsiders who really shake things to the core. Not always, but it's often the outsiders who are asking things the insiders are just too far down a direction to be able to question anymore. So I'm going to read something from Wikipedia that's exactly relevant to this. I assume you're familiar with the concept of a red team. Yes, yes. And that's exactly what this is. It's an independent group that challenges an organization to improve its effectiveness by assuming an adversarial role or point of view. That's the very formalized Wikipedia version of this, but the idea is useful. I believe it's been used in military groups, and it's certainly been used in journalism, where somebody who has not been involved in the actual reporting will come in and try to tear the story apart before it goes out. Exactly.
And there's a reason why those clichés, you know, "fresh pair of eyes" and things like that, are true, because it really is a fresh perspective that's required in some of these cases. Yes, it's like you have this local sense of diversity and then a more global or long-running sense of diversity, and both are important. Exactly. So I'd love for you to share. I know you have a list of these that you think are particularly relevant to developers. I'd love for you to share another one of those, perhaps one that's not as intuitive to us; maybe it's the opposite of what you might intuitively assume. Yeah, I have a couple. You can tell me how counterintuitive they are. One that I think is in practice not very intuitive is the concept of path dependence. What this means is that you make little decisions all the time, and you may not realize that those decisions can have cascading effects that really constrain your behavior further on. For example, in a developer context, that might be a quick choice to use a tool or a library, which you didn't fully evaluate as the best tool or library for the job. Then all of a sudden, a month into the project or sometime later, you're running into trouble, but that library is now so embedded in your code that it would take a lot of effort to strip out, or the tool is embedded in your infrastructure. A canonical example for developers at a company: maybe really early on, someone didn't think too hard about which bug-reporting software to use, and then all of a sudden there are 5,000 bugs in it and you don't want to switch systems, even though it's a suboptimal system. So with that mental model in mind, you want to check those decisions a little harder and ask: does this have a path dependence problem or not?
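The library lock-in Gabriel describes here is commonly hedged by keeping the rest of your code dependent on an interface you own rather than on the library itself, the adapter idea that comes up next in the conversation. A minimal Python sketch; all names here are hypothetical, invented for illustration:

```python
from abc import ABC, abstractmethod

# The rest of the codebase depends only on this interface,
# never on a particular bug-tracker library.
class BugTracker(ABC):
    @abstractmethod
    def report(self, title: str, details: str) -> int:
        """File a bug and return its id."""

class InMemoryTracker(BugTracker):
    """A stand-in backend; swapping it later touches only this class."""
    def __init__(self):
        self._bugs = []

    def report(self, title, details):
        self._bugs.append((title, details))
        return len(self._bugs)  # ids start at 1

def file_regression(tracker: BugTracker) -> int:
    # Application code is written against the interface only.
    return tracker.report("login fails", "500 on POST /login")
```

Swapping the tracker backend later means writing one new subclass; nothing else in the codebase has to change, which is exactly the path-dependence cost this model warns about.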
And the opposite model is preserving optionality: if there's a choice where you're not really committing to something fully, that might be the better choice at the moment. Now, that can also have a cost, so you have to weigh that. But yeah. So tell me, was that counterintuitive? Yeah, I think it's not necessarily intuitive that a simple decision today could have cascading effects into the future. On the other hand, you have developers who will spend a lot of time trying to analyze what the perfect choice is, and the second model you mentioned, preserving optionality, may actually be a better use of their time. So perhaps, just as a concrete example, instead of trying to arduously determine which particular code package you want to use, maybe you write an adapter so that you can switch packages out in the future. That would be a good use of your time and energy, will likely pay dividends in the future, and is a fairly small investment. Exactly. A couple are related to that. There's the model of analysis paralysis, which can happen to developers: they go way too deep into something that doesn't necessarily matter, where they've already reached diminishing returns on a decision. Yeah. Now, that one's probably quite intuitive for a lot of us. That one's more intuitive. I mean, I've got two more if you're up for them. Yeah, let's go. So one that is, I think, very counterintuitive to people: we have a whole chapter on basically the statistics models. We try to stay away from the equations; you don't necessarily need to know all the underlying math.
But we really think that developers, and everyone, should know what the concept of statistical significance, as it's used in A/B testing, really means, so that when you're part of a project that uses those techniques, you can really appreciate the numbers and the decisions that come out of it. I won't get into the full explanation here because that would take a while, but I think that concept is one people really should take the time to understand. And I think a lot of people, especially developers, and I see this a lot in our company, can get scared of it because it feels very mathy. Maybe they didn't take statistics, or maybe they felt it was too difficult. But I truly believe there is a way to understand it that anyone can grasp, and we did try to write it that way in the book. It's worth taking the time to understand that concept. Yeah, I'm going to share a personal story here, because I think it's relevant to this discussion on statistics. There's another really deep-dive discussion on how statistics can relate to developing beliefs; I talked with Annie Duke about a similar topic. The idea is that we have these beliefs that we develop over time, and our brains typically try to make those beliefs binary: we either do or don't believe something; we don't have a continuous scale of belief. Her message to the world is: look at your beliefs more like bets. How much would you bet on that? It kind of breaks your brain out of that binary framing. So, the personal story: my wife and I are expecting our second child. Congratulations. Thank you. And she recently has had this odd symptom where her hands and her feet are itching. And, you know, it's summer.
It's probably allergies or something; it's hot, and there are so many things that go on in her body during pregnancy. So it wouldn't be surprising if there were some innocuous reasons why her hands and her feet are itching. We go to the doctor, and of course we've also checked online to see what could be causing this. One of the main things that might cause it, although it's still quite unlikely, is called cholestasis. Cholestasis is essentially an issue that can happen both when you're pregnant and when you're not, but there's a specific kind that happens when you're pregnant. So we had a test done, and we're actually still waiting on the results, and I assume they're going to come back negative for cholestasis. We were discussing the possible outcomes, and my wife, who has done a little research on cholestasis, says to me, "You know, it's really likely that we're going to end up in the NICU." And I said, "Is it likely, or is it more likely?" This was a moment where we were talking about statistics but experiencing it in a very personal way. There's a difference between "we should expect to end up in the NICU" and "it's a little more likely than it was, but still incredibly unlikely." Statistically, we still shouldn't believe that we're going to end up in the NICU. But because of a lot of factors, which we won't dive into, it's easy to see "more likely" and replace it with "likely." Yeah, that's a great example. I hope everything works out. Yeah. Well, the risk of complication is fairly low, and rationally, I should assume that things will turn out just fine. Well, I have one more for you. Yeah. It's really a set of three models that I think would be useful for developers to internalize.
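Before moving on, the statistical-significance concept Gabriel mentioned earlier can be made concrete. One common calculation behind A/B tests is the two-proportion z-test; this is a simplified sketch of that general technique, not the book's treatment:

```python
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is variant B's conversion
    rate significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two samples to estimate the shared rate under the
    # null hypothesis (no real difference between A and B).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - cdf)
```

With 100 conversions out of 1,000 for A and 150 out of 1,000 for B, the p-value comes out well below the conventional 0.05 threshold, which is the kind of number a team would use to decide the result is unlikely to be noise.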
And you might have talked about this before on an episode, but it's the idea of deliberate practice, which came from Anders Ericsson, a researcher who spent a career studying experts, world-class performers: athletes, intellectuals, musicians of different types, and how they got to be experts. He identified a process, which he calls deliberate practice, as the best way to move up a learning curve on really anything. The process is pretty simple. It involves going to the edge of your competence, right outside your comfort zone, working on a specific skill along the direction you want to improve, and then getting real-time feedback from an expert who can coach or mentor you effectively on what you're doing wrong. It sounds very straightforward, but it's actually pretty hard to do in practice, in part because you're failing a lot, and that's hard to internalize. There are two other models related to that. The first is the Dunning-Kruger effect, studied by the researchers Dunning and Kruger, who graphed how people feel as they move across a learning curve. What they found is that when you start out, you make a lot of progress on the skill almost immediately and you feel really good about it, which is great. But then you overproject your confidence and think you're much more of an expert than you are. And when you realize that you're not, whether that's pointed out to you or you figure it out for some other reason, your confidence plummets and you overcompensate in the negative direction, landing in this trough of real underconfidence.
And that leads to the third mental model, imposter syndrome, where you may feel, especially when you're talking to experts who are farther up the curve, that you're an imposter and you don't belong, even working on the skill at all. But that's not true; you're actually much farther along than the beginners. So this method of deliberate practice is really a great thing if you're trying to improve, but you also have to be wary of these psychological traps so that you don't fall into them. If you're the one practicing the skill, you want to be aware of that. And if you're a mentor on the other side, you want to help people go through this process, but understand that they can fall prey to these other models. Yeah, absolutely. We actually did an episode on imposter syndrome for the senior developer. It's more common than you might expect, and I'm sure you know this. We discussed the idea that a lot of our feeling of progress is like getting in a car and pressing on the accelerator: that initial jolt from a standstill to 10 miles an hour feels like much more progress than holding steady at 60 or 70 miles an hour. For a lot of senior developers, because they're not learning at the pace they used to, it can feel like they've stagnated. But most senior developers are still cruising along at high capacity; they're the ones on cruise control at 60 or 70 miles an hour. And just because they aren't feeling that momentum, or I guess that acceleration, it can seem like things are not progressing at all. Yeah, one survey that we uncovered as part of the research showed that, across a wide variety of industries, about 70% of people felt that they were not progressing.
And that's because they were no longer getting the same amount of new information that they were getting at the start of their careers. That's one of the reasons people felt they had imposter syndrome at at least one point in their career. So it's extremely widespread. The other 30% were probably not telling the truth. Yeah, exactly. It's probably everybody at some point. So these models: you have a lot more in the book, and really getting hold of a wide variety of them is valuable. I would also add that another thing that has been really useful for me is to take models from other domains and other things, like hobbies that I participate in. For example, music. There are a lot of mental models that can come from music. As a quick example, the tonal scale has 13 notes, if you count both the beginning of the octave and the end of the octave, and you can start at any point on that scale and move through those notes with the same, I guess, distance between each note. You can translate, what's called transposing, music from one key to another. There's nothing special about a given key as far as whether you can transpose music from it to another key; mathematically, they all just shift. And so this is a model of thinking: if I can create software that is similar, if I can somehow find a way to modularize what I'm building so that I can shift it from one project to another, it's very similar. It may sound different, and the outcome may be a little different, but that underlying model of transposing, of transposability, is applicable.
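The transposition model translates to code almost literally: shift every note by a fixed interval, wrapping around the octave. A tiny sketch using the 12 distinct pitch classes (the 13th note in Gabriel's count is the octave repeating the first), with sharps only for simplicity:

```python
# The 12 pitch classes of the chromatic scale, sharps only.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody, semitones):
    """Shift every note by the same interval. The relationships
    between the notes (the 'shape' of the melody) are preserved,
    which is why no key is special."""
    return [NOTES[(NOTES.index(n) + semitones) % 12] for n in melody]
```

For example, a C major triad ["C", "E", "G"] transposed up two semitones becomes ["D", "F#", "A"], the same shape in a new key, which is the modularity idea carried over to music.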
So I'd love to know: do you find that these outside practices that we have, hobbies, interests, maybe even cross-industry experience, are useful places to find models? Yeah, absolutely. That's effectively the premise of writing the book, too. A lot of these models, and we covered some but not all of the ones from certain disciplines, come from economics, from chemistry, like catalysts and activation energy, and from physics. We covered critical mass, but there's a bunch of others, inertia and things like that, that are widely applicable. Those are the ones I can easily enumerate because they come from major disciplines. But if you have a good sense of models from your hobbies, and you see how they metaphorically help you, then you can use them the same way. It's helping you because you've internalized music, because you've done it for so long; those models are wired into your brain, so you see things that way, and now you can use that as a shortcut in all these other areas of your life. And that's exactly the point: you don't want to segment all of your knowledge and experience from music into just the music part of your life. You can use those things that you learned and worked so hard on. And from the book, there are models, like the technical debt we talked about at the beginning, that actually come from development and are really useful outside development. I wrote down premature optimization, brute-force algorithms, divide-and-conquer algorithms, and the MVP concept, which I guess is more product, but also applies to development. Those are all very useful outside of the development and product world as well.
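Of the development models just listed, divide and conquer is the easiest to show in a few lines; its simplest code form is binary search, which halves the search space at every step:

```python
def binary_search(sorted_items, target):
    """Classic divide and conquer: repeatedly halve the search
    space until the target is found or the space is empty."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1  # not present
```

The same halving instinct is what makes the model useful outside code: each question you ask should eliminate as much of the remaining possibility space as you can.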
Yeah, I very regularly use divide-and-conquer search for the socks in my drawer. It sounds strange, but it turns out it actually works. Yeah, and sorting algorithms work the same way for finding stuff in my house when I can't find something. For example, if you put things that are similarly sized into the same buckets, you're kind of doing a bucket sort. It's literal, but it turns out that your mind can grasp the size of something a little better than it can grasp other aspects of it, so it's easier to find something if you know where the similarly sized things are. Today's episode is sponsored by GitPrime. Have you ever noticed that the best engineering managers also happen to be the ones who debug problems really well? Part of the reason for this is that engineering managers are using mental models, like what we're talking about in today's episode, to approach problems. It's not just about code; it's about the systems. GitPrime has written and published a book about patterns that you find on successful engineering teams. Go and check it out at gitprime.com/20patterns. That's two zero and then the word patterns. The book is entirely free, and if you go to that link, you can get a physical copy delivered to you as well, also free. Head over to gitprime.com. That's G-I-T-P-R-I-M-E dot com slash 20 patterns. Thanks again to GitPrime for sponsoring today's episode. So, Gabriel, I know we're running up against the end of this episode, and I've enjoyed every moment of it. I do have a couple of questions, and these may open up into larger discussions that maybe we can have another time. But the first one: we've talked about DuckDuckGo a little bit.
I'd love to know: you've been doing this for a little over 10 years now, and if you could go back and give that 2008, or even 2007, pre-DuckDuckGo version of you one quick lecture, piece of advice, or picture of the future, what would you take back? It's interesting. There are probably several answers to that, but let me take it from a couple of different framings. In terms of project success and things like that: at the beginning, and it was just me at the beginning, we really didn't have these kinds of mental models we've been discussing operationalized inside the process of deciding what to work on. And for the first many years, we worked on a lot of stuff that turned out not to be the right direction. Sometimes you've got to do that: you take risks and you run experiments, and sometimes they fail. But we went way beyond that, building whole huge features, and even products, that we could have validated were incorrect, and de-risked, de-risking being another mental model, way earlier. So one piece of advice I'd give is probably the blueprint of how we operate now: those templates and objectives, and those forcing functions to question what we're doing. I think that's probably the single biggest thing I could give, out of anything. Of course, it would be prescient to want to know the future; that's probably the silly answer. Yeah, assuming that giving you the future wouldn't change it. Yeah, exactly.
But I think the real answer is: the future is still uncertain, and we operate, as most developers do, in a very fast-moving technological industry where a lot of things are uncertain, so you want to operate in a way that lets you be very nimble and figure out what's going on through experimentation very quickly. And I think we weren't, or I wasn't, as agile when I was starting as I could have been. Yeah. It's really important to think about these models, and I know at this point we've said the word "models" thousands of times. They really are like a map. It's such an interesting concept, because it's not really a specific map; it's more like navigation skills, if you think about it that way. So, I have two more very quick questions for you. The first one is one I like to ask all my guests: what is one topic of discussion that you wish more people would ask you about? I really don't have a great answer to that. There are other things I'm interested in that I don't get to talk about a lot, but I'm also not, like, the world's expert at them, so I don't know if I deserve to talk about them at this point. But I like to talk about these subjects. Some of the things currently fascinating me are actually a developer topic around evolutionary algorithms, and a policy topic around why things cost so much, called, I think, Baumol's cost disease: the costs of education, health care, and infrastructure, at least in America, have just gone up and up without much to show for it, and no one really knows why. So I'm super interested in that, but you probably shouldn't ask me about those things, because I don't know the answer. Well, talking about a subject, I think you mentioned something kind of interesting: that you don't "deserve" to talk about it. I think one of the things that developers often get wrong
actually relates directly to that: it's the idea that everything you do must necessarily be to some professional end. I know that you don't necessarily agree with that, but I do think that you should have the opportunity to talk about it. Thank you. Well, I definitely get to research these things. They're kind of on the hobby side, and then ultimately they turn into the professional side if I get deep enough into them. Yeah. Well, going back to what we've been discussing this whole episode: you, and others who study models, really have the ability to think about these things thoroughly, to engage almost any topic of discussion and start to get your hands around it. That's a key lesson. It's one that I'd love to underscore. We wrote in the book, and I really believe, that with the power of models, but also just because people are good at learning things, people have a lot of power in their lives. People end up having, especially after they've had a career for a while, a very static view of their abilities. But in reality, you could become an expert, using deliberate practice or other methods, at really anything, if you just spent enough time researching and practicing. So I definitely believe that if I put effort into these topics, I could be back here in a couple of years being an expert at them for you. It's really about putting in the effort. Yeah, and nobody gives out the expert badge anyway, right? Most of the time, "expert" is one of those soft terms that we self-apply, or that ends up being applied, and a lot of it is just about learning and spending time with the subject. Exactly. Well, Gabriel, I have one last question for you, and I think I might be able to predict the answer, but we'll see. If you could give
developers who are listening to this show, regardless of their experience level, just 30 seconds of advice, what would you tell them? I'm curious what you predicted. I think my advice would start with: what is your North Star? Figure out what it is you actually really want to do. We have a lot of people now at DuckDuckGo, and that's a core question we try to determine for people, because some people don't have much ability to choose their projects, but there's often wiggle room in exactly what you work on, and even in what job you choose. If you have that North Star and you know where you want to be, whether that's "I want to be a generalist," or "I want to be a specialist in this subfield," or "I really like working on this type of thing and it makes me happy," then you can really make yourself a lot happier in life. And if you don't have that North Star to answer that question, you can just feel adrift. So my advice is probably that, and it's really not just for developers; it's really for everybody. Yeah, my prediction was that you would say to be deliberate, rather than just trying whatever random thing comes along. Whether it's deliberate practice or deliberate thinking, really deciding is the critical skill, and what you're saying about having a North Star is kind of step one of being deliberate. Exactly. I agree with that. Everything that we try to do, and that I try to do in writing things down here, comes down to that. Another word for it would be being intentional: critically thinking about whatever it is you're doing, really engaging the topic fully. Yeah. Gabriel, this has been
an excellent conversation. Thank you so much. I'd love to know: this book comes out on June 18th, correct, and people can find it on Amazon? You can pre-order it now, I believe. Yes, you can. There's more info at superthinking.com, and if you are not an Amazon fan, there are other ways to pre-order it, but you're welcome to use Amazon as well. Excellent. Thank you so much, Gabriel. Thank you. Thank you so much for listening to today's episode of Developer Tea and my interview with Gabriel Weinberg. Make sure that you go back and listen to part one if you haven't already, and then subscribe if you enjoyed this episode. There are more episodes just like this one coming out soon. We publish three episodes of this podcast a week, so if you don't want to fall behind, go ahead and subscribe, and then listen to the ones that stand out to you. You don't have to listen to them all; it's not a serial kind of show with ongoing storylines. The only time we actually connect one episode to another is when we're doing a series or when we have a guest on the show, so you can definitely listen to one episode at a time. There's no pressure to listen to all of them. Thank you again to GitPrime for sponsoring today's episode. Head over to gitprime.com/20patterns. That's all one word, with the numbers two zero. You'll find a field guide to help you recognize achievement, spot bottlenecks, and debug your development process with data. Thank you so much to Gabriel Weinberg for joining me on today's episode. Go and check out superthinking.com; that's where you can find his brand-new book, which comes out on June 18th of this year. Thank you so much for listening to today's episode. This episode wouldn't be possible without the Spec network. Sarah Jackson is the producer for the show. My name is Jonathan Cutrell, and I'll see
you next time on Developer Tea. And until next time, enjoy your tea.