In part 2 of this interview, we dive deeper into Gabriel's mental models, specifically for engineers. His book, Super Thinking, on which we base the discussion, can be found here: Super Thinking.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know that you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
This is the second part of my interview with Gabriel Weinberg, which we did earlier this year. Another huge thank you to Gabriel for joining me on this show. And thank you for listening to Developer Tea in 2019. My name is Jonathan Cutrell, and you know that I create this show to help driven developers like you find clarity, perspective, and purpose in your careers. 2019 was a huge year for Developer Tea, and for me personally: I started a new job, and my wife and I had our second child. So we're taking a little bit of time to reflect back on 2019 together, and I hope that you take some time to do the same for yourself. Thank you so much for listening. Now let's get straight into this second part of my interview with Gabriel Weinberg. I'd love to know, do you have a way of kind of testing your mappings? When you come across a situation and you're trying to map a decision onto a model, how do you validate that mapping? Yeah, I do, but you know, you'll see if you like the answer or not. Because of the biases that we've talked about, and there are many, including the one that you just mentioned, a predilection for a certain way of looking at things, I think it's very hard to do yourself. And so the main reliance that I use, and have also tried to operationalize at DuckDuckGo, is to have multiple people involved. A lot of these meetings that we're talking about are actually collaborative meetings, right? Where someone has written down what they think is the right thing to do, which may include literally writing down some named mental models as part of the thinking. And then other people are questioning those assumptions. We've taken it so far as to build it into our values. We have three values at DuckDuckGo: one of them is question assumptions, and one of them is validate direction.
And so they're totally built into our processes, and we encourage people to effectively question other people's assumptions, and that can be challenging at times. But that's the way I've found to make things work. Now, in my personal life, outside DuckDuckGo, it happens all the time too. That's generally my wife questioning my assumptions. But I do think you generally need somebody else. I think it's very hard to do alone. So I totally agree with that. That's actually something we've talked about quite a bit on the show. It's one of the things that I believe Ray Dalio talks about in his book Principles: having people that are believable in subjects. So you have a preconceived notion and you check that notion against the people who are most believable in that particular category. In addition to that, I think it's really critical that we check these ideas against a diverse group of people, right? And when I say diverse, I don't just mean people of different backgrounds. I also mean people with diverse experiences and diverse perspectives. The reasoning for that is, if you have a lot in common with another person, not only do you have a lot of those surface-level things in common, like you like the same music or you hang out at the same places, but you may also have the same kind of perspectives, and those will shape your biases. So you end up making similar decisions and similar judgment calls. And if you have a bunch of people in the same room who look the same, act the same, and have similar experiences in life, then they're probably also going to have similar decision making. Yeah, absolutely. I'd put another layer on that as well. So, I told you I totally agree with that; we have a core objective at DuckDuckGo to hire a diverse team, in that diversity-of-thought type of way.
But one other thing I've realized is that even if you have a diverse team, say you have a company objective or a big project that a bunch of developers are working on, and they've been working on it for a while, they can all get into the same mindset, even if they do have a degree of diversity, about whether that was the right decision. And you often need someone outside that group to be the question-assumptions person. So we try to do this in a number of ways, but one thing is we have all the objectives in the company report out weekly on what's going on. And we do that at a project level, too. Anyone can follow any project and objective, and people outside the project or objective are encouraged to ask what might be considered stupid questions, or just share other thoughts that they have. And often those questions from the outsiders are things that really shake things to the core. Not always, but it's often those outsiders who are asking things that the insiders are just too far down a direction to be able to question anymore. So I'm going to read something from Wikipedia that's exactly relevant to this. I assume you're familiar with the concept of a red team. Yes, yes. And that's exactly what this is. It's an independent group that challenges an organization to improve its effectiveness by assuming an adversarial role or point of view. So that's the very formalized Wikipedia version of this, but the idea is useful. I believe it's been used in military groups. It's certainly been used in journalism, where somebody who has not been involved in the actual progress of that reporting will come in and try to tear the story apart before it goes out. Exactly.
And there's a reason why those clichés, you know, fresh pair of eyes and things like that, are true, because it really is a fresh perspective that is required in some of these cases. Yes, it's kind of like you have this local sense of diversity and then a more global or long-running sense of diversity, and both are important. Exactly. So I'd love for you to share, since I know you have a list of these that you think are particularly relevant to developers, another one of those. Perhaps one that's not as intuitive to us. Maybe it's the opposite of what you might intuitively assume. Yeah, I have a couple. You can tell me how counterintuitive they are. So one that I think is, in practice, not very intuitive is the concept of path dependence. What this means is that you make little decisions all the time, and you may not realize that those decisions may have cascading effects that really constrain your behavior further on. For example, in a developer context, that might be a quick choice to use a tool or a library which you didn't fully evaluate as the best tool or library for the job. And then all of a sudden, a month into the project, or sometime later, you're running into trouble. But that library is now so embedded in your code that it would take a lot of effort to strip out, or the tool is embedded in your infrastructure. A canonical example at the company level for developers is, maybe really early on in the company, someone didn't think too hard about what bug reporting software to use. And then all of a sudden we have 5,000 bugs in it and we don't want to switch systems, even though it's a suboptimal system.
And so, with that mental model in mind, you want to check those decisions a little harder and ask, is this going to create a path dependence problem or not? And the opposite model is preserving optionality. If there's a choice where you're not really committing to something fully, that might be the better choice at the moment. Now, that can also have a cost, so you have to weigh that. But yeah, so tell me, was that counterintuitive? Yeah, I think it's not necessarily intuitive that a simple decision today could have cascading effects into the future. On the other hand, you have developers who will spend a lot of time trying to analyze what is the perfect choice, and the second model that you mentioned, preserving optionality, may actually be a better use of their time. To give a concrete example: instead of trying to arduously determine which particular code package you want to use, maybe you make an adapter so that you can switch those out in the future, right? That would be a good use of your time and energy, it will likely pay dividends in the future, and it's a fairly small investment. Exactly. A related model is analysis paralysis, which can happen to developers where they just go way too deep into something that doesn't necessarily matter, where they've already reached diminishing returns on a decision. And that one's probably quite intuitive for a lot of people. I've got two more if you're up for them. Yeah, let's go. So one that I think is very counterintuitive to people: we have a whole chapter on basically the statistics models that you need to know. And we try to stay away from the equations; we don't want to say you need to know all the underlying math.
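The adapter idea mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the interview: the `EmailSender` interface and both provider classes are invented for the example, standing in for any third-party library you might later want to swap out.

```python
from abc import ABC, abstractmethod

class EmailSender(ABC):
    """Our own interface: the rest of the codebase depends only on this."""
    @abstractmethod
    def send(self, to: str, subject: str) -> str: ...

class SmtpSender(EmailSender):
    """Thin adapter around one concrete choice (a real one might wrap smtplib)."""
    def __init__(self, host: str):
        self.host = host

    def send(self, to: str, subject: str) -> str:
        return f"[smtp:{self.host}] {to}: {subject}"

class ApiSender(EmailSender):
    """Swapping providers later just means writing another thin adapter."""
    def send(self, to: str, subject: str) -> str:
        return f"[api] {to}: {subject}"

def notify(sender: EmailSender, user_email: str) -> str:
    # Call sites never mention a concrete provider, preserving optionality.
    return sender.send(user_email, "Welcome!")
```

The small up-front cost is the extra interface; the payoff is that no call site has to change if the underlying package turns out to be the wrong bet.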
But we really think that developers, and everyone, should know what the concept of statistical significance really means and how it's used in something like A/B testing, so that when you're part of a project using those techniques, you can really appreciate the numbers and the decisions that come out of it. I won't get into the full explanation here because that would take a while, but I think that concept is one that people really should take the time to understand. And I think a lot of people, and I see this a lot in our company, especially developers, can get scared of it because it feels very mathy. Maybe they didn't take statistics, or maybe they felt it was too difficult, say, in high school or college. But I truly believe there is a way to understand it that anyone can grasp, and we did try to write that in the book. It's worth taking the time to understand that concept. Yeah, I'm going to share a personal story here because I think it's relevant to this discussion on statistics, and to another really deep-dive discussion on how statistics can relate to developing beliefs. I actually talked to Annie Duke about a similar topic: the idea that we have these beliefs that we develop over time, and our brain typically tries to make those beliefs binary. So we either do or don't believe something; we don't have a continuous scale of belief. And her message to the world is to look at your beliefs more like bets. How much would you bet on that? It kind of breaks your brain out of that binary framing. So the personal story: my wife and I are expecting our second child. Congratulations. Thank you. She recently has had this kind of odd symptom where her hands and her feet are itching. And it's summer, so it's probably allergies or something. It's hot. There are so many things that go on during pregnancy and in her body.
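For readers who want the A/B-testing idea above made concrete, here is a minimal sketch of a two-proportion z-test using only the standard library. This is one common way significance is computed for conversion-rate experiments, not anything prescribed in the book; the sample numbers are made up.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates (A/B test).

    Returns (z, p_value). Uses the pooled-proportion normal
    approximation, which is reasonable for large samples.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability of the standard normal.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 200/2000 conversions for A vs 260/2000 for B.
z, p = two_proportion_z(200, 2000, 260, 2000)
# "Significant at the 5% level" just means p < 0.05 here.
```

The point of the episode stands either way: knowing what the resulting p-value does and does not mean matters more than memorizing the formula.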
And so it wouldn't be surprising if there were some innocuous reasons why her hands and her feet are itching. So we go to the doctor, and of course we've also checked online to see what could be causing this. One of the main things that might cause this, although it's still quite unlikely, is called cholestasis. Cholestasis is essentially an issue that happens both when you're pregnant and when you're not pregnant, but there's a specific kind that happens when you're pregnant. So we have a test done, and we're actually still waiting on the results, and I assume they're going to come back negative for this cholestasis. We were discussing the possible outcomes, and my wife, who has done a little bit of research on cholestasis, says to me, you know, it's really likely that we're going to end up in the NICU. And I said, is it likely, or is it more likely? And this was a moment where we were talking about statistics, but we were experiencing it in a very personal way. There's this idea that we should expect to end up in the NICU, versus: it's a little bit more likely than it was, but it's still incredibly unlikely. Statistically, we still shouldn't believe that we're going to end up there. But because of a lot of factors, which we won't dive into, it's easy to take the "more likely" and replace it with "likely." Yeah, that's a great example. I hope everything worked out. Yeah, the risk of complication is fairly low, and rationally, I should assume that things will turn out just fine. Well, I've got one more for you if you like. It's really a set of three models that I think will be useful for developers to internalize. And you might have talked about this before on an episode.
But it's the idea of deliberate practice, which came from Anders Ericsson, who spent a career studying experts, world-class performers and athletes and intellectuals of different types, musicians, and how they got to be experts. And he identified this process, which he calls deliberate practice, as the best way to move up a learning curve on really anything. The process is pretty simple. It involves going to the edge of your competence, right outside of your comfort zone, working on a specific skill along the direction you want to improve, and then getting real-time feedback from an expert who can effectively coach or mentor you on what you're doing wrong. It sounds very straightforward, but it's actually pretty hard to do in practice, in part because you're failing a lot, and that's kind of hard to internalize. And so the two other models related to that: the first is this thing called the Dunning-Kruger effect, which was studied by the researchers Dunning and Kruger. What they graphed was how people feel as they're moving across this learning curve. And what they discovered is that when you start out, you make a lot of progress on the skill almost immediately, and you feel really good about it, which is great. But then you over-project your confidence in the skill, and you think you're way more of an expert than you are. And then when you realize that you're not, whether that's pointed out to you or you figure it out for some other reason, your confidence plummets, and you way overcompensate in the negative direction. You're in this kind of trough of real under-confidence. And that is the third mental model, called impostor syndrome, where you may feel, especially when you're talking to experts who are farther up the curve, that you're an impostor and you don't belong even working on this kind of skill. But that's not true, obviously. You're actually much farther along than the beginners.
And so this method of deliberate practice is really a great thing if you're trying to improve, but you also have to be really wary of these psychological traps so that you don't fall into them. If you're the one practicing the skill, you want to be aware of that. And if you're a mentor on the other side, you want to help people go through this process, understanding that they can fall prey to these other models. Yeah, absolutely. We actually did an episode on impostor syndrome for the senior developer. It's something that is more common than you might expect, and I'm sure you know this. We discussed the idea that a lot of that feeling works like this: if you imagine getting in a car and pressing on the accelerator, that initial jolt going from standstill to 10 miles an hour is going to feel like you're progressing quite a bit more than if you were steady at 60 or 70 miles an hour. And so for a lot of senior developers, because they're not learning at the pace they used to, it may feel like they've stagnated. But most senior developers are still cruising along at a high capacity. They're the ones on cruise control at 60 or 70 miles an hour. And just because they aren't feeling that momentum, or I guess that acceleration, it can seem like things are not progressing at all. One survey that we uncovered as part of the research showed that, across a wide variety of industries, about 70% of people felt that they had impostor syndrome at at least one point in their career. So it's extremely widespread. The other 30% probably just weren't admitting it. Exactly. Yeah, it's probably everybody at some point. There are a lot more of these models in the book, and it's really worth getting a hold of a wide variety of them.
And I would also add, if you're listening to this episode right now: another thing that has been really useful for me is to take models from other domains, other things like hobbies that I participate in. So for example, music. There are a lot of mental models that can come from music. One of them, as a quick example: the tonal scale has 13 notes, if you count both the beginning of the octave and the end of the octave. You can start at any point on that scale and move through those notes keeping the same, I guess, distance between each note, and you can translate, what's called transposing, music from one key to another. There's nothing special about a given key as far as whether or not you can transpose that music over to another key. They all, kind of mathematically, just shift. And so this is a model for thinking: can I create software that is similar? If I can somehow find a way to modularize what I'm using so that I can shift it from one project to another, it's very similar. Okay, it may sound different, the outcome may be a little bit different, but that underlying model of transposability is applicable. So, I'd love to know, do you find that these outside practices that we have, hobbies, interests, maybe even cross-industry experience, are useful places to find models? Yeah, absolutely. That's effectively the premise, too, of writing the book. A lot of these models, and we covered some, but we didn't cover a lot of the ones from certain disciplines, come from economics and chemistry, like catalysts and activation energy, or other physics ones. We covered critical mass, but there are a bunch of others, inertia and things like that, that are widely applicable. And I think those are the ones I can easily enumerate because they're coming from major disciplines.
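The transposition idea above can be written out as a tiny program, which also shows why "nothing is special about a given key": the operation is just a uniform shift modulo 12. This is an illustrative sketch; the note spelling (sharps only) and the example melody are choices made for the example.

```python
# The 12 distinct pitch classes of the chromatic scale, sharps-only spelling.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody: list[str], semitones: int) -> list[str]:
    """Shift every note by the same interval; the melody's shape is preserved."""
    return [NOTES[(NOTES.index(n) + semitones) % 12] for n in melody]

# Opening of "Mary Had a Little Lamb" in C, moved up a whole step (2 semitones):
transpose(["E", "D", "C", "D", "E", "E", "E"], 2)
# -> ["F#", "E", "D", "E", "F#", "F#", "F#"]
```

The software analogy in the conversation is the same move: if the interesting structure lives in the relationships rather than the absolute values, you can relocate it wholesale.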
But if you're working and you have a good sense of models from your hobbies and you see how they metaphorically help you in another situation, that's exactly the point. And it's helping you because you've internalized music; because you've done it for so long, those models are wired into your brain, right, to see things that way. And so now you can use that as a shortcut for all these other areas of your life. I think that's exactly the point: you can do that, and you don't want to just segment all of your knowledge and experience from music into the music part of your life. You can use those things that you learned and that work so well in music and apply them to code. And music and code, or art and code generally, you know, I remember Paul Graham has that book, Hackers and Painters. I think there's a lot of overlap in those two disciplines in particular. Yeah, and I think the people who are listening to this show, they feel that. They can tell that there is a kind of connection between those two things. And the same is true from your development life to your non-development life. I had written down some of the ones from the book, which I talked about a bit at the beginning, that are actually from development but are really useful outside development. So I wrote down premature optimization, brute force algorithms, divide and conquer algorithms, and the MVP type of concept, which I guess is more product, but can also apply to development. Those are all very useful outside of the development and product world as well. Yeah, I very regularly use divide and conquer search to find socks in my drawer. It's very strange, but it turns out it actually works, like in sorting algorithms. Yeah, that's similar to my sock thing; I use it to find stuff in my house. When I'm trying to find something, if you put, for example, things that are similarly sized into the same buckets, you're kind of doing a bucket sort, right?
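The sock-drawer bucket sort can be sketched in a few lines. This is a playful illustration of the idea rather than anything from the book; the item names and sizes are invented, and `bucket_width` controls how coarse the buckets are.

```python
from collections import defaultdict

def bucket_by_size(items: list[tuple[str, int]], bucket_width: int) -> dict[int, list[str]]:
    """Group items into coarse size buckets, mimicking the sock-drawer trick:
    narrowing to the right bucket first makes the final scan much shorter."""
    buckets: dict[int, list[str]] = defaultdict(list)
    for name, size in items:
        buckets[size // bucket_width].append(name)
    return dict(buckets)

stuff = [("ankle sock", 3), ("crew sock", 5), ("knee sock", 9), ("boot sock", 11)]
bucket_by_size(stuff, 4)
# -> {0: ["ankle sock"], 1: ["crew sock"], 2: ["knee sock", "boot sock"]}
```

As the conversation notes, this works in the physical world because size is a property our eyes can filter on quickly, so collapsing the search space by size first is a genuine win.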
And it's literal, but it turns out that your mind can actually grasp the size of something a little bit better than it can grasp other aspects of it. So it's easier to find something if you know where the similarly sized things are. So Gabriel, I know we're running up against the end of this episode, and I've enjoyed every moment of it. I do have a couple of questions, and these may open up into larger discussions that maybe we can have another time. But the first one: we've talked about DuckDuckGo a little bit, and I'd love to know, you've been doing this for a little over 10 years now. If you could go back to that 2008 or even 2007 pre-DuckDuckGo version of you, and give yourself one quick lecture or piece of advice or picture of the future, what would you take back? It's interesting. There are probably several answers to that, but let me take it from a couple of different framings. In terms of project success and things like that: at the beginning, and it was just me at the beginning, we really didn't have much of these kinds of mental models we've been discussing operationalized inside the process of deciding what to work on. And for the first many years, we worked on a lot of stuff that turned out to not be the right direction. Sometimes you've got to do that, right? You've got to take risks, and you run experiments, and sometimes they fail. But we went way beyond that, building whole huge features and even products that we could have validated were incorrect, and de-risked, that's another mental model, way earlier. So one piece of advice I'd give is probably this: if I could give the blueprint of some of these things, of how we operate now with those templates and objectives and really those forcing functions to question what we're doing, I think that's probably the single biggest thing I could do out of anything.
Of course it would be prescient to want to know the future. That's probably the silly answer. Yeah, assuming that giving you the future wouldn't change it. Yeah, exactly. But I think that's probably the real answer: the future is still uncertain, and we operate, and most developers operate, in a very fast-moving technological industry where a lot of things are uncertain. You want to operate in a way that you can be very nimble and figure out what's going on through experimentation very quickly. And I think we weren't, or I wasn't, as agile when I was starting as I could have been. Yeah, it's really important to think about these models. And I know at this point we've said the word "model" so many times; I'm not sure it's quite the right word. But they really are kind of like a map, right? And it's such an interesting concept because it's not really a specific map. It's more like navigating skills; you can think about it that way. So I have two more very quick questions for you. The first one is one that I like to ask all my guests: what is one topic of discussion that you wish more people would ask you about? I really don't have a great answer to that. There are other things I'm interested in that I don't get to talk about a lot, but I'm also not the world's expert at them, and so I don't know if I deserve to talk about them at this point, but I like to talk about these subjects. Some of the things that are currently fascinating me are a developer topic around evolutionary algorithms, and a policy topic around why things cost so much, called, I think it's Baumol's cost disease, where the cost of education and healthcare and infrastructure, at least in America, has just gone up and up without much to show for it. And no one really knows why.
And so I'm super interested in that, but you probably shouldn't ask me about those things because I don't know the answers. Well, talking about a subject, I think you mentioned something kind of interesting there: that you don't deserve to talk about it. One of the things that I think developers often get wrong actually relates directly to that. It's the idea that everything you do must necessarily be to some professional end. And I know that you don't necessarily agree with that, but I do think that you should have the opportunity to talk about it. Thank you. I definitely want to research it all. I mean, these are kind of on the hobby side, and then ultimately they turn into the professional side if I get deep enough into them, you know? Yeah. Well, I think, going back to what we've been discussing this whole episode, you really have the ability, you and others who study models, to think about these things thoroughly, engage almost any topic of discussion, and start to get your hands around it. That's a key lesson. It's one that I love to underscore, and we wrote it in the book, and I really believe it: with the power of models, but also just the fact that people are good at learning things. I think people end up having, especially after they've had a career for a while, a very static view of their abilities. But in reality, you could really become an expert, using deliberate practice or other things, at really anything, if you just spent enough time researching and practicing. And so I definitely believe that if I put effort into these topics, I could be back here in a couple of years being an expert on them for you. It really is just putting in the effort. Yeah. And nobody gives the expert badge out anyway, right?
So most of the time, "expert" is one of those kind of soft terms that we self-apply, or that ends up being applied to us. And a lot of it is just about learning and spending time with the subject. Exactly. Well, Gabriel, I have one last question for you, and I think I might be able to predict the answer, but we'll see. If you could give developers who are listening to the show, regardless of their experience level, just 30 seconds of advice, what would you tell them? Hmm. I'm curious what you predicted. I think my advice would be to figure out, I mean, I think I'd start with, what is that North Star? Figure out what it is you actually really want to do. We have a lot of people now who work at DuckDuckGo, and that's a core question that we try to determine for people, because, you know, some people don't necessarily have the ability to choose their projects, but there's often wiggle room in exactly what you work on and even what job you choose. And if you have that North Star and you know where you want to be, whether that's, I want to be a generalist, or I want to be a specialist in this subfield, or I really like working on this type of thing and that makes me happy, then you can really make yourself a lot happier in life. And if you don't have that North Star to answer that question, you can just really feel adrift. So my advice is probably that, which really is not just for developers; it's really for everybody. Yeah, my prediction was that you would say to be deliberate, rather than just trying whatever random thing comes along. Whether it's deliberate practice or deliberate thinking, really deciding is the critical skill. And what you're saying about having a North Star, I think, is step one of being deliberate. Yeah, exactly. I mean, I agree with that.
I mean, everything that we try to do, and that I try to do at DuckDuckGo, is, yeah, the word for that would be being intentional, right? And critically thinking about whatever it is you're doing. Really engaging the topic fully. Yeah. Gabriel, this has been an excellent conversation. Thank you so much. I'd love to know, this book comes out on June 18th? Correct. And people can find it on Amazon. You can pre-order it now, I believe. Yes, you can. There's more info at superthinking.com. And if you are not an Amazon fan, there are other ways to pre-order it, but you're welcome to use Amazon as well. Excellent. Thank you so much, Gabriel. Thank you. Thank you again for listening to Developer Tea in 2019. This has been such a great year for the show, and it's been a great year for me personally. I am so grateful to have this audience of people who have listened to the show, some of them for five years. We started this five years ago, coming up on January 5th. That'll be our five-year anniversary, officially. And I can't express enough my gratitude for what this has done for me personally, and for all of the people I have come into contact with as a result of this show. So thank you so much for being the community of people that you are. Thank you so much for listening. If you don't want to miss out on episodes of Developer Tea in 2020, I encourage you to subscribe in whatever podcasting app you're currently using. Thanks so much for listening. My name is Jonathan Cutrell, and until next time, enjoy your tea.