
Great Reviews and Terrible Tacos - Sharpening Substitute Questions with Counterfactuals

Published 6/18/2025

This episode delves into the use of substitute questions—simpler queries we use to answer more complex ones—and the crucial concept of cohesion between these substitutes and our true objectives. You'll learn how to leverage counterfactual thinking to scrutinize your assumptions and enhance the effectiveness of your decisions. Discover two powerful counterfactual techniques: asking "what else could be true?" to reveal alternative explanations, and employing thought experiments to, for example, precisely define your desires and career aspirations. The discussion offers practical applications, from refining hiring processes by identifying high-cohesion interview criteria to avoiding confirmation bias in debugging. By adopting counterfactual thinking, you can significantly improve your analytical skills, make more informed choices, and build robust strategies.

  • Uncover how cognitively taxing questions lead us to use substitute questions as heuristics, and why understanding the cohesion between these is vital for accurate decision-making.
  • Learn to implement "counterfactual thinking" to rigorously check your heuristics and substitute questions, ensuring they effectively align with your actual goals and underlying evaluations.
  • Discover two key counterfactual techniques: exploring "what else could be true?" to identify alternative explanations for observations, and conducting thought experiments to clarify nuanced personal and professional desires.
  • Explore practical applications of counterfactuals to drastically improve processes like hiring, by challenging low-signal interview criteria (e.g., LeetCode problems) and making more predictive assessments of candidates.
  • Understand how counterfactuals can combat biases like confirmation bias in problem-solving, such as debugging, by prompting you to consider alternative causes and avoid poor pathways of biased logic.
  • Realise the transformative power of counterfactual thinking in refining your thinking process, improving your career trajectory, and enhancing departmental operations by identifying and improving low-cohesion substitutions.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)

We've talked about substitute questions on the show before. The idea of a substitute question is that you'll take a cognitively taxing question and, without realizing you're doing it, replace it with an easier-to-answer question. This question operates as a heuristic, a way of answering something close to, something approximating or pointing towards, the original concern. So, for example, you might ask the question, what is a good restaurant in my area? And then you'll substitute the question, what restaurant has good reviews in my area? Now, for many people these are synonymous questions. They believe that a restaurant that has good reviews is entirely going to be representative of a restaurant that is good. The difficult part is that defining what a good restaurant is, is cognitively taxing. That is, it's very hard to take in all of the variables that might be necessary to define "good" for a given person. Very often we will substitute questions by looking at historical answers to the same question rather than forecasting future answers. In other words, "what do you want out of your career?" may be answered in multiple ways. For example, what have you enjoyed in your career previously? Or another substitute question might be, what have I imagined my career might look like? This imagination, this visioning of your career, very often turns into what you expect from your career. This is true in a lot of our life experiences: we begin to desire or expect the things that we imagine are most likely to happen. There are a lot of reasons for this, our desire for stability, for example, and additionally the pain that we experience when something unexpected occurs. Amazingly, sometimes this pain is felt even when the unexpected thing is a positive thing, something that we otherwise may be able to say, on many subjective measures, was a good occurrence. These substitutions happen all the time, and sometimes we do them consciously as well. For example, we substitute a very difficult or perhaps
impossible-to-answer question, like, will I enjoy this car if I purchase it now? Will I enjoy it? How long might I enjoy it? Another example of this might be, will this candidate that I'm considering hiring do well in their role? These are questions about the future, questions about uncertainty. And so what we do, instead of trying to answer these impossible-to-answer questions, is break them down into various criteria that we hope correlate to an answer. We try to imagine, what kind of thing will I appreciate about a car in the future, and does this car match that kind of thing? What kinds of things predict whether a candidate will be successful? All of these criteria that we are using are various types of substitute questions, or a substitute operation: multiple questions that we're substituting to try to approximate a belief or an assertion about another question. So you could formulate the substitution like this: if you had a question X, you're going to instead answer question Y, because it's much easier to answer. And you're going to say, because of the answer to Y, I believe the answer to question X is something. Since there are a bunch of positive reviews for this restaurant, that's the substitute question, question Y, my belief about question X, which I'm not going to try to answer directly, is that it is a good restaurant. Right? My answers to all of the various criteria for this candidate are X, Y, Z, and therefore, because of those criteria answers, my belief about their potential is that they will do well. Now, of course, there are plenty of ways that this can go wrong. There are plenty of reasons why...
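The X-and-Y substitution can be made concrete with a small simulation. This is a minimal illustrative sketch, not anything from the episode: all numbers and names are made up. It models an unobservable "true quality" (question X) and an observable review score (question Y) that tracks quality only noisily, then measures how well the substitute tracks the target:

```python
import random

random.seed(0)

# Hypothetical model: each restaurant has an unobservable "true quality"
# (question X) and an observable review score (question Y) that tracks
# quality only noisily -- review farming, rating inflation, mood, etc.
def review_score(true_quality):
    noise = random.gauss(0, 1.0)  # everything that isn't quality
    return true_quality + noise

restaurants = [random.gauss(0, 1.0) for _ in range(1000)]
scores = [review_score(q) for q in restaurants]

# "Cohesion" here is modeled as plain correlation between the
# substitute answer and the target answer.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"cohesion between reviews and quality: {correlation(restaurants, scores):.2f}")
```

With noise comparable in size to quality itself, the correlation lands well below 1: good reviews are evidence of a good restaurant, but far from a guarantee, which is exactly the gap the counterfactual check probes.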
Our heuristics are not always tuned perfectly. So I'm going to give you a very simple tool that you can use to check your own heuristics, your own substitute questions, those substitutions, to determine whether your heuristics are actually meaningful or not. In effect, what you want to do is determine how well correlated your substitution is with the thing you're trying to substitute for. That is the fundamental task at hand. You're trying to determine if your substitute question has high cohesion with the question that you really care about. So, let's take our restaurant review example. This restaurant has a bunch of good reviews, and therefore I believe it is a good restaurant. What I want you to do is ask the question, is that necessarily true? If it is necessarily true, that gives you 100% cohesion, and then your substitute question isn't really a substitute question at all. It's more like an evidence question or a measurement question. It has a high cohesion rate because it's not really asking a different question; it's just asking the same question a different way. This is what our brains trick us into believing we are doing with all of these substitute questions. But there is a possibility that it is not necessarily true, that a restaurant with a bunch of good ratings may not necessarily be a very good restaurant. The way you arrive at this is called counterfactual thinking. So, let's do that: look for an alternative explanation. What else could be true? What is another good explanation for why there may be a bunch of good reviews? Perhaps the restaurant rewards people for leaving a good review. Maybe they have hired a bunch of review farmers, people who leave reviews but never even ate at the restaurant.
There's a bunch of possible counterfactuals. As it turns out, there are plenty of opportunities for the sites that host those reviews to try to reduce those counterfactuals. If you've ever seen "verified buyer" reviews, that requires that the person leaving a review has actually bought the item. That's trying to cut down on some of the counterfactuals, to increase the cohesion between the core question, is this product a good product, and the substitute question for that core question, how many positive reviews are there, or what are the star ratings on Amazon or wherever? By increasing that cohesion rate, we address some of the counterfactuals. What do you want in your career? This one's a little bit more complicated. It's a little harder to come up with counterfactuals here, because you may say, I want to continue making money. I want to continue getting better at my job. I want to get another promotion. For all of these, it's hard to ask, well, what else could be true? That's not really how we would do a counterfactual in this situation. Instead, what we would do is play a few thought experiments out. Let's say that your intent is to get a promotion. A thought experiment might be, would you want a promotion if it did not include a pay raise? Most people would say, well, that's not realistic. Thought experiments, fortunately, don't have to be realistic. The whole idea here is to produce some kind of counterfactual thinking. In other words, you are poking at the question and you're saying, you mentioned that you want a promotion. Is that actually the thing you want? Is that precise enough? Is the thing that you want a promotion and a pay raise? If you were to say, okay, I do want a promotion and I do want a pay raise, another thought experiment you might run is, are you willing to work 60 hours a week for a promotion and a pay raise? The answer might be, well, of course not. Nobody would ever make me do that.
Once again, it's a thought experiment. You can explore to find out more, and now you might adjust your assertion: you want a promotion, a pay raise, and a balanced work environment such that you don't have to work more than 40 or 45 hours a week, something like that. Or it may be that through this exploration you realize, you know what? Actually, I don't really care so much about the pay raise. I don't care so much about the promotion. What I really want is to reduce the overall amount of work. What I really care about is the work I do. I'm actually okay with how much I'm getting paid, but I'd like to work less and get paid the same amount. That's a totally different career goal. Through this thought experiment exploration, you may realize that what you really want is to get paid more per hour, not a larger total amount. When you're looking at these assertions and these substitute questions that you ask yourself, counterfactual thinking, either through that "what else might be true?" frame, which is a little easier in that first example about restaurant or product reviews, or through the lens of a thought experiment, is going to provide you insight. Let's talk about hiring, for example. You can imagine the feedback for a good candidate coming in that says, he immediately solved the coding problem, therefore I believe he is a good engineer. She asked great questions, therefore I believe she is a good communicator. These are substitute criteria. You're trying to evaluate whether someone is a good communicator, and you're using a very simplified set of criteria to try to gain signal. If you've been interviewing for very long at all, you've heard that word over and over. Signal, in this case, is some kind of indicator. Again, this is a heuristic, some kind of indicator that tells you another thing about a person.
If you were to simply ask a person, are you a good communicator? they're always going to say yes. That's not a good indicator. Instead, you're looking for signal, some kind of sign that this person is a good communicator. You might use the heuristic: they asked a good question, a thoughtful question, something that wouldn't necessarily come to mind, so they are a thoughtful person, so they are a good communicator. All of these are heuristic assumptions. Even the good-communicator criterion ends up being yet another substitution for, will this person communicate well when it really matters in their job? Let's think about a couple of these interview outcomes that we could apply a counterfactual to. Let's say that you had an interviewee that did poorly on, let's say, a LeetCode-style question. The immediate assumption is, this person is not going to cut it. They don't have the chops that they need to have as an engineer. But one counterfactual, and actually a pretty compelling counterfactual, is that this person has not been practicing LeetCode in quite some time. Instead, they've been working. They've been doing actual engineering work, which very rarely requires the kinds of skills that you use in LeetCode interviews. In an odd way, counterfactual thinking may inform us that people who do exceedingly well at LeetCode interviews are good at, well, LeetCode interviews. There may actually be a negative signal, or perhaps a neutral signal, when somebody is particularly good at LeetCode. Now, it tells us something. It doesn't necessarily tell us only bad or only neutral things. There may also be some good things. There's a signal here that the person is willing to put in effort, for example. There's also a signal here that might suggest the person is able to comprehend complex topics. LeetCode can often be very complex.
And so if they can comprehend the LeetCode complexity, then they probably can also comprehend the complexity of whatever is in your domain model. But the LeetCode measurement itself, and this is kind of interesting, studies have shown that it doesn't really predict future job performance. So if your goal as an interviewer is to answer that core question, how well will this person do in their job? then LeetCode as a criterion is probably a bad substitute question. Success in a LeetCode interview may not be predictive of success on the job at all. So what these counterfactuals do is help you hone in on better substitutes. Now, it would be unrealistic to say you need to answer the core question directly. The only way to answer the question of whether somebody will succeed on the job is to hire them. And the whole purpose of having the criteria that we have in an interviewing process is to perform the substitution in a high-cohesion manner. In other words, we want to find substitute criteria that allow us to test a candidate in advance, in the cheapest and highest-signal way possible. That most likely means getting rid of your LeetCode questions. That's a low-signal, or mixed-signal, way of looking at your candidates. And we can use counterfactuals to drive what kinds of things we should be talking about, what kinds of things we should discuss. If you're looking at a candidate and you're trying to understand why they behaved in a particular way, looking at feedback about the candidate, ask: what else might be true? Is it possible that there's actually more to the story than what is being presented in this feedback about this candidate, that there's another reason why they got this feedback? Now, once you've done the counterfactual, here's a critical part of this, and I don't want you to miss this particular aspect.
There are potentially many, many explanations. It's possible that that person was extremely distracted, right? If somebody is distracted in an interview, they're not going to do as well as they would if they were fully focused. So you should ask yourself next: for my counterfactual, how likely is it that that was the case? Whether you're saying it's 20%, 25% likely, or 75% likely, these ratios, these thresholds, whatever you want to call them, are fairly arbitrary. It's not necessarily the case that you should care about any particular percentage. You may have something that you care about at 0.001%. Think of the justice system. Counterfactuals are very important when you're sitting on a jury, because of the simple phrase "beyond a reasonable doubt." What percentage would we assign to reasonable doubt? If there is some counterfactual that has even a 0.1% likelihood, then there might be what some would call reasonable doubt in that situation. And so the point of this is not to identify a particular percentage, but instead to recognize what percentage you begin to care about. If I have an otherwise qualified candidate for a particular role, and I run some counterfactuals, and I think, okay, this person failed one of many interviews, otherwise they're fairly well qualified, I'm going to look at the interview feedback and try to determine counterfactuals. Is something else possibly true? Maybe a one-in-five or one-in-three chance that something else is true. And I'm looking at all of the information together.
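The likelihood-and-threshold step described above can be sketched as a few lines of Python. This is purely illustrative: the counterfactuals and the probabilities attached to them are hypothetical numbers standing in for your own subjective estimates, and the threshold is the "what percentage do I begin to care about?" judgment call from the episode:

```python
# Hypothetical counterfactuals for "candidate failed one coding interview,"
# each with a rough subjective likelihood (illustrative numbers only).
counterfactuals = {
    "candidate lacks the underlying skill": 0.50,
    "candidate was distracted or nervous": 0.20,
    "question tested memorized puzzles, not job skills": 0.25,
    "interviewer graded inconsistently": 0.05,
}

# The threshold is a judgment call: how likely does an alternative
# explanation have to be before it should change your decision?
threshold = 0.15

# Keep every alternative explanation likely enough to matter.
alternatives = {
    claim: p for claim, p in counterfactuals.items()
    if claim != "candidate lacks the underlying skill" and p >= threshold
}

for claim, p in alternatives.items():
    print(f"worth weighing ({p:.0%}): {claim}")
```

The point of the sketch is the shape of the exercise, not the numbers: writing the alternatives down with even rough likelihoods forces you to notice when the "obvious" explanation is only one of several live possibilities.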
I may make the decision that this interview itself used the wrong substitute questions. We took something to mean one thing when actually the counterfactual is strong enough that we should not have taken it to mean that thing. We need to ask higher-cohesion questions. We need to have closer cohesion between our substitute question, our heuristic, and the core thing we're trying to figure out, the criteria, in this case, for a candidate we're trying to hire. Thanks so much for listening to today's episode of Developer Tea. I hope you will begin to think more critically about your substitute questions, and about the cohesion rate between your substitute question and your target question, your underlying evaluation, the thing that you actually care about versus the cognitively cheaper thing that you're asking. And try to come up with better questions by using counterfactuals. This is a way to improve your hiring process pretty drastically. It's also a way to improve the way that you think about debugging. We can end up falling prey to a bunch of biases around debugging, confirmation bias in particular. We believe that a particular bug is caused by one thing; it turns out it's caused by something totally different. But as we were debugging, we substituted our confidence for a bunch of debugging steps. Instead of debugging properly, we used our preexisting belief to trace down a particular path. It turns out that path was wrong, and so we fell prey to confirmation bias. We could have used counterfactuals in that situation. What else might be true? And how likely is it that that thing might be true?
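The debugging application can also be sketched as a tiny checklist pattern. This is a hypothetical example, not anything from the episode: the suspected cause, the alternative causes, and the cheap checks are all made-up names, and the idea is simply to write down "what else could be true" before committing to the first hypothesis:

```python
# Illustrative debugging checklist: before tracing the first suspected
# cause, list what else could be true, plus a cheap way to rule each
# alternative in or out. All causes and checks here are hypothetical.
suspected = "stale cache entry"

what_else_could_be_true = [
    ("race condition between writers", "add logging around both write paths"),
    ("bad input from upstream service", "replay the failing request payload"),
    ("clock skew on the cache host", "compare host clocks against NTP"),
]

# Turn the alternatives into an explicit plan. Only commit to the
# original hypothesis once the cheap checks are done -- this is the
# counterfactual guard against confirmation bias.
plan = [f"check '{cause}': {test}" for cause, test in what_else_could_be_true]

print(f"suspected cause: {suspected}")
for step in plan:
    print(step)
```

The design choice worth noting is that each counterfactual is paired with a concrete, cheap test; a list of alternatives without falsification steps tends to collapse right back into chasing the original belief.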
And that could help us avoid these poor pathways of biased logic. Thanks so much for listening. I hope this conversation was thought-provoking for you, and that you'll adopt some of these ideas. I hope you've had a great day. Thank you so much for joining us today. We'll see you next time.