
Meta Models - Logarithmic Returns

Published 4/2/2025

This episode introduces a valuable meta-tool for understanding the generic shapes of models, focusing specifically on the concept of logarithmic relationships and how they manifest as diminishing returns in various aspects of our lives and work. Understanding these patterns can help us make more informed decisions about where to invest our time and resources.

  • Uncover a meta-tool for understanding generic model shapes, specifically focusing on the concept of logarithmic relationships, which operates at a layer above specific mental models.
  • Learn about logarithmic complexity as a concept often encountered in algorithmic analysis and graphing math, characterised by a curve where the slope continuously decreases.
  • Discover how diminishing returns serve as a colloquial way to understand logarithmic relationships, where each unit of input effort yields progressively smaller returns in value or output.
  • Explore examples of where diminishing returns are evident, such as increasing the reliability of a system through quality improvements, estimation efforts, and the value gained from time spent in meetings.
  • Understand how learning processes often follow a logarithmic curve, with rapid initial gains that gradually diminish with experience.
  • Grasp the connection between logarithmic returns and the Pareto principle (80/20 rule), where a small percentage of effort often produces a large percentage of the value.
  • Recognise the importance of identifying the threshold on a logarithmic curve where the returns on further investment become minimal, aiding in more effective resource allocation.
  • Consider how our natural perception might not align with logarithmic realities, potentially leading us to overvalue continued effort beyond the point of significant return.
  • Learn how understanding these fundamental input-output relationships can empower you to make better decisions about where to focus your time, effort, and resources.
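The diminishing-returns and 80/20 ideas in the list above can be sketched numerically. This is a minimal illustration, not anything from the episode itself: the particular curve in `value()`, the scale of 100 effort units, and the 80 percent threshold are all made-up assumptions chosen to make the shape visible.

```python
import math

def value(effort, scale=100.0):
    """A hypothetical logarithmic value curve, normalised so value(scale) == 1.0.

    The choice of log1p and the scale of 100 are illustrative assumptions,
    not a model from the episode.
    """
    return math.log1p(effort) / math.log1p(scale)

# The marginal gain from each successive unit of effort shrinks:
gain_early = value(5) - value(4)    # an early unit of effort
gain_late = value(50) - value(49)   # a late unit of effort
assert gain_early > gain_late

# Find the smallest effort that crosses a threshold we care about,
# here 80% of the maximum value.
threshold = 0.8
effort = 0
while value(effort) < threshold:
    effort += 1
print(effort)  # → 40: the remaining ~60 units of effort buy only the last 20% of value
```

With this particular curve, 80 percent of the value arrives by roughly 40 percent of the effort; a steeper curve would concentrate the value even further toward the start, closer to the classic 80/20 split.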
Transcript (Generated by OpenAI Whisper)

In today's episode I want to give you a tool that you can use. It's actually kind of a meta-tool: most of the time on this podcast we talk about specific things, like mental models and specific tools that are directly applicable. In this episode I'm going to teach you a layer above that, a generic shape for models that you may encounter. If you've done any kind of work in, for example, algorithmic analysis, then you probably have an idea of the concept we're going to talk about today, and really any kind of graphing math should give you this concept as well. But the idea as it's covered in algorithms is probably the most directly applicable, and that is the idea of logarithmic complexity. More specifically, I want to talk about logarithmic relationships. In your algorithms class you may have talked about big O notation, where this shows up as O(log n), and the idea is that the time a particular operation takes grows only logarithmically as the input grows. If you don't know what a logarithmic curve looks like, it's probably best for you to google it, but essentially: if you were to draw a straight line from the bottom left of a graph to the top right, the logarithmic line would sit entirely below it. It starts out at roughly that same slope and then curves off. The specifics are less important than the relationship: the slope is at its greatest on the far left of the graph, and it continuously decreases the further you go to the right.

Interestingly, the logarithmic function has some properties that mirror an exponential function; for example, it flattens out so much that for practical purposes it behaves as if it were approaching a limit. I want to talk in today's episode about some of the things that might fit a logarithmic function, and what you should be thinking about is that the x-axis is not just time or iterations but some other variable. The colloquial model to think about here, the trigger term you can look for, is diminishing returns. What does that mean? It means that for every unit of input effort, you receive some amount of returned value, and that return shrinks as you go. Let's say that your effort is sales calls, and the returned value on your calls is answers. We could ask whether there is some kind of logarithmic limit to the return on sales calls, and for most purposes that would be unlikely to be true. The reason is that the number of calls that get answered is not correlated with how many calls you've already made. Call number five is probably about as valuable as call number 50, and call number 50 is about as valuable as call number 500, if of course you're counting value as the number of people who answer. So in this system, the likelihood of this model fitting is very low.

What is a model that does have diminishing returns? One good example is the reliability of a given system. Given a specific system architecture, the likelihood that you are going to be able to increase the reliability of that singular system through improvements in quality (let's say you go bug hunting, you increase your test coverage, you pressure-test the system) is logarithmic. In other words, the more you put into it, the slimmer and slimmer the gains are at the top end. The reason for this is fairly simple: in the earliest parts of that effort you're going to find low-hanging fruit. There are a lot more potential bugs to find at the start, and it takes more effort later because the system has improved and therefore the likelihood of a remaining bug is much lower.

Another good example is any kind of estimation effort. We talk about estimation on the show probably too much at this point; it's so much of our jobs to try to figure out what's going to happen in the future. But we face diminishing returns when it comes to estimation, because at some point, determining all possible futures becomes an exhaustive exercise where you're having to play out every one of them. Eventually you get to the point where doing the work is actually cheaper than trying to predict the work. The truth is, we rarely need to go beyond these limits; we rarely need a truly 100 percent or even 99 percent accurate estimate. And this is the trick, and probably the most important aspect of these types of models: knowing where that diminishing-return curve actually crosses some threshold that you care about. This is the fundamental idea behind the Pareto principle, or 80/20 if you've heard of it. The idea is that 80 percent of the value comes from 20 percent of the effort. Think about what that means against that logarithmic curve: the first 20 percent of effort has high value, and the next 80 percent produces much less. You could imagine that the first five percent probably produces more than the next five percent, and you could also imagine that going up to, let's say, 30 percent effort may produce as much as 85 or 90 percent of the value, depending on how that curve shakes out. And that's the important part of this model: understanding where to stop, or how far to go before those diminishing returns actually kick in. Very often meetings follow a similar logarithmic curve; the amount of time spent in a given meeting likely produces diminishing value.

Many of our learning processes also have a logarithmic shape to them. For example, let's say that you are new to hiring and these are the first couple of interviews you've ever done; you're probably making a high rate of mistakes. Over time, as you gain experience, your mistakes will lessen, more and more, but you'll never get quite to zero. The quality, then, is what's following this logarithmic curve: it starts out relatively low, and you quickly gain experience and learn a lot in those first handful of interviews. But once you get to interview number, let's say, 200, you've probably only learned a marginal amount beyond what you learned in interview 199, or even 150. So there are diminishing learning returns, and that's true in most situations where you're learning by experience; the curve of your learning is likely going to have a logarithmic shape.

Why is this important? Well, if we can understand the relationship between different inputs and outputs (when you think about mathematical mental models, this is a fundamental mental model, which is kind of a tongue twister), we can start to make better decisions about where to put our time. For example, you may imagine that something is logarithmic, but it turns out that it's polynomial; if you want a good example of this, google the Dunning-Kruger curve. We don't naturally think in these curves very often. It's possible that logarithmic is slightly more natural to us because we encounter it so often in our lives, but many times we behave as if the return on investment in a logarithmic situation is linear. And sometimes we even behave as if finishing those last few things has exponential value. There are a bunch of different cognitive distortions that can come from our perceived value of a given investment. But if we can set out and understand, especially when we're investing large amounts or when we have some very important input-output relationship, the base model that we expect something to follow, then we can be a little bit more sensitive to when the return, that output, that y value, gets to some threshold that we care about.

Thanks so much for listening to Developer Tea. I hope you enjoyed this episode, and I hope you will consider these models as you go forward, especially when you're investing a large amount of time or money. Try to find this specific logarithmic model in your day-to-day life; I think you'll be surprised at how often you see it, and how often it can clarify how to better spend your time, your effort, and your resources. Thanks so much for listening. And until next time, enjoy your tea.