What is your plan for today? Where are you going to go and what people are you going to interact with? A lot of the decisions we make become automatic. That's what we're talking about today with Max Hawkins.
In 2018, Linode is joining forces with Developer Tea listeners by offering you $20 of credit - that's 4 months of free service on the 1GB tier. Head over to https://spec.fm/linode and use the code DEVELOPERTEA2018 at checkout.
If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know that you found this episode valuable, I encourage you to join the conversation or start your own on our community platform, Spectrum.chat/specfm/developer-tea
Transcript (Generated by OpenAI Whisper)
I want you to take a moment and dedicate 100% of your brain power to listening to what I have to say. It's kind of hard to do because most likely, if you're listening to this episode of Developer Tea and it's not your first, then you probably have other things going on. You probably have a ritual, some kind of behavior that you've adopted that allows you to put this show on in the background. In a way, you have turned on autopilot. Now, I'm not talking to a specific person, I'm talking to everyone who's listening to this show, because this is something that we share as humans. But let's say that you're giving this episode 100% of your attention. I want to ask you one simple question. What is your plan for today? What are you doing today? Where are you going to go and what people will you interact with? What time will you eat lunch? And what route will you take to go from point A to point B? It can be a remarkably eerie and even unsettling idea that we make a lot of these decisions relatively automatically. We make the same decisions so many times that, again, our brain goes on autopilot and we aren't really fully thinking about what we're doing necessarily. That's one thing that Max Hawkins decided to change about his own life. And because we're so interested in this concept here on Developer Tea, I invited him to be on the show. My name is Jonathan Cutrell and you're listening to Developer Tea. And today's episode is an interview with Max Hawkins. Max was also on Invisibilia. If you haven't listened to that episode, it's probably a good idea to listen to that one in addition to this one because we do reference it quite a few times. My goal on this show is to help driven developers connect to their career purpose. As for Max, he has found a purpose for himself. We won't be giving that away in the intro, so I encourage you to listen to this whole interview. It will be in two parts, so if you don't want to miss out on the second part, go ahead and subscribe in whatever podcasting app you're using right now. Let's get into the interview with Max Hawkins. Welcome to the show, Max. Hey, how's it going? Oh, I'm doing very well. I'm excited to have you on the show. I think most, well, maybe not most, but quite a few people who are listening to this episode are probably coming over from Invisibilia, and they probably searched your name hoping that there was more content out there. I know after I listened to those initial episodes, well, the first episode and the short follow-up episode, I wanted to hear more. And even recently, in fact today, I went and downloaded the randomizer app. That's not the right word for it, but that's the random Facebook event app that you had created. And I'm just really interested in some of the philosophies and the ideas behind what you're doing. So I'm really excited to talk to you about all of those subjects in today's episode. Well, I'm super excited to be here. Thanks so much for the invite. I want to jump in head first and start us out on this path, talking about randomness. Randomness, entropy, whatever word you want to use. What word do you prefer? You use the word random quite a bit when you're talking about this stuff. Is there any kind of science behind what word you choose and how that applies? Yeah, there really does need to be a different word. I just haven't coined the alternate word yet. Randomness is a good one because most people know what it means, but it's also a bad one because it can mean multiple things.
I'm interested in chance, unexpected things coming from a place where we're not in control of the decision, where everything in the set of things to be chosen is as likely as every other. But randomness can also mean things that are unexpected. That's not necessarily what I'm interested in. Sure, that totally makes sense. Random events, or more importantly perhaps, unwanted events occurring, I think a lot of people might think about randomness from that particular kind of perspective. That's not really what you're talking about. I see in my mind a graph or maybe a histogram of events and their likelihood of occurring, or I guess events is not the right thing, but options or pathways that you may take and the likelihood that you will take them, for the average person. This is really the thesis of the work that you do. For the average person, that histogram is going to be very lopsided. There's going to be a very limited set of pathways that are commonly trodden and that they take all the time. I'd love for you to talk about, what is wrong with that common path mentality? Well, I think it's not necessarily wrong. I think a lot of great things come from following the most optimal path to a result. But I think you also lose a lot. There's a trade-off between doing the thing that's most likely to get you the best result and sort of exploring more. I'm curious about leaning into that exploration in the most extreme way. It's like, what is the opposite of optimizing for the best result? It's just saying every choice that you make is as likely as every other. Doing things randomly opens you up to that sort of serendipity. I like the word serendipity for this because very often you kind of refer to your computation, or a computer, in almost collaborative terms. You allow the computer, as we were talking about before we started recording, to define where you will live. It does so using some set of probabilities. In my mind, I'm just thinking of the JavaScript Math.random function. There's some kind of weight that you're applying to all the options and then essentially spinning the wheel. Do you see computers as a collaborative agent in this process? Yeah, that's the way I think about it. The reason why I tend to anthropomorphize the computer is because I think it makes it easier to wrap your head around what's going on. If you start to think about the computer as having intentions or as wanting something, it allows you to focus in more clearly on the patterns that are happening. When I first got started with this whole path, one of the first things that I did was I wrote a program that sent me to a random business in San Francisco. It sent me to a pizza shop in the Mission or a FedEx Kinko's in the Financial District or just some place that was completely randomly chosen. What I tried to do was to assign some sort of meaning to that and see what it did. I found that this combination of choosing randomly, without any of my preference involved and without any reason behind it, and then trying to figure out what the reason could have been, is a really powerful technique for getting your imagination going. Yeah, I couldn't agree more with this. It's such an exciting concept. We've talked on the show quite a bit in the past about perception of time and how people learn and experiencing new things, forcing your brain to start lifting a little bit of a heavier load. The idea being that anytime you experience something new, your brain is essentially doing more work, and this is all theory.
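(For the developers following along: the kind of uniform pick Max is describing maps almost directly onto the Math.random call mentioned above. A minimal sketch, assuming you already have a list of nearby places; the business names and the pickUniform helper are made up for illustration, and a real version would pull candidates from a maps API.)

```typescript
// A minimal sketch of a uniform pick over a fixed set of options,
// in the spirit of the random-business program Max describes.
// The option list here is invented for illustration.

function pickUniform<T>(options: T[]): T {
  if (options.length === 0) {
    throw new Error("need at least one option");
  }
  // Math.random() is uniform on [0, 1), so every index is equally likely.
  const index = Math.floor(Math.random() * options.length);
  return options[index];
}

// Hypothetical example: any nearby business is as likely as any other.
const nearbyBusinesses = [
  "pizza shop in the Mission",
  "copy shop in the Financial District",
  "laundromat on Valencia",
];

console.log(`Today the computer sends you to: ${pickUniform(nearbyBusinesses)}`);
```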
Of course, there's not really a good way to prove it because it's all about perception anyway. But you're experiencing time a little bit differently, because you're having to process those events more thoroughly than if you were to go down, once again, that well-trodden path, the way that you go most commonly. A very simple recommendation that I give people on a regular basis, developers or not, is to take a new way home. That sounds so simple, but you're kind of the extreme version of the take-a-new-way-home concept. Have you found that it has shifted your perception of how time passes, or memories for that matter? Do you have richer memories as a result of doing this? Yeah. I think the reason why I got started with this is because I really like this feeling of novelty or discovering a new thing. There's a way in which it focuses your attention when you encounter something new. It gives you that sort of beginner's mind feeling where, because it's something that you haven't seen before, you don't really know what it is. That means there are more possibilities embedded within it. I think the reason why I got into this is because I was interested in that feeling. The reason why I stuck with it is because I think that it's much more difficult to get that sort of serendipity in the world that we're living in right now. Because our lives have become so optimized. You log on to Amazon and it already knows what you want, right? Because the algorithm has figured out that based on your previous browsing history, this is the thing for you. In some sense, they're right. You look at it, that's the thing that I want. But in another sense, they're not, because there's no space for serendipity. There's no space for discovering something new. I'm interested in reclaiming that a little bit. I have a question in my notes here that walks right down this path, because I've done a little bit of study on recommendation engines and how those things work. I'm very interested in it because it is such a phenomenal thing that we've been able to accomplish. We could open up the discussion towards machine learning or AI or whatever you want to call it, but it ultimately comes down to prediction. I'd love to hear your opinion, maybe, of how we as developers may be able to be more responsible with recommendations, to introduce that serendipity. Because it's kind of a chicken and egg problem, or maybe it's just about choosing the thing you want to optimize. But if we optimize for things that people are likely to enjoy, for example, we end up going to a local maximum for that person. It could be that I've never taken dance classes and I have no intention of doing that, but it could be that I would enjoy that immensely and I just don't know. No algorithm would be able to guess that about me, unless something very strange that I don't understand about myself is true about many others like me. That local maximum seems to be, it's like an echo chamber, it kind of hems us in. I wonder, is there something that we as developers can do, because we really kind of drive some of this stuff, especially developers who have more creative control over the products they're building? Is there a way that we can recommend more serendipitous options for people that don't necessarily break so far away that they alienate people? Or is this just a challenge that you have to accept, that it's going to be uncomfortable?
It's going to be something that is outside of your norm, and the recommendation isn't really about making you happy, it's about making that option new and novel. Well, I do think that as developers we have a big responsibility to be thinking about this sort of stuff, because more and more of these algorithms are deciding what everybody sees and increasingly what everybody does. If Google Maps tells you to take a left, you follow the instructions. It's important to be thinking critically about the assumptions that these sorts of algorithms are making. I believe that the critical oversimplification of the current batch of recommendation algorithms is that it's assuming a fixed set of preferences. It's saying a person is this vector in a vector space, and everything that they're telling us is sort of revealing that hidden latent preference. In my opinion, that's a huge oversimplification, because oftentimes the computer will send me to things that I wouldn't have liked or previously didn't like, but then when I get that suggestion again, my mind changes. I think that everyone has been done a real disservice by this oversimplification. I forgot your original question. Well, you had said that we have recommendation algorithms and the matrices of our preferences have us headed in one vector direction. Right, there's this assumption at the core of a lot of these recommendation algorithms that our preferences are fixed. What you're doing when you're going to YouTube is putting in inputs that are revealing your true set of preferences, who you really are. If you give YouTube enough data, they're going to figure out what that set of numbers is and they can give you just the right videos. In my experience, that's not how it works. Your preferences change. You discover new things that push you in different directions. I think it's really a problem that the recommendation algorithms are pushing us into these bubbles where we're seeing the same things over and over again, because it's assuming that there is one set of things that we want. There was another aspect to your question, which I've forgotten. I think overall that covers the spirit of the question. I think the truth of the matter is that these bubbles are created for the purpose of lowering the dimension of complexity for messaging. In order to reach someone, it's essentially taking what otherwise could be a very highly detailed and high-dimensional set of preferences and reducing it to something that is relatively predictable. That's not to say that these algorithms are not complex. That's not the message that I want to send. In fact, many of them are pretty amazing in how effective they are at, for example, increasing sales or whatever the particular goal is, increasing engagement even. It takes this very human concept of preference and experience and compresses it into information or data, trackable data, quantifiable data, that we can use to then describe a person. I think very often what ends up happening is we tend to start re-describing ourselves through that same lens. You're exactly right. One reason why these algorithms are so successful is that they work in the same direction as business incentives. If you're an advertiser, it's better to have some discrete set of categories to advertise to. Additionally, something that's powerful about these algorithms is that they tend to get people what they want in some sense. There is some way in which, when you get into a YouTube hole or a Facebook filter bubble, you're seeing things which you are reacting to and you do kind of like.
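(To make the "person as a vector in a vector space" oversimplification concrete, here is a toy sketch of dot-product scoring next to a small random-exploration escape hatch. The user vector, item names, and epsilon value are invented for illustration; this is not how any particular recommender is actually implemented.)

```typescript
// A toy sketch of the "person as a fixed vector" model discussed above,
// plus a small random-exploration twist that breaks out of the bubble.
// All vectors and item names are invented for illustration.

type Item = { name: string; features: number[] };

const user: number[] = [0.9, 0.1, 0.0]; // the assumed fixed "latent preference" vector

const catalog: Item[] = [
  { name: "algorithms lecture", features: [1.0, 0.0, 0.0] },
  { name: "history documentary", features: [0.7, 0.3, 0.0] },
  { name: "silly cat video", features: [0.0, 0.2, 0.8] },
];

const dot = (a: number[], b: number[]) =>
  a.reduce((sum, x, i) => sum + x * b[i], 0);

// Pure preference ranking: the same kinds of items keep winning.
function recommend(items: Item[]): Item {
  return items.reduce((best, item) =>
    dot(user, item.features) > dot(user, best.features) ? item : best);
}

// Epsilon-random variant: some fraction of the time, ignore preference entirely.
function recommendWithSerendipity(items: Item[], epsilon = 0.2): Item {
  if (Math.random() < epsilon) {
    return items[Math.floor(Math.random() * items.length)];
  }
  return recommend(items);
}

console.log(recommend(catalog).name);                // always the "safe" pick
console.log(recommendWithSerendipity(catalog).name); // occasionally something else entirely
```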
It's the same sort of relationship as junk food. It's not good for you. It has these detrimental effects. You can argue that these recommendations are doing a similar sort of thing. I think that's true, even for people like me who have highly academic subjects, for example, in the recommendations on my YouTube list, because I go and watch videos about algorithms or about history. Even for the people who believe that their feeds are diverse and represent a cultural world view that is sophisticated or otherwise not junk food, that is their junk food. In other words, maybe it would be good for them to see a silly YouTube video from time to time. Actually, that's the nutritious thing for their media diet. I think the counterintuitive thing that I found is that sometimes, to get out of that bubble and to expand a little bit, you have to go against your preference. Because fundamentally, the filter bubble and all these sorts of algorithmic traps that I think a lot of us are getting caught in are an output of us following our preference, just doing the thing that we want or seeing the thing that we want. But if instead you go against your preference, you design an algorithm that knows nothing about what you want, it's just giving you things randomly or through some other different metric, you don't have that same sort of problem. That's part of why I'm interested in randomness. I think that's a very powerful thing. It's not even necessarily a new idea from a philosophical perspective, that allowing ourselves to just follow our preferences is almost hedonism. The flip side of this is, for example, the Stoic practice of practicing discomfort, like regularly sleeping on the floor when you could sleep in your bed, not necessarily to harden your perspective, or your emotions for that matter, but to more thoroughly understand the full experience of being human. You often can pacify yourself in a way by just following those preferences. So I really want to kind of walk down this path with you regarding comfortability and preference. Why is it that comfortability can be so damaging and, most specifically, at a society level, what does comfortability do to us as a society? What happens to us when we walk down that path of just seeking out our own comfort? Well, I always think about this. I come from a computer science and machine learning background. So I always think about it in terms of searching a space, which is the metaphor that machine learning researchers often use for solving an optimization problem. And oftentimes, the thing that is the most interesting or the best is on the other side of the hill. And you've got to climb up that hill to get to the thing that's better. And so I think that there's a lot of value in being able to sit with something that's a little bit uncomfortable as a way of expanding the set of things that are available to you, the set of options, if that makes sense. When I go out to a restaurant, previously I used to choose a random thing off of the menu. And I found that that was a little bit too difficult to do in the moment, because I'd have to count through to figure out how many menu items there were and then pull out my calculator and choose a random number. And it got to be cumbersome. And so I started doing something that was slightly different, but has a similar kind of effect of removing my choice from the situation. And that is to ask for the least popular item. So at every restaurant I go to, I ask, could I please have the least popular item?
And I have it translated into a bunch of languages, so if I'm somewhere where they don't speak English, I can show the translation. And I just see what comes out. And it's always a surprise. And sometimes it's really awful. I'm trying to make it the worst thing. I was at a Starbucks in Japan and I showed them my translation. They were actually kind of excited about it. They were like, ninki no nai, you know, the least popular. It was so strange, and they ended up making me a Grande iced milk, just milk with ice cubes in it. So it immediately becomes watered down. Yeah, it was really strange. But if I didn't have those sorts of, in some way, negative experiences, I wouldn't be able to discover other really great stuff. And so I think that you have to have some ability to sit through discomfort to find those hidden gems out there. I couldn't agree more. I think it's also interesting to note, and this is more at a meta level, I guess, that just doing that gave you the opportunity to have that story as well. Right? So now you have this memory of this moment in Japan where you drank watered-down Grande milk from a Starbucks. And you wouldn't have that memory. You probably would have had just another latte. Yeah, I think a lot about our daily routines and just how formulaic they are. You know, especially if you're like I was when I was working in a corporate job, and my day was totally determined by my work. You know, I'd wake up at a certain time so I could get there on time. The direction that you go home and the route you take are sort of hyper-optimized to get you to the right place at the right time. And there's really not a lot of variation within that. But if you think about it, there's just a vast number of possibilities for what we could be doing. So you're missing out on a lot when your time is so regimented by routine. Today's episode is sponsored by Linode. Linode has been a long-time sponsor of Developer Tea. They are a huge reason why we have been able to continue doing what we do on this show. So thank you to Linode. If you've been a developer for very long at all, you know that having a Linux server in the cloud is kind of a step one thing. You really do benefit so much by having that tool available at your disposal. But not just any Linux server, because there's plenty of providers out there where you can get a Linux server up and running. Linode provides extra service on top of this. Not only do they have industry-leading pricing for their Linux servers, but they also have 24/7 customer support, and they have tools like Lish. What is Lish? Well, it's a special tool for when, if you're like me, you make mistakes. You've probably locked yourself out of your server accidentally. Lish allows you to recover from scenarios like that. This is one of many tools that Linode provides to developers, because Linode understands developers. Linode is made up of developers. They actually have open source tools that they create for you to interact with their services. Go and check out what Linode has to offer over at spec.fm/linode. If you use the code DEVELOPERTEA2018 at checkout, you're going to get $20 worth of credit. What can you do with $20? Well, as it turns out, Linode's entry-level plan is a $5-a-month plan which gets you a server with 1GB of RAM on their network. And you can get this for 4 months with that $20 worth of credit. Again, head over to spec.fm/linode to get started today.
Thank you again, Linode, for sponsoring today's episode of Developer Tea. I think maybe the average listener doesn't understand just how dedicated to this ideology you really are. I'd love to go through a few things that you have allowed the computer to take control over. There's a quote in the follow-up Invisibilia episode that I pulled out because I thought it was so interesting. You said earlier that you anthropomorphize the computer, and you said, I never let the computer down. I think this is such an interesting concept. Is this a commitment for you that you've kind of made, not necessarily a commitment to the computer, but a commitment to the philosophy? It seems that it is not only a commitment, but it is something that you're actively trying to find new ways of implementing. Even, for example, with the menu, asking for the least popular item, that's kind of a heuristic way of arriving at a random option. But the whole experience of going to a restaurant, you've now kind of given that over to this philosophy. I'd love to know a few other places where randomness is kind of the core concept in how you make choices. But then also, on the opposing side, what things do you believe that randomness has no role in for your life? Yeah. For the past, probably three years now, I've been really obsessed with this idea. And it started with these early experiments I was doing, sending me to random places in San Francisco. And I found that they provided something interesting for me. But I'm a real believer in taking an abstract idea and implementing it in your daily life, because I think you can only get so far from theorizing about something. You have to actually feel what it's like to have that affect you. And so I have been building all these different programs that randomize different aspects of my life, so I can sort of try it out and see how it changes the way that I feel. So I mentioned the random place thing. I have an app called Offbot, which will give you a random restaurant or cafe or business around you. And I use that to determine where I eat and sometimes, you know, what I do for fun. Then I have an app that sends you to a random Facebook event near you. So that sort of gives you a social event in your area to go to, which transports you into another world in some ways. I'm planning to get a random tattoo soon. Previously I was traveling a lot using this random program. It determined what city I lived in. And I would live in, you know, the city that the computer chose for a month and then move on to the next. And that was a really extreme form of randomizing my life. Definitely. I've since decided to live in places for a longer amount of time. And so I'm in New York at the moment, and I'm directing my research in randomness more towards the everyday sort of stuff. I've been getting really into randomized fashion. So I have a bot that searches Amazon for random shirts, and I'm working on replacing every piece of my wardrobe with a random item. Getting a random tattoo. I don't know what else I'm doing. Anything that's random, anything I can randomize, I'm trying it out. Anything I can get my hands on. That's so interesting. And there's so many things that come to mind immediately for me that now I feel like I'm going to go and do this. You know, the most common example of this, I feel like, and this is something that you actually did in the Invisibilia episode, is the shuffle, the playlist, right?
Shuffling a set of songs, and you don't really know what's going to come next. And for that particular example, you used an all-of-Spotify shuffling random playlist generator. Is that something that you still use? Yeah, it's this playlist generator called Bailey Random. And if you type Bailey Random into Spotify, you can still find it. It updates every day at, I think it's midnight Pacific. And it's actually really good today. I listened to it and it had this German opera song, followed by one of those Halloween soundtracks with a cackling pumpkin or something, followed by a Merle Haggard country song. And since it's choosing in a uniform random way from all of Spotify, you get all these weird juxtapositions and you're exposed to things that are pretty outside of what you might listen to normally. Yeah. Yeah, it seems that, you know, with that kind of thing, with randomness as the driving factor, especially with that large of a set of options, you very rarely, if ever, are going to hear the same song twice. Yeah, it's pretty unlikely, I guess. I don't know how big Spotify's library is, but, you know, presumably fairly large. And even if it was just all of the, you know, top 40 hits from the last 20 years, it would take quite a while to get through that. I feel like this is a problem for my probability class. I need to call up my professor. It's probably more likely than you'd expect. That's usually how those things work out. Yeah, that's true. And especially if you listen to quite a bit of music on a daily basis. But the areas that seem to be so set in stone, like, I'm thinking about my exercise routine, for example, and randomizing the various exercises I could be doing or how I'm exercising, or even how long I'm exercising. You know, one thing I actually did during a workout one time, I took a deck of cards. This is a very common workout, by the way. It's not a novel idea. And whatever card came up next, that's the number of reps of whatever exercise I would do. And so you'd work through the whole deck and you'd do various exercises. And there's something kind of energizing and interesting about that that keeps you waiting for the next thing. As simple and scoped as that is, I feel like, you know, if you were to blow that up on a much larger scale with exercise or with anything else, you're kind of constantly looking forward to things. Yeah, I think there's something that's really seductive about novelty. And that's part of why I'm doing what I'm doing. I actually had an exercise generator for a while. And it didn't show me the exercise. It would use text-to-speech to yell out the exercise I was supposed to do. So I'd be, you know, running on a treadmill and it would say, do squats now. And then I'd have to go and do squats. And it was very chaotic and kind of crazy and not super practical, but it was fun. Yeah, I think that there is something that's really alluring about randomness, because you never know what's going to happen and it kind of keeps you on the edge of your seat. As I'm thinking about that, though, I am also recognizing some of the downsides of that, which adds a little bit of nuance to what I was saying before. Because there's another thing that uses a similar mechanism, and that's a slot machine, right? You're getting these random stimuli and that keeps you waiting to see what is next. And obviously a lot of people get very addicted to that and it has a lot of problems.
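(The "probably more likely than you'd expect" hunch about hearing a repeat is the birthday problem. A quick sketch of the calculation, using a placeholder catalog size since Spotify's actual library size isn't stated here.)

```typescript
// The chance of hearing at least one repeat grows surprisingly fast,
// just like shared birthdays in a room. The catalog size below is a
// placeholder; Spotify's real library size isn't given in the conversation.

function probabilityOfRepeat(songsPlayed: number, catalogSize: number): number {
  // P(no repeat) = product over i of (1 - i / N); the repeat probability is its complement.
  let pNoRepeat = 1;
  for (let i = 1; i < songsPlayed; i++) {
    pNoRepeat *= 1 - i / catalogSize;
  }
  return 1 - pNoRepeat;
}

// Even against a 40-million-song catalog (placeholder figure),
// a few years of heavy listening makes a repeat surprisingly plausible.
const catalog = 40_000_000;
for (const played of [1_000, 10_000, 100_000]) {
  const pct = (probabilityOfRepeat(played, catalog) * 100).toFixed(2);
  console.log(`${played} plays: ${pct}% chance of at least one repeat`);
}
```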
And so I think one nuance in the way that I'm thinking about this, and it's something I've been thinking about more recently, is how control has to work into it. Because it's fine for me to follow a random path, because I'm the one that's deciding what the space of randomness is that I'm choosing from. And I can always turn it off. There's a risk, you know, if you let someone else randomize your life, of what direction they're pushing you in. A bit of a tension there. Yeah, I think that actually is completely relevant, because this concept could become very similar to, like you said, a slot machine. It's the same reason why we get addicted to email, because at one point in our career or in our life, we got an email that was filled with great news. Maybe you got the news of a raise in that email, or, you know, maybe you got the first connection with someone who you would eventually date and then eventually marry. And so now your brain connects this periodic reward system. And so the same thing could be true with your random restaurant suggestions. You're hoping and waiting for the next, you know, hidden gem, like you said. And I think that, like you said, that random reward could make us wait and anticipate, perhaps to an unhealthy degree. Well, that's why I'm really interested in this idea of uniform randomness and going against bias. It's complicated, because there's always bias in the world that you're sampling from. But I think that it's really important with these systems that they be verifiably random in some sense, that there's no one pushing their finger on the scales. Because if it's a uniform random sample of a space, then it can inform you about the world around you. You can discover things in your neighborhood that you didn't know were there, that sort of thing. But if it's being tilted in one direction or another, that can be problematic. Yeah, I actually had an idea for this, because I live in Tennessee, and so we have a lot of hiking space around here. I had an idea for an app that would choose, you know, within bounds, a latitude and longitude, and would drop a pin there and then allow Google to tell me how to get there. The idea being that I don't know where all of the good hiking spots are, but filtering nature is kind of antithetical to the experience of being in nature. And I loved that idea. I never actually implemented it, but I feel like it's right in line with the concepts you're talking about, that the true randomness there is that we're talking about the full scale of latitude and longitude. It's not, you know, limited to a list of places that people have identified as, you know, this is an endpoint for you to go to. Instead, it's actually a physical location that may or may not even have a name. Yeah, it'd be great. I think you should do that. Well, I'll share it with you if I do. Maybe do it in an open source kind of way. Yeah. I guess the truth is you could just create a number generator and then give it bounds to, you know, provide you the latitude and longitude back, and then put that directly into Google. Yeah. I mean, there's issues with practicality, and that's always the trade-off, right? It's like, I could have my random number generator just give me a latitude and longitude on the globe and go there, and that would be a uniform random sample of the world.
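(A minimal sketch of that hiking-pin idea, assuming a hand-picked bounding box; the coordinates below are placeholders roughly covering middle Tennessee, and the maps URL format is an assumption rather than something discussed in the interview.)

```typescript
// Sketch of the random-hiking-pin idea as described: pick a uniform point
// inside a bounding box and hand it to Google Maps for directions.
// The bounding box is only a placeholder.

const bounds = {
  minLat: 35.0,
  maxLat: 36.5,
  minLng: -87.5,
  maxLng: -85.5,
};

function randomPin(b: typeof bounds): { lat: number; lng: number } {
  // For a region this small, uniform sampling in lat/lng is a reasonable
  // approximation of uniform-by-area; over very large regions you would
  // sample the latitude via its sine to avoid over-weighting high latitudes.
  const lat = b.minLat + Math.random() * (b.maxLat - b.minLat);
  const lng = b.minLng + Math.random() * (b.maxLng - b.minLng);
  return { lat, lng };
}

const pin = randomPin(bounds);
// A plain maps link; the exact URL format is an assumption, not part of the conversation.
console.log(`https://www.google.com/maps?q=${pin.lat.toFixed(5)},${pin.lng.toFixed(5)}`);
```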
But there's a lot of places that are really hard to get to. And so the sort of compromise that I've made, and I think it's a reasonable one, is to look at what is possible. So I look at flight data and bus routes and all sorts of transit data to figure out what parts of the world are accessible within my budget. And then I'll randomly sample from those places. And yeah, that keeps me a little bit random while still making it so that I don't go bankrupt. Yeah, that makes total sense. And I think that it would be irrational, just at an economic level, to believe that everyone can, you know, fund a fully random way of living, just because there are things that would be, like you're saying, practicality-wise, prohibitively expensive, or dangerous for that matter, beyond a threshold that is reasonable. You know, going to the middle of the ocean, chartering a boat to take you to the middle of the ocean just you alone, that would be incredibly impractical. You know, what's kind of interesting about using randomness in this way, though, is that it actually shows you those limits in a way that you might not know otherwise. It's very rare that I'll go against what the computer tells me, because I feel like I need to follow through just to be true to this idea. But the times when I do, it's often a place where there's a real moral boundary or a safety boundary. I found that following this computer has allowed me to be more thoughtful about what I think is right and wrong. And that was a big choice going into it. And this is something that you talked about in the follow-up episode on Invisibilia as well, correct? Yeah, that was a big theme of the follow-up episode. You know, can you ever go to a random place that is not okay? Right. Yeah. So I think that's a big question for the person who is on the receiving end of that instruction, you know, to determine for themselves. Absolutely. What I like about this randomness method, though, is that it makes those decisions much clearer than they would be otherwise. There's a way in which, when you sample uniformly from a space, it's giving you a mirror of the world that you're sampling from. I found, when I was traveling randomly, I started to notice these patterns in the places that I was sent, which is surprising because, you know, it's random. You shouldn't really be able to detect a pattern. But I found that I was getting sent to places like London or Mumbai, India or Hong Kong that were previously a part of the British Empire. Interesting. Yeah. I started thinking about, like, why is that? And well, of course, it's because I have this price constraint, which is pushing me towards places that are more accessible on the global transit system. And there are all these, you know, old trade routes that underlie the world transit system. So there's a way in which, when you're choosing randomly, you're getting a picture of the space that you're choosing randomly from. Yeah, it's kind of revealing the contours that are often forgotten or missed, but that are actually reflected somewhere, right?
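(As a sketch of that budget-constrained compromise: filter the space down to what's reachable, then sample uniformly from what's left. The cities and fares below are made-up stand-ins for the real flight and bus data Max describes.)

```typescript
// Filter-then-sample: constrain the space first, then choose uniformly within it,
// so no reachable place is favored over another. Fare data here is hard-coded
// and hypothetical; a real version would pull flight and bus fares from transit data.

type Destination = { city: string; cheapestFareUSD: number };

const candidates: Destination[] = [
  { city: "Mumbai", cheapestFareUSD: 620 },
  { city: "Hong Kong", cheapestFareUSD: 580 },
  { city: "London", cheapestFareUSD: 450 },
  { city: "Ulaanbaatar", cheapestFareUSD: 1400 },
];

function pickDestination(places: Destination[], budgetUSD: number): Destination | null {
  // Keep only the places that are actually reachable within the budget...
  const reachable = places.filter((p) => p.cheapestFareUSD <= budgetUSD);
  if (reachable.length === 0) return null;
  // ...then sample uniformly from what's left.
  return reachable[Math.floor(Math.random() * reachable.length)];
}

const next = pickDestination(candidates, 800);
console.log(next ? `Next month: ${next.city}` : "Nothing reachable on this budget");
```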
There's something that is showing you, like you're saying, that mirror. It's almost like a highly detailed image, maybe like an HD photo of it, as cheesy as that sounds, showing you the things that you normally don't see, bringing out those features more clearly, because you're not thinking about, you know, the trade routes on a daily basis. That's not something that's going to cross your mind until you encounter it. Well, I think it's because our perception of the world is very biased by our experience and our preferences. You know, when you go to a place, you see the things that you are familiar with and respond to. And so you're not actually getting an accurate image of how the world is. You miss out on these, you know, big patterns and larger trends, and you don't allow yourself to see things that are right in front of you because they don't fit with your model. And, yeah. In some ways, hopefully choosing randomly allows you to break out of that a little bit, because it puts you in direct contact with things that your preferences might have led you away from. Thank you so much for listening to today's episode of Developer Tea, part one of my interview with Max Hawkins. Hopefully you are as intrigued with Max's story and his tooling and all of the things that he's doing with randomness in his life as I am. The conversation continues, and continues to get better, in the next episode of Developer Tea. So if you don't want to miss out on that, make sure you subscribe in whatever podcasting app you are currently using. And by the way, if you have found these episodes to be valuable and you want to give back, the best way to do that is to go and leave a rating and review in iTunes. Why is iTunes so important? Well, it kind of acts as the central hub for all of podcasting. It's where podcasting got its start, and it's still where most apps actually pull their information from. So by giving us a rating and a review in iTunes, not only are you giving me direct feedback, because I literally read every single review in that list of reviews, but you're also increasing the probability that another developer, just like you, can find the show and get value out of it just like you have. Thank you to those of you who have left ratings and reviews already for the show. Thank you so much for listening. Thank you again to today's sponsor, Linode. Remember, you can get $20 worth of credit by heading over to spec.fm/linode and using the code DEVELOPERTEA2018, all one word, at checkout. Thank you again to Linode for sponsoring today's episode. Thank you so much for listening, and until next time, enjoy your tea.