« All Episodes

Interview w/ Max Hawkins (Part 1)

Published 4/30/2018

What is your plan for today? Where are you going to go and what people are you going to interact with? A lot of the decisions we make become automatic. That's what we're talking about today with Max Hawkins.

Max was also on NPR's Invisibilia podcast and we reference his episode a few times in this part 1 interview as well as the part 2 of our conversation, coming out on Wednesday.

Today's episode is sponsored by Linode.

In 2018, Linode is joining forces with Developer Tea listeners by offering you $20 of credit - that's 4 months of FREE service on the 1GB tier! Head over to https://spec.fm/linode and use the code DEVELOPERTEA2018 at checkout.

#### Get in touch

If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know if you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea

Transcript (Generated by OpenAI Whisper)

I want you to take a moment and dedicate 100% of your brainpower to listening to what I have to say. It's kind of hard to do because most likely if you're listening to this episode of Developer Tea and it's not your first, then you probably have other things going on. You probably have a ritual, some kind of behavior that you've adopted that allows you to kind of put this show in the background. In a way, you have turned on autopilot. Now, I'm not talking to a specific person. I'm talking to everyone who's listening to the show because this is something that we share as humans. But let's say that you're giving this episode 100% of your attention. I want to ask you one simple question. What is your plan for today? What are you doing today? Where are you going to go? And, what people will you interact with? What time will you eat lunch? And, what route will you take to go from point A to point B? It can be a remarkably eerie and even unsettling idea that we make a lot of these decisions relatively automatically. We make the same decisions so many times that, again, our brain goes on autopilot and we aren't really fully thinking about what we're doing necessarily. That's one thing that Max Hawkins decided to change about his own life. And because we're so interested in this concept here on Developer Tea, I invited him to be on the show. My name is Jonathan Cutrell and you're listening to Developer Tea. And today's episode is an interview with Max Hawkins. Max was also on Invisibilia. If you haven't listened to that episode, it's probably a good idea to listen to that one in addition to this one because we do reference it quite a few times. My goal on this show is to help driven developers connect to their career purpose. Max has found a purpose for himself. We won't be giving that away in the intro. So I encourage you to listen to this whole interview. It will be in two parts. 
So if you don't want to miss out on the second part, go ahead and subscribe in whatever podcasting app you're using right now. Let's get into the interview with Max Hawkins. Welcome to the show, Max. Hey, how's it going? Oh, I'm doing good. I'm doing very well. I'm excited to have you on the show. I think most, well, maybe not most, but quite a few people who are listening to this episode are probably coming over from Invisibilia and they probably searched your name, hoping that there was more content out there. I know after I listened to those initial episodes, well, the first episode and then the short follow-up episode, that I wanted to hear more. And, you know, I even recently, in fact, today, I went and downloaded the randomizer, the randomizer app, and that's not the right word for it, but the Facebook event random app that you had created and just really interested in kind of some of the philosophies and the ideas behind what you're doing. So I'm really excited to talk to you about all of those subjects in today's episode. Well, I'm super excited to be here. Thanks so much for the invite. I want to jump in kind of head first and kind of start us out on this path to talking about randomness, entropy, you know, whatever word you want to use. What word do you prefer? You use the word random quite a bit when you're talking about this stuff. Is there any kind of science behind what word you choose and how that applies? Yeah, there really does need to be a different word. I just haven't coined the alternate word yet. Randomness is a good one because most people know what it means, but it's also a bad one because it can mean multiple things. I'm interested in chance. You know, unexpected things coming from a place where we're not in control of the decision. Everything in the set of things to be chosen is as likely as every other. 
But randomness can also mean like things that are unexpected, and that's not necessarily what I'm interested in. Sure, yeah, and that totally makes sense. Random events and more importantly, perhaps even unwanted events occurring that I think a lot of people might think about randomness in that particular kind of perspective, and that's not really what you're talking about. I see in my mind kind of a graph or maybe a histogram of, you know, events and their likelihood of occurring, or I guess events is not the right thing, but options, right, or pathways that you may take and their likelihood that you will take them. For the average person, and this is really kind of the thesis of the work that you do, right? The average person, that histogram is going to be very lopsided. There's going to be, you know, a very limited set of pathways that are commonly trodden that they take all the time. And, you know, I'd love for you to talk about, you know, what is wrong with that common path mentality? Well, I think it's not necessarily wrong. I think a lot of great things come from following the most optimal path to a result. And, but I think you also lose a lot. There's a trade-off between doing the thing that's most likely to get you the best result and sort of exploring more. And I'm curious about leaning into that exploration in the most extreme way. It's like, what is the opposite of optimizing for the best result? It's just saying every, every, every choice that you make is as likely as every other. Doing things randomly opens you up to that sort of serendipity. This, this is, I like the word serendipity for this because you very often, you kind of refer to your, your computation, or I guess a computer as almost in collaborative terms, right? So you allow the computer, we were talking before we started recording, you allow the computer to define where you will live. And so it does so using some set of probabilities. 
And, you know, in my mind, I'm just thinking of the JavaScript Math.random function, right? That there's some, some kind of weight that you're applying to all the options and then essentially spinning the wheel. So do you, do you see computers as a collaborative agent in this process? Yeah, that's the way that I think about it. Um, the reason why I tend to anthropomorphize the computer, um, is because I think it, it makes it easier to wrap your head around what's going on. If you start to think about the computer as having intentions or as like wanting something, it allows you to focus in more clearly on the patterns that are happening. I, when I first got started with, with this whole path, uh, one of the first things that I did was I, I wrote a program that sent me to a random, um, business in San Francisco and, uh, it would send me to, you know, a pizza shop in the Mission or a FedEx Kinko's in, in the Financial District or just, you know, some place that, um, was completely randomly chosen. And then what I tried to do was to assign some sort of meaning to that and see what it did. And, uh, I found that like this combination of choosing randomly, without, without any sort of my preference involved and without any, um, reason behind it, and then trying to figure out what the reason could have been, is a really powerful technique for getting your imagination going. Yeah. I, I, I couldn't agree more with this and it's such a, such an exciting concept. We've talked on the show quite a bit in the past about, you know, perception of time and, uh, um, how people learn and, you know, experiencing new things, forcing your brain to start lifting a little bit of a heavier load. 
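Jonathan's "spinning the wheel" description of Math.random can be sketched in a few lines of JavaScript. This is an illustrative sketch only, not Max's actual program; the place names and weights below are invented for the example.

```javascript
// Uniform pick: every option is as likely as every other --
// the kind of chance Max says he's interested in.
function pickUniform(options) {
  return options[Math.floor(Math.random() * options.length)];
}

// Weighted pick ("spinning the wheel"): walk the cumulative weights
// until the random spin lands inside one option's slice.
function pickWeighted(options, weights) {
  const total = weights.reduce((sum, w) => sum + w, 0);
  let spin = Math.random() * total;
  for (let i = 0; i < options.length; i++) {
    spin -= weights[i];
    if (spin < 0) return options[i];
  }
  return options[options.length - 1]; // guard against float rounding
}

const places = ["pizza shop", "FedEx Kinko's", "cafe", "laundromat"];
console.log(pickUniform(places)); // any of the four, equally likely
```

A uniform pick is just the weighted pick with all weights equal; the randomized-life experiments discussed here correspond to the uniform case.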
And the idea being that, you know, anytime you experience something new, you're forced, your, your brain is essentially experiencing, and this is, you know, all theory, of course, there's not really a good way to prove it because it's all about perception anyway, but you're experiencing time a little bit differently because you're having to process those events more thoroughly than if you were to go down that, once again, that, that well-trodden path, uh, the way that you go most commonly. And so a very simple recommendation that I give people on a regular basis, developers or not, is to take a new way home. And that sounds so simple, but you know, you're, you're kind of the extreme version of the take the, take the, the new way home, uh, concept. And so have you found that it kind of has shifted your perception of, you know, how time passes or memories for that, for that matter? Do you have, do you have richer memories as a result of doing this? Yeah. Well, you know, I think the reason why I got started with this is because I, I really liked this feeling of, of novelty or, you know, discovering a new thing. And there is a way in which it focuses your attention when you encounter something new. Um, it, it gives you that sort of beginner's mind feeling where you don't really, because it's, it's something that you haven't seen before. You don't know what it is. And that makes it, there are more possibilities, um, embedded within that. Um, so I think like the, the reason why I got into this is because I was interested in that feeling. And the reason why I stuck with it is because I think that it's much more difficult to get that sort of serendipity in the world that we're living in right now, because our lives have become so optimized. You know, you, you log on to Amazon and it already knows what you want, right? Because the algorithm has figured out that, uh, based on your previous browsing history, this is the thing for you. Um, and in some sense, they're right. 
Like that you look at it, you're like, that's, that's the thing that I want. But in another sense, they're not because it, there's no space for serendipity. There's no space for discovering something new. Yeah. I'm interested in, in reclaiming that a little bit. Yeah. I have, I have a question in my notes here that, that walks right down this path because, uh, I've done a little bit of study on recommendation engines and how those things work. Uh, and, and I'm very interested in it because it is such a phenomenal thing that we've been able to accomplish. And, and, you know, it, it, we could, we could open up the discussion towards, you know, machine learning or AI or whatever you want to call it, but ultimately it comes down to prediction. And so, um, I'd love to know, you know, I'd love to hear your opinion maybe of how we as developers, we may be able to be more responsive, more responsible with recommendations, uh, to introduce that serendipity because it's kind of a chicken and egg problem, or maybe it's just a, you know, choosing the thing you want to optimize. Uh, but by picking things that people are likely to enjoy, for example, uh, we, we end up going to kind of a local maximum for that person. Um, it could be that, you know, I, I've never taken dance classes and I have no intention of doing that, but it could be that I would enjoy, I would enjoy that immensely. And I just don't know. And nothing, no algorithm would be able to guess that about me because there's, you know, unless, unless something very strange that I don't understand about myself is true about many others like me, uh, that local maximum seems to be like an echo chamber that kind of limits us in. 
So I wonder, you know, is, is there something that we as developers can do because we, we really kind of drive some of this stuff, especially for developers who have more creative control over the products they're building, you know, is there a way that we can recommend more serendipitous options for people that don't necessarily break so far away that they alienate people? Uh, or is this just a challenge that you have to accept that it's going to be uncomfortable? It's going to be something that is outside of your norm. And, you know, recommendation isn't really about making you happy. It's about making, making that option new and novel. Well, I do think that, as developers, we have a big responsibility to be thinking about this sort of stuff because, um, more and more, these algorithms are, are deciding what everybody sees and, uh, increasingly what everybody does. You know, if Google Maps tells you to take a left and you, you follow the instructions. And, um, so it's important to be thinking critically about the assumptions that these sorts of algorithms are making. And, um, I, I believe that the critical oversimplification of the current batch of recommendation algorithms is that it's assuming a fixed set of preferences. It's saying like a person is, you know, this vector in a, in a vector space. And, um, everything that we're telling it is sort of revealing that hidden latent preference. In my opinion, that's a huge oversimplification. Um, because oftentimes the computer will send me to things that I, you know, wouldn't have liked or previously didn't like. But then when I get that suggestion again, my mind changes. And, um, I, I think that everyone is done a real disservice by this oversimplification. Um, and I forgot your original question. So, so you said that we have a, a, uh, that, that the, uh, recommendation algorithms and, you know, the matrices of our preferences have us, you know, headed in one vector direction. Yeah. 
So there's this, this assumption, uh, at the core of a lot of these recommendation algorithms that our preferences are fixed, that, um, what you're doing when you're going to YouTube is putting in inputs that are revealing your true set of preferences, who you really are. And, uh, if you give YouTube enough data, they're going to, they're going to figure out what that set of numbers is, and they can give you just the right videos. But in my experience, that's not how it works. I mean, your preferences change. You discover new things that, um, push you in different directions. And I think it's really a problem, um, that the recommendation algorithms are pushing us into these, these bubbles, um, where we're seeing the same things over and over again, because it's, it's assuming that there is one set of things that we want. Yeah. Um, and there's another aspect to your question, which I've kind of forgotten. Um, well, I think overall that, that, that covers kind of the spirit of the question. And I think the truth of the matter is that these, these bubbles are, they're created for the, for the purpose of, uh, you know, lowering the dimension of complexity for messaging, right? So, so in order to reach something, it's, it's essentially taking what otherwise could be a very highly detailed and high dimensional, uh, set of preferences and reducing it to something that is, that is relatively predictable. And, and that's not to say that, you know, these, these, um, algorithms are not complex. Uh, that's not the message that I want to send. Uh, and in fact, many of them are pretty amazing in how effective they are in, for example, increasing sales or, you know, whatever, uh, a particular goal, increasing engagement even. Uh, but, but it takes this very, uh, um, human concept of preference and experience and compresses it into information or, or data, trackable data, uh, quantifiable data that we can use to then describe a person. 
And I think very often, um, what ends up happening is we, we tend to start describing ourselves through that same lens. Well, you're exactly right. And I, I think that, you know, one reason why these algorithms are so successful is that they work in the same direction as business incentives. Um, if you're an advertiser, it's easier, it's better to have some discrete set of categories to advertise to. Um, additionally, these algorithms, something that's powerful about them is that they, they tend to give people what they want in some sense. There is some way in which, like when you get into a YouTube hole or, uh, you know, uh, Facebook filter bubble, you're seeing things which are, you are reacting to, and you do kind of like, but it's, it's the same sort of relationship as, um, like junk food. You know, it feels good to, to eat junk food. But it's not good for you. It sort of has these detrimental effects. And, um, I think that you can argue that these recommendation algorithms are doing a similar sort of thing. And I think that's true. Um, even for people like me who have, you know, highly academic subjects, for example, in my recommendations on my YouTube list, because I go and watch videos about, you know, algorithms or about history or, you know, and so I think even for the people who believe that their feeds are diverse and represent, you know, a, a, a cultural worldview that is sophisticated or otherwise, you know, not junk food, that that is their junk food. Um, in other words, maybe it would be good for them to see, uh, uh, you know, a silly YouTube video from time to time. Um, and, and actually that's the, the nutritious thing, uh, for, for their media diet. Yeah. 
And I, I think the counterintuitive thing that I found is that sometimes to, to get out of that bubble and to expand a little bit, you have to go against your preference because fundamentally this, the filter bubble, um, and all of these sort of algorithmic traps that I think a lot of us are getting caught in are an output of us following our preference, just doing the thing that we want or seeing, seeing the thing that we want. And, um, when instead you go against your preference, you design an algorithm that knows nothing about what you want. And it's just giving you things randomly or through some other different metric. Um, you don't have that same sort of problem. And that's, that's part of why I'm interested in randomness. I think that's a very powerful thing. And it's not, it's not even necessarily, uh, new from a philosophical perspective that allowing ourselves to just go towards our, and it's almost hedonism, right? So, uh, you know, the, the flip side of this is, you know, for example, the, the stoic practice of practicing discomfort, like regularly sleeping on the floor when you could sleep in your bed, uh, uh, just to kind of not necessarily harden, uh, your perspective or your emotions for that matter, but to more thoroughly understand a full experience of being human rather than, you know, if you, you often can pacify yourself in a way, um, by just following those preferences. So, you know, I, I really, I want to kind of walk down this path, uh, with you, uh, regarding comfortability and preference. And why is it that comfortability can be so damaging? And most specifically at a, at a society level, what does comfortability do to us, um, as a society? What, what happens to us when we've, when we walk down that path of just seeking out our own comfort? Well, I always think about, uh, this, I come from a, you know, computer science and machine learning background. 
And so I always think about it, um, in terms of searching a space, which is the metaphor that machine learning researchers often use for solving an optimization problem. And, um, oftentimes, like the thing that, that is, is the most interesting or the best is on the other side of the hill. And you've got to climb up that hill to, to get to the thing that's better. Um, and so I think that there's a lot of value in being able to sit with something that's a little bit uncomfortable as a way of expanding the set of things that are available to you, the set of options, if that makes sense. Um, when I go out to a restaurant, um, previously, I, I used to, uh, choose a random thing off of the menu. And, uh, I found that that was a little bit too difficult to do in the moment because I'd have to like count through to figure out how many menu items there were, and then pull out my calculator and choose a random number. And then it got to be cumbersome. And so I started doing something that was slightly different, but has a similar kind of effect of removing my choice from, uh, the situation. And that is to ask for the least popular item. So at every restaurant I go to, I ask, um, could I please have the least popular item? And, uh, I have it translated into a bunch of languages. So if I'm somewhere, um, where they don't speak English, I can show the translation and I just see what, what comes out. Um, and it's always a surprise. And sometimes it's really awful. Um, I've, I've, I'm trying to think of like the worst thing I got. I was at a Starbucks in Japan and I showed them my translation. They were actually kind of excited about it. They were like, ninki no nai, you know, least popular. It's so strange. And they ended up making me a grande iced milk, just like milk with ice cubes in it. Um, so it immediately becomes watered down. Yeah, it was really strange. 
Um, but, um, if, if I didn't have those sort of, in some way, negative experiences, I wouldn't be able to discover other really great stuff. Um, and so I think that you have to have some ability to sit through discomfort to find those hidden, you know, gems out there. I couldn't agree more. I think it's, it's also interesting to note, and this is more at a meta level, I guess, that just doing that, you know, gives you the opportunity to have that story as well. Right? So now you have this memory of this moment in Japan where you, where you drank watered down grande milk from a, from a Starbucks and you wouldn't have that memory. You probably would have had just another, just another latte. Yeah. I think a lot about our daily routines and just how formulaic they are. You know, especially if you're like when I was working in a, in a corporate job and, uh, my, my day was totally determined by, uh, by my work. You know, I'd wake up at a certain time so I could get there on time. Um, your, the direction that you go home and the route that you take is sort of hyper optimized to get you to the right place at the right time. Um, and there's really not a lot of variation within that, but if you think about it, there's just a vast number of possibilities for what we could be doing. So you're missing out on a lot when you go on autopilot. Today's episode is sponsored by Linode. Linode has been a longtime sponsor of Developer Tea. They're a huge reason why we have been able to continue doing what we do on this show. So thank you to Linode. If you've been a developer for very long at all, you know that having a Linux server in the cloud is kind of a step one thing. You really do benefit so much by having that tool available at your disposal. 
But not just any Linux server because there's plenty of providers out there where you can get a Linux server up and running. But Linode provides extra service on top of this. Not only do they have industry-leading pricing for their Linux servers, but they also have 24-7 customer support. They have tools like Lish. What is Lish? Well, it's a special tool that if you're like me, you also make mistakes. You've probably locked yourself out from your server accidentally. Lish allows you to recover from scenarios like this. This is one of many tools that Linode provides to developers because Linode understands developers. Linode is made up of developers. They actually have open-source tools that they create for you to interact with their services. Go and check out what Linode has to offer over at spec.fm/linode. And if you use the code DEVELOPERTEA2018 at checkout, you're going to get $20 worth of credit. Now, what can you do with $20, you might ask? Well, as it turns out, Linode's entry-level plan is a $5 a month plan, which gets you a server with 1GB of RAM on their network. And you can get this for 4 months with that $20 worth of credit. Again, head over to spec.fm/linode to get started today. Thank you again, Linode, for sponsoring today's episode of Developer Tea. So I think maybe the average listener doesn't understand just how dedicated to this ideology you really are. So I'd love to kind of go through a few things that you have kind of allowed the computer to take control over. There's a quote in the follow-up Invisibilia episode that I pulled out because I thought it was such an interesting... You said earlier that you anthropomorphize the computer, and you said, I never let the computer down. So I think this is such an interesting... an interesting concept. You know, is this a commitment for you that you've kind of made, not necessarily a commitment to the computer, but a commitment to it, to the philosophy? 
It seems that it is not only a commitment, but it is something that you're actively trying to find new ways of implementing, even though, you know, for example, in your menu, asking for the least popular item, that's kind of a heuristic way of arriving at a random option, right? But the whole experience of going to a restaurant, you've now kind of given that over to this philosophy. I'd love to know a few other places where randomness is kind of the core concept in how you make choices. But then also, on the opposing side, what things do you believe randomness has no role in for your life? Yeah, for the past probably three years now, I've been really obsessed with this idea. And it started with these early experiments I was doing, sending me to random places in San Francisco. And I found that they provided something interesting for me. But I'm a real believer in taking an abstract idea and implementing it in your daily life. Because I don't think, I think you can only get so far from theorizing about something. You have to actually feel what it's like to have that affect you. And so, I've been building all these different programs that randomize different aspects of my life so I can sort of try it out and see how it changes the way that I feel. So I mentioned the random place thing. I have an app called AwkBot which will give you a random restaurant or cafe or business around you. And I use that to determine where I eat and sometimes what I do for fun. Let's see, I have an app that sends you to a random Facebook event near you. So that sort of gives you a social event in your area. And then you can go to it, which is a transport to another world in some ways. Let's see what else I'm working on. I'm planning to get a random tattoo soon. So previously I was traveling a lot using this random program. It determined what city I lived in and I would live in the city that the computer chose for a month and then move on to the next. 
And that was a really extreme form of randomizing my life. I've since decided to live in places for a longer amount of time. And so I'm in New York at the moment and I'm directing my research into randomness more towards the everyday sort of stuff. I've been getting really into randomized fashion. So I have a bot that searches Amazon for random shirts and I'm working on replacing every piece of my wardrobe with random shirts. So I'm getting a random tattoo. What else am I doing? Anything that's random, anything I can randomize, I'm trying it out. Anything I can get my hands on. It's so interesting. And there's so many things that come to mind immediately for me that now I feel like I'm going to go and do this. The most common example of this, I feel like, and this is something that you actually did in the Invisibilia episode, is the shuffle, the playlist, right? Shuffling a set of songs and you don't really know what's going to come next. And for that particular example, you used an all-of-Spotify random playlist generator. Is that something that you still use? Yeah. I put together this playlist generator called Daily Random. And if you type Daily Random into Spotify, you can still find it. It updates every day at, I think it's midnight Pacific. And it's actually really good today. I listened to it and it had this like German opera song followed by one of those like Halloween soundtracks with like a cackling pumpkin or something followed by like a Merle Haggard country song. And I think that's something that's really cool. And it's, since it's choosing in a uniform random way from all of Spotify, you get all these weird juxtapositions and you're exposed to things that are pretty outside of what you might listen to normally. Yeah. Yeah. 
It seems that, you know, with that kind of thing, with randomness as the driving factor, especially with that large of a set of options that you very rarely, if ever, are going to hear the same song twice. Yeah. It's pretty, pretty unlikely, I guess. I don't know how big Spotify's library is, but, you know. Presumably fairly large. And even if it was just all of the, you know, top 40 hits from the last 20 years, it would take quite a while to get through that. I feel like this is a problem for my probability class. I need to call up my professor. Yeah. It's probably like more likely than you'd expect. That's usually how those things work out. Yeah. Yeah, that's true. And especially if you listen to quite a bit of music on a daily basis. But the areas that, you know, that seem to be so set in stone, like I'm thinking about my exercise routine, for example, and randomizing the various exercises I could be doing or how I'm exercising or even for how long I'm exercising. You know, one thing I actually did during a workout one time, I took a deck of cards. This is a very common workout, by the way. It's not a novel idea. And whatever card came up next, that's the number of whatever that exercise I would do. And so you'd work through the whole deck and you do various exercises. And there's something kind of energizing and interesting about that that keeps you waiting for the next thing. As simple and scoped as that is, I feel like, you know, if you were to blow that up on a much larger scale with exercise or with anything else, that you're kind of constantly looking forward to things. Yeah, I think there's something that's really seductive about novelty. And that's part of why I'm doing what I'm doing. I actually had a random exercise generator for a while. And it was, it didn't show me the exercise. It would use text-to-speech to yell out the exercise I was supposed to do. So I'd be, you know, running on a treadmill and it'd say, do squats now. 
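Incidentally, Max's hunch above that a repeated song is "more likely than you'd expect" is the birthday problem in disguise. A quick sketch, with a made-up catalog size for illustration:

```javascript
// Probability of hearing at least one repeated song after `plays`
// uniform random draws from a catalog of `catalogSize` songs.
// This is the birthday-problem calculation: multiply out the chance
// that every draw so far has been distinct, then take the complement.
function repeatProbability(catalogSize, plays) {
  let pAllDistinct = 1;
  for (let i = 0; i < plays; i++) {
    pAllDistinct *= (catalogSize - i) / catalogSize;
  }
  return 1 - pAllDistinct;
}

// Classic birthday case: 23 people, 365 days -> just over 50%.
console.log(repeatProbability(365, 23).toFixed(3)); // ~0.507

// With a hypothetical 50-million-song catalog, around 8,300 random
// plays already give roughly even odds of some repeat.
console.log(repeatProbability(50_000_000, 8300).toFixed(2));
```

The rule of thumb is that repeats become likely after roughly the square root of the catalog size, which is why the answer is so much smaller than intuition suggests.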
And then I'd have to go and do squats. And it was very chaotic and kind of crazy and not super practical. But it was fun. Yeah, I think that there is something that's really alluring about randomness because you never know what's going to happen and it kind of keeps you on the edge of your seat. As you're thinking about that, though, I am also recognizing some of the downsides of that, which adds a little bit of nuance to what I was saying before. Because there's another thing that has a similar mechanism, and that's a slot machine, right? It's, you're getting these random stimuli and that keeps you waiting to see what is next. And obviously, a lot of people get very addicted to that and it has a lot of problems. And so, I think like one nuance in the way that I'm thinking about this and it's something I've been thinking about more recently is how control has to work into it. Because it's fine for me to follow a random path because I'm the one that's deciding what the space of randomness is that I'm choosing from. And I can always turn it off. There's a risk, you know, if you let someone else randomize your life, you know, what direction are they pushing you in? Bit of a tangent there. Yeah, no, that actually is completely relevant because I think, you know, this concept could become very similar. Like you said, the slot machine, it's the same reason why we get addicted to email because at one point in our career or in our life, we got an email that was filled with great news. Maybe you got the news of a raise in that email or, you know, maybe you got the first connection with someone you would eventually date and then marry. And so now, your brain connects this periodic reward system and so the same thing could be true with your random restaurant suggestions. You're hoping and waiting for the next, you know, hidden gem, like you said. 
And I think that can be, like you said, that random reward would make us wait and anticipate perhaps to an unhealthy degree. Well, that's why I'm really interested in this idea of uniform randomness and going against bias. Because it's complicated, because there's always bias in the world. There's always bias in the world that you're sampling from. But I think that it's really important with these systems that they be verifiably random in some sense, that there's no one pushing their finger on the scales. Because if it's a uniform random sample of a space, then it can inform you about the world around you. You can discover things in your neighborhood that you didn't know were there, you know, that sort of thing. But if it's being tilted in one direction or another, that can be problematic. Yeah, I actually had an idea for, because I live in Tennessee and so we have a lot of hiking space around here. I had an idea for an app that would choose a latitude and longitude, you know, within bounds, and would drop a pin there and then allow Google to tell me how to get there. And so the idea being that I don't know where all of the good hiking spots are, so it's not a spot that the filters would take you to, but instead it's an actual physical location that may or may not even have a name. Yeah, that'd be great. I think you should do that. Well, I'll share it with you if I do. Maybe do it in an open source kind of way. Yeah. I guess the truth is you could just create a random number generator and then give it bounds to, you know, provide you the latitude and longitude back, and then put that directly into Google. Yeah. I mean, there are issues with practicality and that's always the trade-off, right? It's like, I could have my random number generator just give me a latitude and longitude on the globe and go there, and that would be a uniform random sample of the world. But there's a lot of places that are really hard to get to.
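That random-hike idea really is just a random number generator plus bounds plus Google. A minimal sketch; the bounding box here is a made-up rectangle roughly around middle Tennessee, and the Google Maps directions URL is one way to hand the point off:

```python
import random

def random_pin(lat_min, lat_max, lng_min, lng_max):
    """Drop a uniform random pin inside a lat/lng bounding box.

    Note: uniform in lat/lng is not perfectly uniform by land area
    (longitude lines converge toward the poles), but over a region
    the size of a county it's close enough for picking a hike.
    """
    lat = random.uniform(lat_min, lat_max)
    lng = random.uniform(lng_min, lng_max)
    return lat, lng

# Hypothetical bounds, roughly around Nashville, TN
lat, lng = random_pin(35.8, 36.4, -87.2, -86.3)

# Hand the point to Google Maps for directions
url = f"https://www.google.com/maps/dir/?api=1&destination={lat:.5f},{lng:.5f}"
print(url)
```

The pin may well land somewhere with no trail and no name, which is rather the point of the exercise.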
And so the sort of compromise that I've made, and I think it's a reasonable one, is to look at what is possible. So I look at flight data and bus routes and all sorts of transit data to figure out what parts of the world are accessible within my budget, and then I'll randomly sample from those places. And that gets me a little bit of randomness while still making it so that I don't go bankrupt. Yeah. That makes total sense. And I think that it would be irrational, just at an economic level, to believe that everyone can, you know, fund a fully random way of living. Just because, like you're saying, practicality-wise, there are things that would be prohibitively expensive, or dangerous for that matter, beyond a threshold that is reasonable. You know, chartering a boat to take you to the middle of the ocean, just you alone, that would be incredibly impractical. One thing that's kind of interesting about using randomness in this way, though, is that it actually shows you those limits in a way that you might not know otherwise. It's very rare that I'll go against what the computer tells me, because I feel like I need to follow through just to be true to this idea. But the times when I do, it's often a place where there's a real moral boundary or a safety boundary. And I found that following this computer has allowed me to be more thoughtful about what I think is right and wrong. And that was a big surprise going into it. And this is something that you talked about in the follow-up episode on Invisibilia as well, correct? Yeah, that was a big theme of the follow-up episode. You know, can you ever go to a random place that is not okay? Right. Yeah. It is totally a question for the person
who is on the receiving end of that instruction, you know, to determine for themselves. Absolutely. What I like about this randomness method, though, is that it makes those decisions much clearer than they would be otherwise. There's a way in which, when you sample uniformly from a space, it's giving you a mirror of the world that you're sampling from. I found when I was traveling randomly, I started to notice these patterns in the places that I was sent, which is surprising because it's, you know, it's random. You shouldn't really be able to detect a pattern. But I found that I was getting sent to places like London or Mumbai, India, or Hong Kong that were previously a part of the British Empire. Yeah, I started thinking about, like, why is that? And, well, of course, it's because I have this price constraint, which is pushing me towards places that are more accessible on the global transit system. And there are all these, you know, old trade routes that make up the world transit system. So there's a way in which, when you're choosing randomly, you're getting a picture of the space that you're choosing randomly from. Yeah, it's kind of revealing the contours that are often forgotten or missed, that are actually reflected somewhere, right? There's something that is showing you, like you're saying, that mirror. It's almost like a highly detailed image, maybe like an HDR image, as cheesy as that sounds, showing you the things that you normally don't see, but bringing out those features more clearly. Because you're not thinking about, you know, the trade routes on a daily basis. That's not something that's going to cross your mind until you encounter it. Well, I think it's because, you know, our perception of the world is very biased by our experience and our preferences.
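The compromise Max described a bit earlier, restrict the sample space to what's reachable within budget and then choose uniformly within it, is simple to sketch. The destinations and fares below are invented placeholders; in his setup the candidates would come from real flight and transit data:

```python
import random

# Hypothetical (destination, cheapest_fare_usd) pairs; in practice
# these would be derived from flight, bus, and transit data.
candidates = [
    ("London", 450),
    ("Mumbai", 700),
    ("Hong Kong", 650),
    ("Reykjavik", 300),
    ("Middle of the Pacific", 25_000),  # a place, but not a practical one
]

def pick_destination(options, budget):
    """Uniformly sample from the destinations reachable within budget.

    Every affordable option gets equal probability -- no preference
    weighting -- which keeps the choice verifiably random within
    the constrained space.
    """
    reachable = [place for place, fare in options if fare <= budget]
    return random.choice(reachable)

print(pick_destination(candidates, budget=800))
```

Notice the bias Max mentions is still there: the budget filter itself encodes the structure of the transit system, which is exactly why the old trade routes show through.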
You know, when you go to a place, you see the things that you are familiar with and respond to, and so you're not actually getting an accurate image of how the world is. You miss out on these, you know, big patterns and larger trends, and you don't allow yourself to see things that are right in front of you because they don't fit with your model. And in some ways, hopefully, you know, choosing randomly allows you to break out of that a little bit, because it puts you in direct contact with things that your preferences might have led you away from. Thank you so much for listening to today's episode of Developer Tea, part one of my interview with Max Hawkins. Hopefully, you are as intrigued by Max's story and his tooling and all of the things that he's doing with randomness in his life as I am. And the conversation continues, and continues to get better, in the next episode of Developer Tea. So, if you don't want to miss out on that, make sure you subscribe on whatever podcasting app you are currently using. And by the way, if you have found these episodes to be valuable and you want to give back, the best way to do that is to go and leave a rating and review in iTunes. Why is iTunes so important? Well, it kind of acts as the central hub for all of podcasting. It's where podcasting got its start, and it's still where most apps actually pull their information from. So, by giving us a rating and a review in iTunes, not only are you giving me direct feedback, because I literally read every single review in that list, but you're also increasing the probability that another developer just like you can find the show and get value out of it just like you have. Thank you to those of you who have left ratings and reviews already for the show. Thank you so much for listening. Thank you again to today's sponsor, Linode.
Remember, you can get $20 worth of credit by heading over to spec.fm slash linode and using the code developertea2018 all one word at checkout. Thank you again to Linode for sponsoring today's episode. Thank you so much for listening and until next time, enjoy your tea.