Jonas Downey is the design lead at Basecamp. In this interview we discuss the ethics of designing and building constraints into your product that change human behavior.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com/contact.
If you would like to join the new experimental Discord group, reach out at developertea.com/contact, developertea@gmail.com, or @developertea on Twitter.
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
So that's the problem that Twitter and Facebook find themselves in. They have an existing system. It has all these rules and already has all these constraints and design decisions in it. And you can't walk those back because that's what they have. So all they can do is sort of like bump it a little bit in one direction or another, but it's not enough. Like it's not enough to really fix the root problems that came out of it. Thankfully now we've learned that. And like if someone sets out to make Facebook 2 or whatever, maybe they would take all that into account and build something that's healthier. That was the voice of Jonas Downey on the last episode of Developer Tea. And if you missed out on the first part of this interview, I encourage you to go back and listen to it. It will add a lot of good context for the second part. Jonas is the design lead at Basecamp. If you haven't ever heard of Basecamp, then just Google Basecamp. I guarantee you will find out a lot. We talk about Basecamp, we talk about Hey, we talk about ethical design and all of the problems that we're facing on a very large scale. It's a very broad, sweeping discussion that I've had with Jonas. And it was incredibly interesting and empowering. I encourage you to go back and listen to that first part. And of course, if you enjoy today's episode, I encourage you to subscribe in whatever podcasting app you currently use. And if you're interested in talking with other engineers like you who are driven, who are looking to become better at being engineers and at being people, I encourage you also to reach out to me for an invite to the Developer Tea Discord community. Right now, I'm sending those invites directly to people who ask for them because we want to make sure that the people who are coming to this community are driven enough to reach out and ask for that. So you can reach me by emailing developertea@gmail.com.
You can also reach me on Twitter at @developertea or my personal account at @jcutrell. Let's get straight into the interview with Jonas Downey. Yeah, so what is so interesting to me about this is that it's kind of a philosophical and psychological question of why is it that if you don't put these boundaries on, things kind of naturally go down the drain. I don't know if you saw that AI-generated tweet bot or something that Google released that just very quickly devolved, right? Yeah. It quickly became just not good. We'll leave it at that. And so I wonder, why is it that things don't trend in a positive direction by nature? And I'm not sure that I have an answer to that. I wonder if there's a theoretical answer here, which is that the levers that give us money also happen to be the levers that cause fear or cause sensationalism, the things that our brains pay a lot of attention to. The attention economy doesn't really care what kind of attention it is. And it just so happens that we've evolved to be very fear-oriented for our survival. And so we've kind of figured this out in a time where we don't really need that fear survival mechanism, and we're using it as a tool to make money from other people. That's one theory, but I'm interested in any theory that you might have. Well, that's definitely a large part of it. Whether or not Facebook or Twitter and these platforms set out to grow the way they did is an open question, but they certainly did focus on using psychological, manipulative tactics in order to increase views, increase revenue, increase usage across the products. And they continue to do that to this day. That has been their sort of driving thing more than what they proclaim, which is like, oh, we make a platform that unifies the world and communication, all this stuff.
Okay, yes you do, but also the way you're doing that is by trying to keep people glued onto these products as much as you can because that's your business model. And so that kind of goes back to what you were saying before, like the root of how you start influences all the decisions you make after that. If you start with that business model, you're going to have to make those calls that are probably not as good for people. And it's the same thing that like a healthy Facebook probably would never have grown to have like three billion people on it, because like it just shouldn't. Like if someone had started that in a different way with a respectful business model, where you had to maybe pay to use Facebook or your data wasn't sold to advertisers or whatever, the economics would be different and it would be almost impossible to gather that large of a user base, which in turn would make it so you would have less of a likelihood of like affecting democracy in places across the world. So you can kind of see the root of it is that business model thing, that like that is sort of the toxic core. And you can do lots of things on top of the toxic core to make it a little less toxic, but there's still going to be kind of like sewage spewing out. And I think that's what we have to learn from as platform developers going forward. Like we've now seen that happen. And if we care about, you know, the good of people and we care about doing the right thing, we have to come up with other ways, because it's just proven to not work. Yeah, so this is where the kind of rubber meets the road in a way, because we can kind of see, well, what this necessitates in some ways is smaller growth, like less growing platforms, less reach for these platforms. And potentially, you know, how do you, how do you ensure that?
Well, because if the door is open for somebody to come along and say, well, I'm going to make a thing that other people are making you pay for, and I'm going to make it free again. Well, if they do, it's just going to grow again. Right. And I think your point about having some kind of oversight, like the medical community has ethics boards and they have some level of regulation, I think it'd be interesting, because we have regulation of this type for, for example, monopolies, right? We need to break up companies because they control pricing. Well, what's potentially more important than pricing, I think, could be the control of our attention. If one company can monopolize attention that much, that's probably not good. But we don't have a mechanism right now, culturally, to even think about that. There's not even a discussion, a broad cultural discussion, about this topic. It seems to only be happening in subcultural ways. And specifically the way that we're talking about it now, the way that you and I, Jonas, create things that are not that way. That's kind of our contribution to the world. But because we're creating things that necessarily require smaller user bases, how can this, how can we change, I guess I'm asking, how do you change the world, right? How do we fix it? So it's hard. I think the thing that's the most difficult is that especially the tech industry, but really, you know, capitalism in general, is very focused on growth. Like growth is sort of the shining star that all companies and all governments and all economies aspire to. Like we have to be constantly doing more than we previously did in order to be seen as successful. And to me, that is fundamentally the root cause of why we have a lot of these problems. Because if you have a growth-at-all-costs mindset, you optimize to that. And you say, hey, we have a billion users. Well, we still better be showing stock growth.
So let's get a billion and a half. And how do we do that? Well, what else can we juice to get 500 million more people in here? So like when you're guided only by that, and that's like the main thing, it leads you down some bad roads. So what we've done at Basecamp to kind of counter this idea is to reframe what we think is successful for us. Like we entered into the email market, where you have three or four very major big tech players that own easily billions of users. And we have no interest in being anywhere near that. For us, it was like, could we get 100,000 users? Like if we could get that, that would be like an incredible business. It'd be the most successful thing we've ever done. But that's because we prize staying smaller and being profitable and not growing beyond something that we want to run. Like I don't personally even want to run a two billion user email platform. Like that sounds terrible. So I don't know how we get that out there. I mean, we've been sort of selling this idea for years and years, that like you don't have to do this endless growth idea. You don't have to dominate the world to be a success. It's okay to define what's good for you and say that's a success. But it doesn't seem to be the running expectation in the industry or in society at large. And I think the only thing we can do is just keep shouting from the rooftops about it. But it's hard to change people's perspectives because it's so ingrained in culture and in politics and just in our expectations. It's going to take a lot of people yelling to change it. Yeah, a lot of people yelling and potentially some level of collective cultural change to recognize these problems and to say, okay, as a society, we have to find ways. If the market is not going to regulate this, if we don't have a mechanism in the market to regulate these problems, then we have to do it outside of that.
Which is not very popular in a capitalist society, to regulate things, because, hey, we got here because things are unregulated. Why would you put restrictions on this? Because we're all living better lives and we're adding value to the economy. Well, there are some things that are not worth trading. And I think that takes time and a big cultural shift. It is definitely an interesting question. And I'm very appreciative that companies like Basecamp have decided to speak very vocally and find ways to spread that message, both at the actual product level by building products that do this, but also in more philosophical ways, like releasing books and talking about it on the company's podcasts. And I think hopefully more companies are going to pick up that responsibility and continue to bring this up as a point of discussion. So it's not just tinfoil hat kind of people, or at least that's not the perception from the public anymore. The hope is that I'm not crazy for begging my family to stop using this stuff, right? Saying, you're going to be changed by it. You don't know that you're going to be changed by it, but it can harm your life. That's not just conspiracy, it's not just fear talking, I've seen it. I'm sure that you have too. We'll get back to our interview with Jonas Downey shortly, but first I want to talk about today's sponsor, Red Hat. The Red Hat Developer program brings developers together to learn from each other and create more extraordinary things faster. You can get free access to countless benefits to help you drive your career with Red Hat Developer. You can access a full portfolio of app development products and tools for creating enterprise software built on microservices, containers, and the cloud. Everything from downloads to developer how-tos and getting started guides to videos, books, newsletters, etc. You can get access to the gated content of the award-winning Red Hat customer portal as well.
There are interactive tutorials on the latest technology and Red Hat products. With Red Hat, you build here, go anywhere. Head over to developers.redhat.com/about. That's developers.redhat.com/about. Thanks again to Red Hat for sponsoring today's episode of Developer Tea. It's hard to get the message out when you're competing against these big platforms and the way things are. And cultural norms, for that matter. Like all of our friends are on it, and we have that one friend who's saying that it's bad. Right, exactly. Well, sometimes that one friend is right. Right. I've kind of been thinking of it from two angles. One is that we do have to constantly be speaking out and being a counterpoint and being counterculture in some ways. Eventually, the culture comes around, like we were spouting about remote work 15 years ago and now remote work has actually happened. Obviously not necessarily by people's choice, but it came around. And so at the time, if you'd asked us, like, is this remote work thing taking off? We'd be like, nah, not really. So it changes slow. It's not like it's going to happen tomorrow. So one thing is speaking out, but another thing is continually making stuff that demonstrates and shows the value of this thinking. We want to make products that are respectful, that take care of people, that are successful businesses, that hopefully meaningfully improve people's experience with email. Even in a small way, like if you just saved 15 minutes a week that you're not looking at garbage in your email, like, cool. Like I'm glad we did that. So there's a whole gamut. I think another part of it is making sure that if you have some kind of authority or power at a company that you work at, that you bring in more voices that maybe aren't at the table that can have an influence over the decisions your company is making.
That's also partly why we get things like Twitter being abusive to women and people of color because those people didn't have a seat at the table when the decisions were made about how, you know, Twitter's features work. And they may have been able to help guide the product leaders to figure out how to avoid those problems, but they didn't. And so now you have this thing. So we, you know, all this stuff, you can look at it and say, well, it should be regulated or there should be rules around it or whatever. But we can definitely still do grassroots improvements in a million ways by caring about these issues and talking about them. Yeah. You know, I like to imagine that there are some people in very high positions at some of these companies that actually do care that actually do want to change the tide and they do have the opportunity to bring those people in to help Twitter be less abusive, right? And then, you know, there is some small inflection point that's, you know, along the way where it's kind of growing linearly, this, this, you know, inside of a big company like this, the change grows linearly, then they do something that turns that change up, right? It suddenly catches on. It's like, ah, I see why having, you know, multiple voices in the room is better. Right. And how it's going to become policy or it's going to be, you know, it's going to be encoded into our culture in a more meaningful and consistent way. And that to me seems like the most likely route of change is kind of this, I don't know, turning the ship around because you're at the helm somehow. Yeah, I think somehow I think that is actually the harder road. Like, my guess is that the more likely thing is that someone will come up with, you know, better products and better ideas and those will eventually take over, you know, attention in the way that Facebook took over attention. 
I think it's harder to say given Facebook's history and its current leadership and the decisions they've made that they're likely to graft on, you know, a new group of people who's going to like, meaningfully affect what they're doing. It just doesn't seem like that's in their DNA at this point. It'd be great, like I hope so, but I don't think so. So, you know, more than likely we're going to just have to kind of wait them out and then, you know, whatever next year's TikTok is or whatever other things will come along and, you know, Facebook may well get broken up eventually by the government who knows. And it seems unlikely they're going to be dominant forever, you know, it's all these things eventually lose their favor. But yeah, I mean, the answer is all of it. Basically, that's like, speak out, you know, if you work at a company, try to make changes. If you don't work at a company, you know, do the best you can to make the decisions that are good. Yeah, so this, we've turned this conversation into kind of a long discussion about how we need to change things. I'm interested in some really practical advice for engineers from your perception or perspective as a designer. You know, I want to care more, right? But I'm going to play the engineering role. Let's say I've just joined a company. I want to care more about this stuff. And I'm not really sure how to think that way yet. Right? I know how to do my job. I know how to pick up a ticket and work on it and then ship it. And I'm good at that. I can write code, I understand, you know, all of the requirements. But I'm not sure how to think more like a designer. What are some cues or questions or just, you know, what can I be thinking about more often that will help me think more down this road, this, this way of thinking as a designer? Right. I would say the main thing is to think about the experience that someone is going to have when they interact with what you're building. 
First of all, which you may not be doing if you're focused purely on the implementation of the thing. So if your traditional process is that a designer and some product owners have figured out what they want and then they sort of hand it to you on a platter and then you make it, there's no point in that process at which you're like thinking about how it works and whether you should do that or not. So the first thing is just asking, like, is this something that is good? Like, does this do what I think it should do? You know, when we get to the end result of this, is this going to be something I can stand behind and say that, like, I like this. I worked on this and I'm proud of it. That's part of it. Another part of it is really thinking about which aspects of your product are using tactics that you could think of as being manipulative tactics. And those manifest in a bunch of different ways. Some of them are things like a sticky interface, like a thing that nags someone to come back and look at the software again. So like, if you're designing a push notification, let's say, you're like, oh, we're going to send out a push notification every time someone does this in our app. Then you think about, okay, well, if we do that, let's say we just add this one push notification and we have 30,000 users and we send that out. We're going to send out 30,000 push notifications every day, which means we're interrupting people 30,000 times every single day. Like, is that what we want? Like, have we considered, you know, the side effects of that? Is it worth the interruption? Like, is what we're getting out of this going to be valuable? And you know, you could say, like, well, hopefully the product owner and designer and CEO, whoever signed off on this earlier in the chain, would have thought that through. Hopefully they would have, but if they haven't, or if you're building something that you think is not what you personally would want in this product, push back on that.
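That back-of-the-envelope interruption math can be made concrete. This is just the hypothetical scenario from the conversation (30,000 users, one extra notification each per day); the time cost per interruption is an assumed figure for illustration, not something from the episode:

```python
# Rough cost of adding one "small" push notification, using the
# hypothetical numbers from the conversation: 30,000 users, each
# triggering one extra notification per day. The context-switch
# cost per interruption is an assumption for illustration only.

users = 30_000
notifications_per_user_per_day = 1
seconds_lost_per_interruption = 30  # assumed recovery/context-switch cost

daily_interruptions = users * notifications_per_user_per_day
daily_person_hours = daily_interruptions * seconds_lost_per_interruption / 3600

print(daily_interruptions)  # 30000 interruptions per day
print(daily_person_hours)   # 250.0 person-hours per day, under these assumptions
```

Even with a modest assumed cost per interruption, one "harmless" notification scales into hundreds of person-hours of interruption every day, which is the side effect the conversation is asking you to weigh.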
You know, go back to the designer and say, hey, can you explain this to me? Like, tell me why we're doing this. You can get into it a little bit. At some companies that sort of feedback may not be welcome, but if, you know, if you think about it more and you start to feel like, hmm, we're doing stuff that I don't agree with, then you kind of have to reckon with, you know, what you want to do next. So the main thing is just asking questions and being curious. And pushing back is hard, and it also requires some degree of relationship between the programmer and designer that you're working with. Hopefully there is like a back and forth thing. But yeah, I would say those are the two main things. Like look for manipulative tactics and think about, like, the experience that you're building. Yeah, that's great advice. I think looking for manipulative tactics is definitely something that takes time, to build that skill to notice, ah, this is, this is manipulative. And there are, you know, there are things that I've seen built into products that are subtly that way, right? They're not overtly manipulative. It's, ah, they're pushing this person this direction. You know, there's the overt stuff, like when you see a pop-up that says, no, I don't like money. You know, the passive-aggressive pop-up. Exactly, it's obvious when that stuff happens. But there are less obvious things, like having default options, right? Default options that, you know, choose maybe the highest-cost plan for something. Well, that may not necessarily be good for that person. And by choosing that as the default option, you're kind of saying, well, this is what the kind of person who chooses to work with us should pick, right? And maybe that's not true, and maybe the best plan for them is actually the cheapest plan. Right. Or maybe you don't even need to use your product at all. By having defaults, you're biasing people in one direction.
And there are certainly times where this is important; it's part of the toolkit, right? Like, I can understand having useful and positive defaults. But yeah, it's very difficult. There is a line, and knowing where that line is is very context dependent. You have to be able to think really thoroughly about who is actually looking at this thing, right? And are we actually pricing things fairly? That's another very simple one. You know, as a brand new software engineer you may not be able to start that conversation, but perhaps you should be able to, right? This seems like it's overpriced for these people. Um, and one other thing I wanted to mention, or maybe ask you about: it seems like you do a lot of thinking that is beyond the immediate experience. So in other words, okay, well, I want to check my email. I don't want to be stressed when I check it. Cool. All right. We're going to take away numbers. We're going to, you know, design the experience in a way that's more mindful. Great. But it seems that you, and Basecamp more generally, and good designers also think, okay, what about in three hours from now, or what about when there's a very important email that comes through 10 minutes after they check it? What happens? And what about when they're on their phones in transit? There are second order effects and third order effects. What about when somebody else is using a different email client? And now you have these two people that have totally different perceptions of what email is or how email fits into their lives. How do we reconcile with those? So I'm interested in this idea of second order effects. How do we think more thoroughly about the user's experience, rather than just when they're using our product for those three or four minutes, beyond into their lives? Yeah. So I think there are kind of two parts to this.
One part of it is when you're thinking about things like defaults, or things like, you know, notifications or interruptions or whatever, you can almost always think of those things from two angles. One angle is the person-first angle, the person who's using this thing. What would be appropriate for them, and what would best fit into their model of how this product works and fit into their life at large? Like, what would you want them to experience? And then the other side of it is the business-first view, that like we want to get more people using this app. We want to get more people paying for this app. We want to get people to automatically upload their contacts when they download it so we have more information about their contacts. Those are business problems more than customer problems. And so you have to balance those two things, because a lot of times we do a lot of business-first stuff without really questioning what we're doing. Like we'll just send out an email campaign, or we'll just add notifications, and then every time we get an email, we'll just get a notification and it's fine. You know, those are sort of lazy decision making things. Like, sure, it might improve your business performance a little bit. But the experience that you then gave to people is not great. They're not going to like it, and it doesn't push the needle forward. So for something like notifying about email, what we did in Hey was that we designed several different workflows for different types of email. So before any person can email you, you have to say if they're allowed to. When they first email you, Hey asks you, do you want to hear from this person or not? And if you say no, you just don't ever hear from them again. They're out. They're blocked. If you say yes, you can pick where that person's email should go into a number of different spots. And then within those spots, you can decide if you want to be notified about certain things or not.
So you can be notified about emails from a certain person, or from a certain thread that you're on, or from anything that you've sort of designated as important. And that's it. So there's no, like, blanket notifications-on switch in the whole app. You can't just, like, get notified about everything. You have to be a little bit more thoughtful about, like, what's worth interrupting you. And it's your decision. It's not our decision. We didn't say, like, oh, we have a very smart artificial intelligence and we've decided what you'll get notified about. Well, it probably can't. Like, no matter how good the artificial intelligence is, it's not going to exactly match your mental model of what's important to you. So our solution was to give you the power to make these decisions, rather than setting the default to be that you just get blasted, because, like, that's the default that everybody just accepts. Like, well, I just have to get blasted. We don't think that's true. Like, you don't have to accept that. So I think those are kind of the different parts of it. It's like thinking through what experience you want people to have when they use the product, and then thinking through, like, how can we design this in a way that drops these assumptions about how these things should work and actually does what we want it to do. And sometimes it's harder. It's not the easiest road, because, like, turning on push notifications across the board is easy. But doing the harder thing is usually the right thing. Right. And so one thing that stuck out to me in what you said there is, well, no matter how good our AI gets, it will never be able to totally understand this very, very important thing in your daily life, email. It's important and it's also dangerous, right? And so it would be dangerous for our AI to pick up the ball and say, well, we're going to decide what you're going to see. That seems like a bad idea.
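The screening workflow Jonas describes (first-time senders get screened, approved senders get routed to a chosen spot, and notifications are opt-in per sender) can be sketched as a small state model. To be clear, this is not Hey's actual implementation; every class, method, and spot name here is a hypothetical illustration of the idea:

```python
# A minimal sketch of a Hey-style sender-screening workflow, as described
# in the conversation. This is NOT Hey's real code; all names here
# (Screener, SenderRule, the spot names, etc.) are hypothetical.

from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    BLOCKED = "blocked"   # never hear from this sender again
    ALLOWED = "allowed"   # delivered to a spot the user chose

@dataclass
class SenderRule:
    decision: Decision
    spot: str = "inbox"   # e.g. "inbox", "feed", "paper_trail" (illustrative)
    notify: bool = False  # notifications are opt-in, per sender

@dataclass
class Screener:
    rules: dict = field(default_factory=dict)

    def receive(self, sender: str) -> str:
        """Decide what happens to an incoming email from `sender`."""
        rule = self.rules.get(sender)
        if rule is None:
            # First contact: the user decides, not the app.
            return "ask_user"
        if rule.decision is Decision.BLOCKED:
            return "dropped"
        # Delivered silently unless the user explicitly opted in to notify.
        return f"deliver:{rule.spot}" + (":notify" if rule.notify else "")

    def screen(self, sender: str, allow: bool,
               spot: str = "feed", notify: bool = False) -> None:
        """Record the user's yes/no answer for a first-time sender."""
        decision = Decision.ALLOWED if allow else Decision.BLOCKED
        self.rules[sender] = SenderRule(decision, spot, notify)

# Usage: a newsletter gets screened into a low-priority spot, silently.
s = Screener()
print(s.receive("news@example.com"))  # ask_user
s.screen("news@example.com", allow=True, spot="feed")
print(s.receive("news@example.com"))  # deliver:feed
s.screen("spam@example.com", allow=False)
print(s.receive("spam@example.com"))  # dropped
```

The key design choice mirrored here is the default: with no rule, nothing is delivered or notified until the user decides, and even allowed senders deliver silently unless `notify` was explicitly turned on.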
And so in some ways, the hard solution, which is trying to build an AI that can understand what's important to you, is also a bad solution. So it's like, right, turns out that the good solution here is somewhere between the easiest thing, which is turn all notifications on, and the hardest thing, which is to try to build a robot that replaces your judgment, your human judgment. And it actually is a mechanism to allow that person's judgment to shine, right? Right. Give the keys back to the user, in a way. Yeah, part of this thing too is that as technologists, I think we are always eager to throw more technology at problems and say, well, the reason that email is bad is because we haven't been smart enough about training computers to, like, filter through email for us. And it's like, no, actually, the reason email is bad is because people spam the hell out of each other with really useless email all the time, especially us in marketing and in product companies that blast out campaigns and do all these things. So it's not so much that we need to train computers to be even better at email as that we need to empower people to fight back against the junk that they have basically been having to settle with for the better part of a decade. And, like, everyone's gotten used to, like, well, I just get all this trash and I hate it, but it is what it is. Like, it doesn't have to be what it is. We can give you tools to make it not be gross. You know, that's our job as designers. It's to think about those circumstances and say, like, why is it like this? Like, why have we just, like, accepted that it's gross? And I think to me, the answer is rarely that, like, I want machines to do more thinking for me. Like, in some cases, perhaps, if I'm doing, like, sophisticated mathematics or something, fine. But in terms of, like, my day to day experience using a product like an email app, I want to have control over it. I want to know where that email went. Like, we use Gmail.
It decides what categories things are in, and it's hard to get it to change its mind, because, like, its mind is made up. You don't really actually have autonomy. And it's a weird feeling to be a user of a thing where, like, the thing is smarter than you. And so we try to place all of that back on the people and say, no, you can make decisions. Like, we trust you to do the right thing, and we're going to make it easier to make those decisions, so you don't have to, like, configure 40 options and set up complex rules about where emails go and all these things. We just give you the workflow, and then you can just do it. Yeah. There's some level of design thinking that begins with understanding what you assume about the problem, right? Okay. Well, we assume that you need to see all of your email. Well, that's probably a bad assumption. The vast majority of my email, I don't need to read. And probably I should never even have let it into my inbox to begin with, but I made a bad decision three years ago and haven't unsubscribed from that particular list yet, right? It would take me a long time to go through and unsubscribe from everything, but realistically there's a very small percentage of my email that I really need to see. So you will design a product with a lot of assumptions in mind. For example, the average person who's listening to this podcast is probably going to design a product with the English language as the primary language in the product, right? And most of these assumptions tend to pan out; okay, English tends to be a good language to start with for a new product, especially if you're targeting people in the United States, right? These are pretty obvious, but as it turns out, you know, these are assumptions that we're making. And there is probably, you know, for most products, there is probably a line that gets crossed when we make those assumptions, that if we were to roll it back a little bit, right?
Wait a second, are we sure that they need to read all these emails? Are we sure? Yeah. If you kind of roll that back and say, no, maybe not, well, we could take a different path at that fork in the road, right? Right. That assumption leads to a bunch of other design decisions that have to make up for that assumption. And we have to create these folders inside of folders to deal with all of these droves of email that they're never going to read, but we need to give them a way to organize it, because, oh no, what happens if they lose that coupon code from three years ago? Right. What are we going to do? Yeah. They'll leave us, you know, attrition rates are going to go through the roof. No, that's not going to happen. They're just going to be a healthier person, probably, because they don't have this. Yeah. And part of this is also the power balance that exists in these things. So traditionally in email, the power balance is very one sided. The sender has all the power. If you are, like, an email marketing company, you have a mailing list, you can send out, you know, hundreds of thousands of emails and pretty much be sure that they'll all get delivered if it's a decent mailing list. And the people who receive those are kind of stuck. Like, they can maybe unsubscribe from it, but that doesn't even always work. It's not very obvious. It tends to be downplayed in most emails, because the people who send the email don't want you to unsubscribe, so they don't make it real easy. And it's lopsided. So the effect that you get is that when the sender controls everything, the receiver just has to take it, and then they're inundated with stuff. So we try to tilt those power dynamics in Hey and say, okay, as the receiver, you can say no, you can block those things. You can say, sure, I still want this, but I don't want it to bother me most of the time, and put it in a different place.
So if it comes in, I can still get to it maybe if I need to, but I'm not going to care. There's just those basic, simple human things. It's the same thing that happens if you go to your mailbox and you have two pieces of mail that are important and five that aren't. You just throw the five in the trash, probably. Right. It's the same deal. The sender sent you stuff, and it became your problem to have to deal with it. And we just try to think of that from a different perspective and say maybe it doesn't have to be like that.

Yeah. And, you know, it's asymmetric consent, right? Yes, yeah, it's consent. I consented once, right? I gave you my email four years ago and all of a sudden you're going to pop up in my consciousness, you know, right? And that seems like asymmetric consent to me. I feel like you should have to ask for my email again. Yeah. Well, even the work you have to do to go and get off a list is unknown. Like, you'll click through, and then a lot of times those unsubscribe pages tend to be designed with dark patterns to try to keep you on. Yeah. Right. So it's work to even stop it. In order to get it to stop, you have to do even more. And that's just another place where it's asymmetric, like you're saying. Yeah.

Yeah. This has been an excellent conversation. I want to wrap up with you here. I have one final question for you. I wish we could talk about this stuff for a long time. Oh, and I also should say that Basecamp certainly has not sponsored Developer Tea in any way. This is all voluntary discussion on both parts. But I do have one final question for you. If you had 30 seconds to give software engineers, developers, whatever their background or level of experience, if you just had 30 seconds to give them all one piece of advice, what would you tell them? My main advice is to chart your own path.
I think that it's very tempting when you're young and you're coming into this industry to think that the only route is the common route. Like, go work for a big tech company, go get a job that pays a lot of money. You know, go do the things that you hear are the things that you do as a tech worker in this industry. And I think there's another way to do it. And the other way to do it is harder, but it's much more fulfilling, which is: look for companies that speak to you, that stand for things that you stand for. If they don't exist, make a company that does. And these are going to be harder paths, but we need people who believe in things, and we need people to take risks and step outside what's expected. I think that's my advice. Just go figure out what feels right. And if something doesn't feel right, don't feel like you're obligated to go do it just because that's what you thought you had to do.

Yeah, yeah, that's great advice. Jonas, thank you so much for joining me on Developer Tea. Where can people find you online? I am @jonasdowney on Twitter and you can email me at jonas@hey.com. Awesome. Thanks so much, Jonas. Sure thing. Thanks for having me on.

Thanks again for listening to today's episode of Developer Tea, part two of my interview with Jonas Downey. A reminder that we are kind of in the middle of our COVID series on what life is going to be like as an engineer after COVID is over, as the pandemic begins to come to a close. So if you don't want to miss out on those episodes, go ahead and subscribe in whatever podcasting app you currently use. Thank you again to today's sponsor, Red Hat. Head over to developers.redhat.com/about to get started today. Thanks so much for listening to this episode. If you would like to join the Discord community, send me an email at developertea@gmail.com, or you can reach out to me on Twitter at @developertea. Also, we'll be back on Friday with another Friday refill.
So look forward to that. Thanks so much for listening. And until next time, enjoy your tea.