In today's episode, I talk with one of the most influential voices in technology in the last 20 years - Kevin Kelly. Kevin is the author of "What Technology Wants" and "The Inevitable", co-founded Wired magazine, and is now leading the charge of optimism as it relates to the future.
Today's episode is sponsored by Rollbar. With Rollbar, you get the context, insights and control you need to find and fix bugs faster. Rollbar is offering Developer Tea listeners the Bootstrap Plan, free for 90 days (300,000 errors tracked for free)! Head over to rollbar.com/developertea now for the free 90 day offer!
Transcript (Generated by OpenAI Whisper)
There's a stoic exercise that you can do to help you gain a little bit of perspective. Basically, what you do is you zoom out in your mind, out above your head, and then out above whatever building you're in, to the neighborhood, maybe to the city, then to the state, and then zoom all the way out to the whole world, and then keep on going. The whole point of this is to provide you with some perspective on both your own place in the world and the sheer magnitude of what's around you. In today's episode, I want you to allow yourself to zoom out to that perspective. I want you to think much larger than your own career today, just as an exercise to gain a little bit of perspective. And I want you to ask big questions, because today's episode is an interview with someone who does exactly that. His name is Kevin Kelly. Kevin is always thinking about the future. He's always thinking about how technology is going to shape our future and various parts of our culture. Kevin is probably most well known as an author. Most recently, he wrote the New York Times bestseller The Inevitable. Before that, he wrote What Technology Wants. And before that, in 1993, Kevin co-founded Wired and then served as its executive editor for its first seven years. But the thing I find most intriguing about Kevin is that in all of his thinking about the future, he remains eternally optimistic. And there's a reason for that, not just because he's a positive person; we're going to get into that in the interview. I don't want to give too much away. I'm so humbled and excited to have Kevin on the show. He's been very much a guiding light for the way that I think and believe about the world, about technology, and our place between those two things. So thank you so much again to Kevin for joining me on Developer Tea. I encourage you to pick up pretty much anything he's written, but especially What Technology Wants.
And of course, his most recent book, The Inevitable. Now let's jump straight into this interview with Kevin Kelly. Welcome to the show, Kevin. It's my pleasure to be here. Thanks for inviting me. I'm really excited to have you on. My colleagues at Whiteboard and I have been very big fans of the work that you've done, the books that you've written, your perspective, and generally the idea that optimists really hold the keys to creating the future of technological advancement. And you quite literally wrote the book on these topics. So it's really exciting to get a chance to talk to you here. I'd love to start out with a question. It's kind of an open-ended question, really, but it's very simple. What is one thing you wish more people would ask you about? Sometimes the simple questions are the hardest to answer. Yeah, because in fact, pretty much whenever I'm in the mode of being interviewed, I'm being asked about everything. It's really hard to think what I wish they would ask me more about. You know, I'm going to answer this question in a different way. I'm going to tell you what I wish people would not ask me about. Okay. Sounds great. And that is, there's a kind of common fear, or scenario right now, of the robots taking over and taking all our jobs. And I get that question every single time. And the reason why I am a little tired of the question is because I think the question's coming from Hollywood movies. I don't know if people are really afraid that the robots are going to kill us all and take over. But they ask because they saw Terminator or other science fiction movies, or because other people are asking, or something. I find it hard to believe that people really have that fear. Right. Yeah. But they feel as if they need to ask that question. So, you know, maybe I'm wrong. Maybe people really are afraid that the robots are going to kill them all and take over.
But it doesn't feel like that. I mean, I don't know. I feel as if this is sort of like a required question that they should be asking. So they ask, kind of like they're playing their part in that movie. Yeah. Exactly. Yeah. That's a very interesting thing, because you talk about this in larger forums and so many people are asking that question. We certainly are not going to focus on that question. I think a really interesting part of that is also, again, going back to the fact that you believe in optimism as a driving force for the creation of a positive future. Right. And this fear-driven question of whether the robots are going to take over kind of gives up the power of creation. It gives up the ability to drive that, to make that decision as humanity progresses. Are the robots going to take over and steal our jobs and, you know, come and kill all our crops and drive us out of our homes? That really presupposes that you are not participating in the creation of those robots to begin with. Yeah, and that may be part of it. But I think actually, to be honest, people like myself and science fiction writers have failed to provide an alternative optimistic view of the future that somebody could believe in and would also want to live in. So there really aren't any detailed, fleshed-out, visible scenarios or movies about a future on this planet that people want to live in. And so I think people kind of retreat to this other one because they don't have much of a choice. People like myself and science fiction authors have failed to produce a viable vision of the future that is desirable. And so part of what I'm trying to do in this book is to, you know, begin to outline what I would think is a version of the future that is optimistic, that you'd want to live in, and that's plausible, in a sense, given what we know of where things are going and where we're headed.
So it's not like it's an alternative future where we don't have technology. You know, I'm saying we have more tracking and more AI and more virtual reality and, you pick, all of this and that. I'm trying to make a version where I can say, you know, I think we can make this work. I think this would be a good place I'd want to live. But I haven't been able to flesh it out the way a movie would. And that is, in some senses, a failure of people like me and other science fiction authors. Well, perhaps The Inevitable will be good material for those science fiction authors to draw from, kind of like raw material. It seems like it could be that, right? Yeah. In fact, I joke with my science fiction author friends: yeah, here, steal as much of this as you possibly can. And so it's kind of like the groundwork of the worlds that they can write on top of. Yeah. And that's sort of my next project, to do a bit more world building, to try and make something that is more coherent. And it has a history, too. That's the other missing element for most science fiction worlds: they don't really have a history. History, meaning that they're only kind of one era. To really make a world deep enough, you kind of have to have, you know, what was it like 10 years before that, and 10 years before that, because things don't arrive all at once. What's the genealogy? Yeah. Well, I mean, it's like if we look around, not everybody's flying, you know, not everybody's driving a car. People are still riding bicycles, and there are still old cars, and still people walking, and there's occasionally a Segway and a skateboard. So all the precursors to that technology kind of exist alongside the new stuff as well. But the important thing is that I'm trying to urge more people to be involved in trying to develop visions of the future that we want to live in.
It's not difficult to imagine a world that we don't want. Dystopian worlds: it's actually not that difficult to imagine lots of them, and they make great stories. Sure. Masterful storytellers will always kind of gravitate to dystopia because there's much more built-in conflict and drama. Yeah, there's opportunity for heroes. Exactly. But I think we need more visions of a future that we would want to live in. And unless we give them, people are going to retreat and just naturally assume the worst. Yeah, this reminds me, and all of your work really reminds me, of one of the great computer science pioneers, Alan Kay. He has this quote: the best way to predict the future is to invent it. And this is so compelling for me as a software developer, and for the people who are listening to this show, who largely are software developers as well. I'd love to ask you kind of a deeper, more driving question. I want you to talk about what drives you to care about being a futurist. Reading through your biography, you've really lived 100 lives, it seems. There are so many different things you've done, so many experiences. Of course, acting as executive editor of Wired, but also doing things like starting an index of every living species, right? Extensive research on things like digestion at the microscopic level, and also writing multiple books, including your graphic novel, The Silver Cord. I'd love to know: all of this kind of has one thing in common, and that is looking into the future and caring about the future. Can you tell me about the moment that you realized that you cared so much about the future? Well, that's easy to answer. There was a moment when I had a self-assigned experiment to live as if I was going to die in six months. And as part of that process, I had to surrender the future each day.
And by the end, I basically had eradicated my future, because I was taking it seriously and preparing not to have any days beyond the last day. And what that made me realize was that having a future is an essential part of being human. If you cut away any hope of the future, you're just less than what we are meant to be; being able to look forward to something is an essential part of being human and being alive. And so I began to think that it was less kind of like an indulgence or a pastime, and more essential to our being. And then as I got older, technology seemed to be accelerating. So there was this new sensation that, oh my gosh, my life is going to change. The world is going to change within my lifetime. I mean, the world I'm going to die in won't be anywhere like the same world I was born into. And so therefore, in order to be of service to the world, which is changing very fast, I have to kind of think, well, where's it going to be? Because things take time to do, so the world changes before you're finished with them, and you are obligated in some sense to start to think about the changes in the future. And it seems that parts of the future are changing even faster than before. And so that responsibility, I guess I would call it, of looking into the future in order to do good work seems to me to now be a general assignment that everybody needs to follow. So to be a good citizen, you essentially have to look into the future a little bit. And taking the long view, which I think we should do more often, means being involved in things that may not be done within your lifetime. I mean, we are the beneficiaries of people who took the long view, who built roads or bridges or cities that took a long time to build, and were built with the idea that they'd be around a long time, or who worked on something that was not even finished in their lifetime.
And if you take that view, which I think we should more often, then you definitely have to think about the future. And so I think to be a responsible human in 2017, you need to think about the future, to some extent. We're going to take a quick sponsor break and then we'll be right back to the interview with Kevin Kelly. Today's episode is sponsored by Rollbar. With Rollbar, you can see what errors are lurking in your code without you realizing it. You know, dealing with errors is really difficult sometimes, right? You have to rely on your users to actually report errors to you. And of course, at that point, you've possibly lost business, or you've lost a client if you're in an agency position. You have to dig through logs trying to debug these issues. You start adding new code to your projects just to figure out what's going wrong with the code that's already there. With Rollbar, you can integrate with all major languages and frameworks and start tracking production errors in just a few minutes. You can integrate Rollbar into your existing workflow. You can send errors and alerts to your favorite services like Slack or HipChat, and you can link to the source code in GitHub, Bitbucket, or GitLab. You can even turn errors into issues in Jira, Pivotal Tracker, and Trello, all of the tools that you're probably used to using. And of course, there are so many integrations that we can't list them all in this ad read. Go and check out what they have to offer. Some of their customers, by the way, include Heroku, Twilio, Kayak, InstaCart, Zendesk, and Twitch. I've used basically all of those in the last month or so. Go and check it out: rollbar.com slash Developer Tea. That's rollbar.com slash Developer Tea, and you can get the Bootstrap Plan for free for 90 days. Thanks again to Rollbar for sponsoring today's episode of Developer Tea. Yeah, that's an excellent perspective, because there are so many things that we benefit from today.
And a lot of the people who created the things that we use today are still alive. But so many things in this world are the results of progress over hundreds and sometimes even thousands of years, depending on what you're talking about. Yeah, absolutely. I was just in Paris, which is this magnificent city, built on a grand scale by people, obviously, hundreds of years ago, with the intention of making something that would last a long time. And I think in the field you're in, software development, it often seems very ephemeral, but I'm willing to bet that there'll still be code, TCP/IP or, you know, Perl scripts, that will still be being used in 100 years. And so it's not quite fair to assume that what you're doing won't last. There should be some acknowledgement, some recognition, some incorporation as you're working that, you know, it's possible it's going to be around for a long time, and so therefore, you know, I should treat it in that responsible way. Yeah, I think some of that comes from the reality that this stuff just hasn't been around for very long. It's a new building material. And so it seems as though, since it hasn't been around long, it can't stay around long. Right. But that's actually not entirely true. Sure. Yeah. It hasn't been around long, but parts of it may be around a lot longer than anybody ever expected. Sure. And also because, well, I've talked about this on the show before, but the idea that code doesn't really rust like a car would, right? Assuming that the logic is still intact and nobody's coming in and changing that code, next year that code is the same as it was last year. It's a representation of an idea more than it is anything else. And so that code staying around is certainly not only possible, but I would say probably likely, right? Because it doesn't really age. It doesn't have the same breakdown characteristics as something physical would. Yeah.
And so, you know, I said 100 years, but wouldn't it be amazing if in a thousand years there were some parts still being used somewhere? The author would be long gone, no one remembers him, but, you know, that little subroutine is still going. Just as there are probably bricks and stones in the walls of buildings that are from Roman times, and there is some anonymous builder who laid them. So I think the whole point here of thinking about the future is, well, I don't know how much difference it would make in terms of how you craft the code, but certainly in deciding what kinds of things you want to work on. And your assumptions about people's behavior, all these things are things that can be impacted by your view of the future. And one of the things that sticks out to me is, you know, as a millennial, I know people like to talk about our generation in a variety of ways, and we won't go deep into any one of those subjects, but one of the things that I've experienced with myself and people around me who are in my generation is this kind of nonchalance about what we're doing and the things that we're building. And you mentioned something there that kind of alluded to this, and that is, you know, it's not really about how you're writing the code, but how much you care about it, and what you're actually investing in what you're doing on a daily basis. Unfortunately, I've personally witnessed so many developers end up not caring; they disconnect from this work that they're doing, whatever that work may be.
And they almost live in this kind of, not subconscious, but disconnected state, where they don't really have a profession, or they don't really seem to be engaging with the world in any meaningful way; they're just kind of working through their days. And it's a very strange disconnection, kind of a tragedy of the digital state. Yeah. And they're making decisions and choices even when people think that they aren't; they still are. And, you know, all code has certain worldviews and assumptions built into it, even when you don't think it does. I mean, there are biases and points of view in everything that we do. We may not be aware of them, and they may seem invisible to us, but they'll be apparent to someone 100 years from now. And so I think part of thinking about the future is a chance to reflect and examine and think about these invisible assumptions and points of view, because you can kind of look at them from afar. The point about thinking about the future is we really can't predict it. We're not really capable of predicting the future. It's an exercise in trying to predict the present; it's a way to help us be present today. And so, you know, this little thought experiment of trying to predict the future is usually going to be wrong, but it's going to illuminate what it is that we're doing today, and what assumptions we have, and the biases and the perspectives. And so it gives us a chance to kind of tune and improve those. Yeah, that's excellent. The book that you wrote, The Inevitable, talks about 12 different forces that are going to shape the next 30 years, I would say, of culture at large, not just digital culture, because this idea that digital is somehow running in parallel to us is, I think, totally inaccurate. I think this is really ingrained in culture, not just a separate thing. It's not like a media channel. It's very much integrated into our lives.
One of the things you discuss is this idea of efficiency versus inefficiency: the things that we should be efficient at as humans, or rather the things that we should offload to machines to allow them to be efficient at. And I think you mentioned at one point being embarrassed, looking back, at some of the jobs that we once held, and this kind of cycle of handing off those jobs to something that is better suited to do them. Can you discuss with me the things that we should allow to be inefficient? Yeah. So, just to give a little example of the point you're making about being embarrassed by some of the jobs that, you know, politicians and other people are fighting over: I think we will, 100 years from now, be embarrassed by the fact that we had any humans doing these jobs. Just today in the New York Times, there was an editorial by a guy who was saying we need to save the jobs of cashiers, and it was like, no, that's counting money. It's just a terrible waste of a human being. You don't want humans doing that, because that is a job where efficiency and productivity matter, and those are the kinds of jobs that bots are going to be better at. The kinds of things that humans are good at are things where efficiency is not important. Things like science, say, are inherently inefficient, because you have to try things that don't work. You're going to have failed experiments. If there was a scientist who claimed to be 100 percent efficient, that scientist is not learning anything; they're just repeating the same experiment. If every experiment they do succeeds, it's basically impossible. And the same thing with innovation. We all recognize innovation is the foundation, the engine driving the new economy, generating wealth. And that process of innovation is inherently inefficient. You're trying things that don't work. You're going down dead ends, you're exploring.
You are making mistakes, you're doing prototypes. All of them by definition are not 100 percent efficient. It's inherently an inefficient process. And we understand that learning is not necessarily efficient; it's inherently inefficient. And creativity and art are inherently inefficient. Nobody is ranking painters by how many paintings per day they were making. That would be efficiency. And so it turns out that the things that we're wasting time on, so to speak, are the things that we love to do most. Human relationships are inherently inefficient. Small talk? That's not efficient. But it's necessary, and it's what people like. And so we will gravitate to those kinds of roles as we give the bots the tasks that need to be productive or efficient. And those also happen to be the things that we love to do the most. As I said, we don't like doing the rote things. We don't like being efficient, in a certain sense. And so our new jobs will be kind of discovering these new things. And as we discover some new task that is of value to other people, the more we do it, the more we kind of figure out what needs to be done, the more efficiency will become part of it. And then at that point, we give it over to the bots, to the AI, to the robots. And that's going to be a continual process. We move on to something else. We start to have questions. We do investigations and explorations. We discover some new desire we didn't know we had. And then we have a new task, a new job, which in the beginning is very vague and unclear, and then over time it becomes clear. And at that point, it goes to the bots. So in the long run, our job as humans is to keep inventing jobs to give to the robots. Thank you so much for listening to the first part of my interview with Kevin Kelly.
I hope this has challenged your thinking and excited you about the future, but also created a roadmap in front of you of a lot of exciting work that you and I get to engage in on a daily basis. Thank you again to Kevin for joining me on Developer Tea, and make sure you subscribe if you don't want to miss out on the second part of this interview. We're going to talk more about AI. We're going to talk about optimism, all of the things that Kevin is so well known for at this point, and more of the same types of topics we've been talking about in this episode. Thank you so much for listening. Thank you again to Rollbar for sponsoring today's episode of Developer Tea. Of course, you can get started with the Bootstrap Plan for free for 90 days by going to rollbar.com slash Developer Tea, and you can get started tracking your errors in any production environment. Go and check it out: rollbar.com slash Developer Tea. Thank you so much for listening, and until next time, enjoy your tea.