« All Episodes

Interview w/ Daniel Pink (Part 1)

Published 8/20/2018

Today we are thrilled to have Daniel Pink on the show. Daniel has four New York Times bestsellers and a wide variety of experience, including serving as chief speechwriter for Al Gore in the 90s, and he has one of the 10 most watched TED Talks.

In part 1 of today's interview, we're going to talk about Daniel's experiences and writings and dive right into what Daniel wishes more people would ask him about.

Get in touch

If you have questions about today's episode, want to start a conversation about today's topic, or just want to let us know you found this episode valuable, I encourage you to join the conversation or start your own on our community platform: Spectrum.chat/specfm/developer-tea

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.

Transcript (Generated by OpenAI Whisper)
Today, I'm thrilled to have on the show Daniel Pink. Daniel has four New York Times bestsellers. The most recent is called When: The Scientific Secrets of Perfect Timing. He also wrote To Sell Is Human: The Surprising Truth About Moving Others; Drive: The Surprising Truth About What Motivates Us, which we've talked about on the show before; and A Whole New Mind: Why Right-Brainers Will Rule the Future. In fact, two of those books were number one New York Times bestsellers. Daniel has a large variety of experiences. For example, he was the chief speechwriter for Al Gore in the 90s, and he's also been the co-host of a TV show that aired on the National Geographic Channel. He's been on pretty much every major news channel you can think of. And if that's not enough to lend him credibility, he has one of the 10 most watched TED Talks of all time, with more than 20 million views. Daniel has so much insight to offer. I encourage you to go follow him on Twitter; he tweets out some really excellent research, often in small excerpts or mini-summaries of that research. You can follow him at @DanielPink on Twitter. Let's go ahead and get into the interview with Daniel Pink.

Daniel, thank you so much for joining me on today's episode.

Jonathan, thanks for having me. It's good to be with you.

So I know that developers who are listening to this right now have almost certainly heard of your books. If not from the general media landscape, then they've definitely heard of them on this show; we've talked about plenty of the ideas that you've discussed and researched, or at least compiled research on, in your various books. I want to take a moment and take a step back, because I know you do a lot of interviews, and I'd love to ask you a question that hopefully you haven't been asked in a recent interview. And that is: what do you wish more people would ask you about?

Huh. What do I wish more people would ask me about?
I don't know, actually. It could be one of those situations where you don't know what you're looking for until someone offers it up. So there isn't something specific. It's not as if I sit here in this interview with you, Jonathan, saying, oh man, I hope he asks me for my financial advice, or I hope he asks me what I think is the best strategy for the Washington Nationals, or anything like that. I do find that a lot of these conversations ultimately come back to some pretty fundamental things, among them, essentially, one's views on human nature and one's views on values. And I think the values question, especially, is one that we don't talk about explicitly all that often, but it's a really rich topic and in many ways underpins all the other topics.

So what do you think we get wrong when we talk about values? Where do you feel like that conversation needs to go more often?

Well, I think we often don't surface the values enough. And I think that at some level there's a values conflict in organizations, institutions, and even in cultures. So for instance, if you're like me... where do you live, Jonathan? Where are you based, San Francisco?

I'm actually based in Chattanooga, Tennessee.

Okay, excellent example then. And I'm showing my bias right there, because I figured, oh, you're talking to developers, probably in Northern California. So if you look at something like authority, the value of authority: I live on the East Coast, in Washington, DC, a big urban area, and there are a lot of people in big urban areas who assume everybody shares their view of authority, saying, oh no, authority is something to be resisted, authority is a negative. And in fact, there's a competing moral value that says that authority is something to be respected.
On that view, I, me, Daniel, am doing something wrong when my 15-year-old son's friends come over and they call me Dan rather than Mr. Pink. There's a value conflict there, a conflict of values. And you see it in organizations too. My own position is that I don't hold authority that high as a moral value, but that doesn't mean that I'm right, or that there's a single notion of morality here. There are other people for whom authority is a much higher value, and it doesn't mean that they're right either. It means that there's a conflict among us, and we can't assume that all of us share the same underlying set of values.

The other thing, and I think it's adjacent to this, is: what's your view of human nature? That guides a lot of our decisions about what we do in organizations. If your view of human nature is that people are generally lazy, shiftless, and need to be controlled, that leads you down one path if you're a manager. If your view of human nature is that, hey, most people want to do good work, are willing to try, are conscientious, that leads you down another path. And so a lot of the things we talk about on the surface, and I don't mean superficial in the negative sense, when we talk about things like performance, like how organizations run, like work patterns and workflows and structures of organizations, deep down inside of them are presumptions about human nature and about values.
Yeah, and this is really interesting, because I think if you take that values discussion to the edges of the map, you find the places where we all generally agree. Maybe that's where human nature begins and where values end, where those lines are drawn a little bit harder. For example, we can all generally agree that killing each other is pretty bad, right? That's not really a discussion of values. So how can we find those lines? And is it perhaps that some people believe those lines are in different locations on the map?

I think that's part of it. This has become Philosophy Tea, not Developer Tea, here, but think about killing, for instance. In the Obama administration in particular, there were a lot of military actions that were carried out by drones, by unoccupied... I was trying to avoid the word unmanned... aircraft that would be deployed and would kill somebody. And there were some who would say, oh my God, that's out of bounds. There are some people who believe that any kind of killing, whether it's in a war or not, is unjustified. There are other people who would say, well, those are the bad guys, those people are trying to kill us, they've done incredible harm, we need to eradicate them. On something like the death penalty, there are some people who say, well, the death penalty is an appropriate punishment for the most grievous crimes, and there are other people saying no, under no circumstances should the state take the life of somebody. On things like abortion, you have the question: is an abortion the killing of a human being? There are people who say, well, yeah, it is, because life begins at conception. There are other people who say, well, of course not, because that embryo is not a human being.
And so even there, you see these divides at the moral level that we sometimes don't talk about. Now, things like killing and the death penalty and abortion are things we typically don't talk about because they aren't germane to a lot of organizations. But some of these other underlying moral issues are, especially, as I said, when it comes to things like tolerance of ambiguity, authority, even things like order and cleanliness. Those are big deals, and organizations could be talking about these things in more useful ways. Would you agree with that?

Totally. So how can we have better conversations around values? Maybe that's a good question that we don't necessarily have an answer to.

You know, I think that, in general, a lot of what happens in organizations, the interactions and certainly those conversations, in some sense are performed rather than lived. As sociologists tell us, we're performing some of these roles, and so these are sometimes not authentic conversations. I think authentic conversations inevitably surface some of these deeper issues, while inauthentic conversations keep those things behind a locked door, which diminishes their usefulness. On a mundane example, you see it in some of the conversations people have in performance reviews. That's a completely performative encounter: somebody's playing the role of boss, somebody's playing the role of employee. It's not the kind of conversation people would have when they're out having a beer with someone they're comfortable with.

Right. Yeah, it's incredibly tropey. It's true, especially in tech. If you were to go into a startup in San Francisco, you can identify the person who doesn't want you to talk to them: they're the one wearing the headphones. And, you know, it's certainly a stereotype, right?
And a lot of developers play into this, because there's some kind of identity forming there. Perhaps it's a personal identity, but there's also this organizational identity, the idea of having a ping pong table. Hopefully we're slowly moving past that, and I think that's something that is changing in the tech industry. We're moving a little bit past the, you know, oh, unlimited paid time off, when everybody knows that that's actually not true, and if you were really to take unlimited time off, you wouldn't work there anymore, right? There are these identities that we're building and maybe breaking, and there's kind of this post-startup culture emerging from that as well.

Yeah, probably, probably. I mean, again, I think you raise another good point about identity. Identity matters much more than a lot of conventional business and political thinking takes account of.

Yeah, and I also believe that the values discussion that you brought up earlier plays heavily into that, right?

Yeah, yeah.

And so we're trying to form our values, and perhaps it's because we have difficulty expressing our own values. That's something I've created this show to help with: evolving the listener's ability to express their own values. Because if you can't express your own values, then perhaps someone else can help you express them, right? We adopt the values of our neighbors, or we adopt the values of the company where we work. And this is how we get these kind of homogeneous groups of people who cluster together, because they're geographically close or connected in some other way, and they start to adopt each other's values. And this is very normal, isn't it?

Yeah, totally. I mean, we are social creatures, we human beings, so we are deeply influenced by who's around us when we decide how to behave. We look for cues from people in our midst.
And so, you know, this is where it gets complicated; we are complicated creatures. And again, a lot of times I think that the way we think about business, the way we think about organizations, doesn't account for that kind of complexity, doesn't account for the multi-dimensionality of human beings.

Yeah, absolutely. And this drives, no pun intended, back to some of your most well-known work on autonomy, mastery, and purpose. How tightly would you say those interact with values, with purpose, for example?

Yeah, well, if you look at the research on autonomy, mastery, and purpose, I actually think they're pretty deep and fairly universal; in some ways, I think they cut across certain kinds of values. So if you look at something like autonomy: autonomy doesn't mean, you know, screw the man, I'm going to go off on my own like a cowboy in the American West. It means basically self-direction, having some sovereignty over yourself. And that can be in the service of something collective, as it often is in East Asian cultures, or it can be in the service of something purely individualistic, as it often is here in the United States. But the idea that people want to be self-determined, that they want to have some sovereignty over themselves, I think that's universal, and it's perfectly compatible with a sense of collective desire or collective purpose. I think the same thing is true with something like mastery, which is our desire to get better at stuff; I think that cuts across values. And the same thing is true with purpose. People want to know why they're doing something, not just how to do it.
Now, everybody's purpose isn't necessarily going to align, but I think everybody does have an animating sense of purpose that helps drive what they do.

So thinking about that distinction, would you say these are actually attributes of human nature rather than values?

That's right. I think that autonomy, mastery, and purpose, our desire to get better at something that matters, to know why things are happening, and to have some sovereignty over what we do and how we do it, I think that is human nature. And the reason I say that is not because I'm an optimist, necessarily, but in part because I'm a father, because I've had kids and I've seen kids. I defy you to find me a two-year-old or a four-year-old who's not autonomous, self-directed, curious, wanting to learn and grow, and wanting to know why things are happening. I think that's our nature. Events and institutions can conspire to snuff out parts of that nature, but deep down, I think that is human nature. It's not all of human nature, but it is fundamentally part of human nature.

So what would be the motivation in snuffing that out? That's kind of the implicit conversation here, the second-order effect. What's required for autonomy is perhaps a deep level of trust, right? Is it a lack of trust? Is that why we want to shut this down?

That's part of it. And you've got it; the key word here is control. Human beings also have an impulse to control other people. We like to dial down uncertainty. We like to exercise our authority over people. If you look at human civilization, in different pockets of time and around the planet, it has in some ways always been these battles between control and autonomy. One group is trying to control another group.
The other group says, you can't control me, I'm autonomous, so I'm going to fight back. And there's another piece: if you think about something like autonomy, and even about mastery and purpose, they're not purely efficient, okay? If you're just trying to make the trains run on time, who cares whether people are getting better at anything; you just want them to execute, to get the trains running on time. At least in the short run, that's all you care about. That's another reason why some of these contingent rewards, what I call if-then rewards (if you do this, then you get that), have an effect in the short run, and the fact that they have an effect in the short run is one of the reasons why they endure. So there is this battle between humans wanting to control other people, and there was a time in the workforce when efficiency was by far the highest virtue. In a sense, you wanted to build organizations that went against the grain of human nature because it was more efficient. But now, I think, in a world where people are doing more creative, complex tasks, you want people to have a sense of self-direction, because they do those kinds of tasks better. You want people to learn and grow, because that's how they improve at those kinds of tasks. You want people to know why they're doing what they're doing, because they end up doing the task better. And so I think the interesting thing happening right now, if you look at 50 years of behavioral science about what really motivates people, is that in many cases our organizations were designed to go against the grain of human nature because that was more efficient. So, for instance, think about an assembly line. An assembly line, to my mind, is not consistent with what we know about human nature.
It basically says: we're going to suppress human nature, perhaps understandably, perhaps properly, in the name of this other thing that we value, which is efficiency and production. Now I think what's happening is that you can increasingly run organizations that go with the grain of human nature rather than against it, not for nice, touchy-feely reasons, but because that's a better way to run organizations.

Right, because it's actually more efficient for the creative types of tasks, and it's no longer a labor economy; it's more of a knowledge economy.

Right. The skills have gone from routine to non-routine. So, for instance, take an example like stuffing envelopes, okay? For stuffing envelopes, controlling mechanisms can be effective. If you want a lot of envelopes stuffed, pay people per envelope, and you'll get a lot more envelopes stuffed than you will if you pay them a flat rate. There's no question about it. You might get more envelopes stuffed, at least in the short term, if you have monitoring cameras to make sure everybody is stuffing their envelopes, or if you have people walking back and forth to make sure everyone is filling their envelope-stuffing quota. But when it comes to things that require judgment, discernment, and creativity, people don't do their best work under conditions of control. We're not creative when we're always being measured. We're not creative when somebody is monitoring us. We're less creative when we have to work within a scaffolding of contingent punishments and contingent rewards.

Yeah, and this is definitely true for development.
If you pay your developers per line of code, your code is going to get long.

Yeah, exactly. You won't get good code; you'll get long code.

Absolutely. And it's the same thing as the discussion on gamifying: the rat tails. I'm sure you've heard this. I believe it was London, or some city, and I'm probably getting the details wrong, but they paid a bounty per rat tail. What ended up happening is that people outside the city started farming rats, and they would bring the tails in after they farmed the rats. So yeah, the system will be gamified. And it's not necessarily because people are nefarious; there aren't rat farmers all over the tech industry, necessarily. But what you reward is what you're going to optimize for.

Right. Exactly.

Thank you so much for listening to today's episode of Developer Tea, the first part of my interview with Daniel Pink. Daniel was so gracious to come on the show, and I hope that you enjoyed this first part. I hope you will subscribe in whatever podcasting app you're using now, so you don't miss out on the second part of this interview. You may notice that we didn't have a sponsor for this episode, and this was on purpose. We actually have an ad-free version of Developer Tea, and if you want to listen to more episodes like this one, uninterrupted by ads, I encourage you to go and download Breaker. It's an app for iOS right now, and it's coming to Android soon. With Breaker, you can subscribe to the ad-free version of Developer Tea through Breaker's premium service, Upstream. This allows you to support the show directly with a $4.99 a month subscription, and in return you get to listen to all of the episodes of Developer Tea without any ads. Thank you so much for listening to today's episode, and until next time, enjoy your tea.