
AI-Era Employability and Job Security for Software Engineers - Mental Models for Finding a Competitive Advantage Without Selling Out

Published 2/18/2026

I've been delaying this episode for a long time because the topic is genuinely difficult and, for many of us, scary. AI threatens not just our livelihood, but our sense of self-worth as creators.

In this episode, I don't offer false guarantees about job security. Instead, I frame the problem through the lens of microeconomics and rational incentives to help you understand how to remain employable. We discuss why you must separate your ego from your current skill set and how to position yourself not as a competitor to AI, but as a force multiplier.

The Hard Truth: I explain why the "abstinence" approach—hoping the industry rejects AI or that it turns out to be a bubble—is a high-risk gamble that is unlikely to succeed.

Ego vs. Employability: We discuss the difficult mental shift required to disconnect your self-worth from the act of writing code manually, allowing you to adopt new tools without feeling like you are losing your identity.

The Microeconomics of Your Job: Understand the cold reality that a rational market only pays you if you generate more value than you cost; if AI can do the same task with less risk or cost, the market will choose AI.

The Non-Zero Sum Game: Learn why the economy isn't a fixed pie. The goal isn't just to survive, but to recognize that the combination of Human + AI can generate more total value than either can alone.

Multiplicative Value: I challenge you to stop thinking about linear skill acquisition and start thinking like a manager: how can you use AI to multiply your output and become indispensable?

Accepting Atrophy: We confront the reality that your core coding skills may degrade over time as you rely on AI, and why accepting this trade-off might be necessary for your career survival.

🙏 Today's Episode Is Brought to You by:

If you are building an application that needs real-time search results—especially if you are working with LLMs—you know that stale data is a problem. SerpApi is the live web search API for your application.

• Get real-time search results fast, directly in your app as JSON.

• Bridge the gap for LLMs that are locked to a training date.

• Trusted by companies like NVIDIA, Adobe, and Shopify. Get started with a free tier to build your full integration before you commit. Go to serpapi.com

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Community

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community today!

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review!

Transcript (Generated by OpenAI Whisper)

I've been putting off doing this episode for a long time for a very simple reason: it's one of the hardest episodes to cover. It is a difficult topic because it's scary. It is threatening to our livelihood. It's threatening to our sense of worth, our sense of self. And a lot of you are experiencing that fear. And the reason I've been delaying this episode for so long is because I don't have a great answer for you. I don't have the answer for how you can avoid losing your job because of AI. But I do hope that in this episode, we can frame this problem and give you some tools for ways to think about how your career can continue advancing, and potentially take advantage of the unique shape that the market is taking because of AI. My name is Jonathan Cutrell. My goal on the show is to help driven developers like you find clarity, perspective, and purpose in their careers. And man, this is a big topic. And it's changing rapidly. We're going to try to make this episode as evergreen as we can. So we're not going to talk about really specific things about what AI is good at and what it's bad at, for example, because we would probably end up being wrong shortly, since what AI is able to do is changing all the time. And if you've been following this topic, which, if you're listening to this podcast, you probably are following pretty closely, then you know that the state of the art is advancing rapidly and that our jobs are different today than they were even just a few months ago. Right? Agentic coding tools like Claude Code and Codex; even Gemini has a similar tool out now. These are tools that are making the process of writing code very different than it used to be. The process of designing software is different. The process of ideating, of writing tests, the entire workflow that I grew up learning as a software engineer has changed. Now, that was true 20 years ago as well.
I just want to talk about what's kind of different about this. 20 years ago, if you didn't continue advancing in your skills, if you didn't continue staying close to what was being released, you know, as a front-end engineer, if you didn't pay attention to the new working group specs that were emerging, then you would probably get left behind. At the very least, you wouldn't be on the cutting edge of what's possible, you know, again, in front-end engineering in the browser, right? If you didn't pay attention to new APIs being released for mobile applications, then other developers would have an edge on you. They would know how to do things that you couldn't do. And so it's always been true that this career is rapidly advancing, and perhaps much more so than some other careers. Not all other careers; there are certainly other careers where this is true. But if you contrast it to something like being a lawyer, the rate of change for what it means to practice law is very different than the rate of change for what it means to build software, especially in fast-moving markets, in startup land, et cetera, right? And so, of course, we've always dealt with that. We've always dealt with the goalposts, the target, moving, and us having to keep up. And that's a core part of this career. In fact, on this very podcast over 10 years ago, we talked about learning being a fundamental skill: that you, as a software engineer, have to develop the skill of learning. You can't learn enough, get a degree, and then coast for the rest of your career on just gaining experience. That's not going to be enough. You're going to have to intentionally step outside of your normal work, right? This isn't just, oh, I'm going to learn some things or pick up some tricks along the way. You're going to have to intentionally step outside of your normal work and learn new tools, learn new techniques, learn what's emerging, learn what's on the cutting edge.
That's how you stay relevant in your career. Of course, that has always been true. But now that has taken a different shape, because a lot of the things that you are learning through adopting agentic coding patterns, for example, are fundamentally different in their interaction modes, right? We're going to talk a little bit more about that. But I really want to focus in on developing a clear mental model, a clear understanding of this economic trade-off. Because that's really the underlying question here. When you talk about whether a person will have a job or not, it is worth thinking about, at least at the micro level, the microeconomics and the governing forces of what keeps somebody employed versus what would cause them to lose that job, right? So we're going to set aside discussions about performance, and we're going to set aside discussions about whether or not you are a good engineer. We're going to assume, and hopefully this is the case for most people listening, that you are a capable engineer, that you're continuing to learn, that that's not the problem or the consideration being made. All right. So I want to talk about two major aspects of what makes you employable and this problem that you're going to have to deal with, that you're probably going to encounter at some point in your career now: how you can remain relevant, how you can remain on that leading edge as AI continues to make an impact on the skills that matter to be a good engineer, to be in good standing, to continue growing in your career. We've been doing this career growth accelerator.
Really, this episode goes beyond the career growth accelerator and into this larger topic of how you maintain a competitive edge against a bunch of computers that theoretically could do the thing that you were previously employable for because it was a unique skill that you had, right? So this is the scary picture here: one day you're going to wake up, and the skills that you built over many years, many painful experiences probably, reading books, practicing, writing tons of lines of code, maybe attending classes, attending online courses, all of that suddenly becomes less valuable, right? And now those things become like commodities. And instead of paying a human to do those things, companies pay an AI company to churn out that stuff using tokens, right? Okay. So there are two sides to this discussion today that I want to dive into, and really, we can't get to everything here. Before we get into it, just an overall statement. Nobody, not me, not any other podcaster or YouTuber, no AI co-founder, no AI itself, nobody can tell you with complete certainty how you can keep your job, all right? That is part of what it means to operate in a market. You're never guaranteed that you're going to have a job. You're never guaranteed that your skills are going to be valuable. You're never guaranteed that somebody is going to be willing to give you money to do a thing. I can't offer you that certainty. Instead, what we want to do is, again, look at the economic forces at play and the economic decisions that a rational actor would make. And we can talk a little bit about how those decisions get made. But first, I want to discuss something that I think is very important for us to recognize, and it's the not-so-rational side of this argument.
When we look at this as a threat, it threatens a lot of our sense of self and very much begins to threaten our ego. So what do I mean by this? If you were to come to the table and say, my primary intent, my goal, is to remain employable, right? If that's 100% true, and we're going to take ego out of the equation, then you would be able to quickly discard your connection, your attachment, to the skills that previously made you employable. In other words, all of the time that you spent learning how to code, you would be able to discount immediately, right? If you could totally detach from your ego, if you could avoid a sense of defensiveness over your own skill set, if your only goal, if all you were optimizing for, was to remain employable, then a lot of the objections to going all in on AI fall away, right? Because there is a part of this discussion where software engineers may feel hesitant in pushing a lot of their time and effort towards learning and taking advantage of these new tools. If you can avoid the ego complexity there, and instead say, I'm going to optimize for this goal, then the choice to pick up new skills becomes a lot simpler. All right. So in other words, some of what we choose is to protect our sense of self. Some of the pain or the fear that we feel is that we're losing something that we enjoy, that we care about, that we wanted to do, that we imagined. We're maybe even grieving, you know, the fact that we thought we would be able to continue building our career on these same skills, and now we're having to pivot, right? There are a lot of reasons why our ego might be captured in this. There's also potential for your values to be wrapped up in this discussion.
If you have ethical concerns, for example, or if you have concerns about the financial aspect of what this means for a bunch of other engineers. Let's say that our hypothesis is that a ton of engineers are going to lose their jobs, and therefore, out of a sense of personal morals or your personal ethical framework, you feel like you have to abstain from participating in something that furthers that, right? So there certainly are potential objections that could be made from a values perspective. So when we come to this discussion, it makes sense to be clear-headed about what you want. If you want to maintain your employability, what I want you to do, and this is the most challenging thing I'm going to ask you to do in this episode, is to try to move towards a more rational expectation of what the industry will do. Okay. And we're going to talk about that. We're going to talk about the economics in the second half of the episode to help you gain a picture, or at least a model of thinking, that will help you predict what the industry will do. But it's very unlikely that the whole industry is going to say, you know what? Never mind. These tools have proven to be useful for a bunch of things, but we think that our ego and our identity as engineers, and all of this time that we've spent building these skills, we think protecting that is more worth it. And so, even though this is a very useful set of tools (and by useful, I mean in the utilitarian sense), we're going to shut it all down. Right. And setting aside the feasibility of that being nearly impossible.
At this point, because of how much research there is and how diffuse this technology is, it's not concentrated in one company that has proprietary control over it. Setting aside that, let's say that all of humanity agreed that we're going to treat this similarly to nuclear weapons or something, and we're going to have a bunch of shared policy that makes it illegal to use. If that were the case, then perhaps we could maintain a position that says, I'm going to abstain from this because I don't think it's useful. There's another potential route where you could imagine abstinence from using AI succeeding: if you believe that there is a massive level of hype. In other words, that all of this is a big bubble, that it isn't as useful as people are saying it is, that it turns out to be a bunch of smoke and mirrors, or that there's some kind of revelation that will happen at some point where we say, oh, what were we thinking? This isn't even that valuable. We're going to go back to our old way of doing things, or at least we're going to scale back significantly how much we expected this to grow. We're no longer going to expect it to be able to do these things that we're trying to push it to do. All right. So those are the two coexisting pathways. If you wanted to continue succeeding in your career and also abstain from adopting AI, those two pathways are the only ones I can really imagine existing. One being the likely false belief that all of society is going to say, you know what? Never mind, we're going to move away from this. And the second path would be that, as it turns out, it's not as useful, even though society doesn't reject it.
The market ends up rejecting it because it turns out to not be as valuable as we thought it would be. All right. Both of those seem incredibly unlikely to me. And so I think it's important for you, as a software engineer and as a human being, to determine where you stand. If it is true that the industry will continue adopting AI, if it is true that this is only going to ramp up (at the very least, it's going to stay where it is, but it's likely that it's going to ramp up), what is your personal position on whether that is an acceptable skill set for you to invest in? Are you willing to take the steps that are necessary in order to maintain employability, if that is the future of the industry? That is the critical question for you as an engineer. If you can't answer that question, then turn this episode off and spend some time journaling, go on a nature walk, do whatever it is. I say that partly in jest, but truly: get in touch with your inner self, get in touch with your values. Try to imagine a future where this is a part of your skill set, a part of your toolkit, where you've adopted it, where you're kind of on board, right? Because it's very unlikely that an abstinent position is going to succeed for a very long time. Right now, I'm being intentionally vague and non-prescriptive about what kinds of tools you'd be using, to what level, for what kind of purpose, because that is ever-evolving, and we're going to continue seeing different use cases, different patterns of use. But if you want to succeed in your career moving forward, you have to make a decision, right? If this industry continues to adopt AI, which I believe is very likely, is that an acceptable future for you? Can you be on board with that or not? And if not, it's worth confronting that reality, right?
It's worth confronting whether or not you personally can move in that direction. I told you this was a hard episode. This is probably one of the hardest episodes I've ever done of the show, because it is such a nuanced topic. It involves so much of our internal process. It involves so much about our ego and about our values, and the macro- and microeconomics of these things. But hopefully, with this part out of the way, we can talk about the economics, especially the microeconomics, of how you can maintain employability, right after we talk about today's sponsor. Today's episode is sponsored by SerpApi. If you're building an application that needs real-time search data, whether that's an AI agent or an SEO tool, a price tracker, anything else that needs to know what's happening on the web right now, SerpApi is the web search API that handles it for you. You make an API call and you get back clean JSON. In fact, you can get back your own selected JSON: you don't have to get all the fields back; you can limit the fields that you get. They deal with things like proxies, CAPTCHAs, all the scraping that you would otherwise have to do. You don't have to think about that. You can just use SerpApi. They support dozens of search engines and platforms. They're fast, and they've been doing it long enough that companies like NVIDIA, Adobe, and Shopify rely on SerpApi already. There's a free tier to get started, and you can try it before you commit to anything. And it's enough that you can actually build something real to test it out. Go and check it out. Head over to serpapi.com. That's S-E-R-P-A-P-I dot com. Thanks again to SerpApi for sponsoring today's episode of Developer Tea. Let's talk about one of my favorite subjects as it relates to AI: economics. We're not going to talk about the macroeconomics, mostly because I'm not really qualified to talk about that.
Macroeconomics, for this particular discussion, would be largely focused on questions like: if you were to have everyone in the industry replaced, what would that do to things like very large budgets, or the supply and demand of much larger-scale things? Would it collapse entire economies? And that kind of thing. Those are worthwhile discussions, but again, I'm not especially qualified to have them, and I think there's too much noise for me to be able to speak to that with any credibility. So I'm going to avoid the macroeconomic discussion, not because I don't think it's important, but because I think you can probably get better insight elsewhere than this podcast. Instead, the microeconomics I do think are worth talking about. Okay. If you're thinking about the microeconomics of any employability discussion, the first thing you should be thinking about is: why would a company choose to give me resources? The fundamental reality of any capitalistic endeavor is that they don't want to give you money. If you're working for a company, when I say "want," just to be clear, I'm not talking about the humans involved. I'm talking about the incentive: the business incentive is to maximize profit. Right? So if the company can theoretically eliminate jobs, the incentive structure is not thinking about that as eliminating people's employment. It's thinking about that as a choice to improve efficiency. Right? They (again, being just the incentive system) are designed to maximize profits. It's an efficiency move.
So then the question you should be asking yourself is: why do they give anybody money at all? And hopefully, you know, this is a very basic economics perspective, to be clear. I don't have a degree in this subject, and I haven't done a lot of study outside of my own personal study on this. But the only reason, in a completely rational system, that somebody who is incentivized not to give you money because of profits would give you money is because their choice to give you money is in exchange for something that you do that enables them to gain more profit than they otherwise would have. In other words, their spend on your salary is returning them more net margin after the fact than if they hadn't paid you in the first place. Right? In other words, you're making the company more money than you're costing them. This is the basics of microeconomics. Now, of course, it can get complicated with things like cost centers. In other words, the company is going to pay you money even though you're not directly making a profit, because you're enabling a profit center somewhere else. Right? A very simple example here is software engineering: most of our work is not directly enabling profit. It is indirect, and the sales cycle is actually where we see that profit realized. All right. But setting those things aside, we want to think more abstractly about this problem. Instead of trying to nitpick, the abstract idea here is that the incentive that a given actor has in a capitalistic system (in other words, one trying to maximize profits by selling something in a market) is to not give you any money at all, unless giving you money makes them more money. Right? Now again, this is setting aside things like values and mission statements that organizations have.
It assumes that those things are met in either case, so that the decision factors come down to: how do we meet those mission statements and make the most profit? Okay. So if that's the case, then this decision about AI would be the same. In theory, the decision would be: can we pay the same amount of money or less? Can we reduce our risk? Can we somehow make it more efficient to use artificial intelligence to do what this person otherwise would be doing? Right? So if you were looking at the absolute micro scale, there's a distinct decision about which thing is better at doing some particular job. If the human is better, are they better enough to justify the differential in their cost? Again, this is going to assume that the cost to pay a human is going to be more than the cost to offload to AI. These are all assumptions that have to be made in order for this kind of job-loss scenario to play out. If it turns out that organizations are incurring an enormous amount of risk despite a low upfront cost, what would that look like? It would look like AI delivering code that turns out, one out of a hundred times, to have catastrophically bad data-leak-type bugs, and one out of a hundred companies are now leaking data and facing massive lawsuits. Then the upfront cost of the AI integration or investment is relatively low, but the long-term cost, because of the risk curve here, is very high, right? You would do some kind of utility function: find out the cost of that major data leak, multiply it by the one-in-a-hundred probability, and that's roughly your expected cost. What is the risk that we take on by allowing AI to take on the responsibilities that this human once had? All right.
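That probability-weighted reasoning can be sketched as a toy expected-cost calculation. Every number below is invented purely for illustration; none of these figures come from the episode or from any real study:

```python
# Toy expected-cost comparison between keeping a human engineer and
# offloading the same work to AI. All figures are made-up assumptions.

def expected_annual_cost(upfront_cost, failure_probability, failure_cost):
    """Upfront spend plus the probability-weighted cost of a catastrophic failure."""
    return upfront_cost + failure_probability * failure_cost

# Hypothetical inputs: the AI option is cheap upfront but carries a
# 1-in-100 chance of a catastrophic data-leak bug; the human option
# costs more upfront but carries a much smaller chance.
human = expected_annual_cost(upfront_cost=200_000,
                             failure_probability=0.001,
                             failure_cost=50_000_000)
ai = expected_annual_cost(upfront_cost=30_000,
                          failure_probability=0.01,
                          failure_cost=50_000_000)

print(f"human: ${human:,.0f}")  # human: $250,000
print(f"ai:    ${ai:,.0f}")     # ai:    $530,000
```

With these particular made-up numbers, the "cheap" option turns out to be the expensive one once risk is priced in, which is the episode's point: the upfront cost is only one term in the rational calculation.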
So when we think about the economics of this, we think about value generation. And it's important to recognize that it's not always an either-or scenario. This is because one of the fundamental features of the AI integration is that it's not zero-sum. In other words, there's not a fixed amount of value that needs to be generated in a capitalistic market. The value continues to grow. Again, this is all theoretical, but the value can continue to grow. In other words, if a company could get more value out of, let's say, a human plus an AI, they may choose to do so. There's no rule that says that they can only generate a certain amount of value and then they're capped. This is actually the fundamental governing factor for why a capitalistic market continues to grow over the long run, right? It's because we continue to build new things, and there's not really a natural limit per se on how many things we can build. Well, there is a natural limit, but we haven't hit it. Okay. So if we have this system set up so that it's not zero-sum, then if you were to consider one AI and one human, there are really four total options. There's zero: we're going to quit, we're no longer going to generate anything, or this particular department is no longer needed. That's an option. Then there's just the human, just the AI, and then human plus AI. A rational capitalistic decision-making algorithm would look at these and try to determine which of them is going to generate the most return on investment over the long run. And so it's worthwhile for you to try to identify, right?
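Those four options can be laid out as a toy return-on-investment comparison. Every value and cost figure below is invented; the point is the shape of the decision, not the numbers:

```python
# Toy comparison of the four options: shut it down, human only,
# AI only, or human plus AI. All numbers are invented for illustration.

options = {
    "nothing":       {"value": 0,       "cost": 0},
    "human_only":    {"value": 500_000, "cost": 200_000},
    "ai_only":       {"value": 400_000, "cost": 50_000},
    # Non-zero-sum: the combination can generate more total value
    # than either input could alone.
    "human_plus_ai": {"value": 900_000, "cost": 230_000},
}

def net_return(option):
    """Long-run value generated minus what it costs to generate it."""
    return option["value"] - option["cost"]

best = max(options, key=lambda name: net_return(options[name]))
print(best)  # human_plus_ai
```

Notice that with these numbers, AI alone would beat the human alone, yet human plus AI beats both. That's the non-zero-sum argument: your positioning goal is to make the fourth row the rational choice.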
So if this is the economic picture, how can you identify ways to position yourself such that the total value generated with you, the human, involved is higher for the business? And so there are some things that you can think about here. And this starts to get into some skill-stacking or skill-portfolio thoughts. For example: what are you good at? What do you enjoy doing? What are you not so good at? Right? These are things that hopefully you already have a pretty good idea of. And then cross-reference those with what AI is good at. Again, we're going to be very careful not to list that here, because that could change as soon as next month, right? So it's very likely that humans will always have some kind of thing that we are better at than AI. I say very likely because I'm not sure, but I believe that we probably will always have an edge for something. Okay. But perhaps the most under-discussed piece is not us versus AI, but: what are we better at with AI? This is multiplicative value rather than competing linear value. You want to start thinking about this, and again, this assumes that you've already set aside the discussion in the first half of this episode, that you are willing to change your skills, that you're willing to set aside or retire skills that are no longer competitive. Right? Okay. So what are you going to be better at if you use AI to make you better? If you combine your skills with the things that AI is better at than you are, what is the multiplicative value there? Because now you're no longer competing with AI; you're acting as a multiplier of the value of that particular thing. If you're a manager, you know this is true.
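One way to see the difference between the linear and multiplicative framings is a quick back-of-the-envelope sketch. The output units and the multiplier below are entirely made up; only the structure of the two framings matters:

```python
# Back-of-the-envelope: linear framing vs. multiplicative framing.
# Units and numbers are arbitrary assumptions for illustration.

human_output = 10   # hypothetical output of the human working alone
ai_output = 40      # hypothetical output of the AI working alone

# Linear framing: the human competes alongside AI, and each
# contributes independently to the total.
linear_total = human_output + ai_output

# Multiplicative framing: the human directs, reviews, and amplifies
# the AI, multiplying its output rather than adding to it.
leverage = 3        # hypothetical multiplier from human direction
multiplicative_total = leverage * ai_output

print(linear_total)          # 50
print(multiplicative_total)  # 120
```

In the linear framing, the human's 10 units are easy to cut; in the multiplicative framing, removing the human costs the business the gap between 120 and 40.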
If you're a good manager, especially, because you're not a profit center, you are a multiplier of value for your direct reports, right? So if you were to think about how you can make your direct reports indispensable to the organization, and then shift that thinking, apply the same mental model, to how you can make AI a more valuable thing to the organization, now you're becoming a force multiplier, and you're becoming more indispensable, because the option of only AI is no longer going to be as valuable to the organization, right? If you can make multiplicative value happen, then you begin to create a little bit more indispensability in your role. It's also important to recognize that this is all changing, and we've said it a hundred times already, but as you're doing the skill-portfolio evaluation, think about where this is going, right? What will you become better at or worse at over time? It makes sense to pay attention to the studies on this, because it is changing so quickly. One of the things that studies are looking at right now is whether your coding skills will degrade over time if you start using AI more often for coding. There are some signals saying that your coding skills may atrophy over time. Is that valuable to you? Is that part of that ego thing that we were talking about earlier? Do you want to still continue coding by hitting keystrokes? That's up to you to decide. It's yet to be determined whether that particular skill is going to be critical for you to be able to be an engineer. It's shocking that we're even saying that. But over time, we will learn how much those coding skills maintain their critical hold. And it's very possible that, at the very least, they'll become less important in the future. Again, if you're a manager, you already know this.
Think about the skills that made you successful in your early career: you have probably, one, atrophied in those skills, and two, they're not as critical anymore for your current role, even though the organization has found you valuable enough to maintain your employment. Why? Because your total effect, your economic impact on the organization (in a rational world, at least), outweighs whatever it would have been had you just continued using those skills. So if we are willing to be flexible with what kinds of skills we're willing to adopt, if we are willing to set aside our attachment, if we are willing to move forward with AI despite any of our misgivings or concerns on the ethical or moral fronts, then I do believe (I'm an optimist in this particular way) that we can have long and fulfilling careers and that our potential with AI can continue to grow. I believe that you still have the opportunity to walk this career growth pathway. And I hope that you will take the time to really dig in with the concepts here. Of course, there are other sources that you should go and listen to. This was from a specific perspective, kind of an American capitalistic perspective, in terms of the economic forces and the incentives, and there are other forces and incentives out there that are worth looking at. But I'm an optimist. I believe that you can continue thinking about your career not as bulletproof (nobody should ever think about it as bulletproof), but as employable. I do think the vast majority of software engineers can maintain their jobs, maintain their employability, as long as they can look at this from the perspective of incentives and determine how to create multiplicative value in the long run. Thanks so much for listening to today's episode of Developer Tea. Thank you again to today's sponsor, SerpApi. It is your solution for a web search API.
You just send it a query, and it'll send you back JSON. You don't have to worry about parsing anything. You don't have to worry about scraping or CAPTCHAs. All that stuff is taken care of across dozens of different platforms. So if you're interested, go and check it out at serpapi.com. That's S-E-R-P-A-P-I dot com. Thank you again to SerpApi for sponsoring today's episode. If you enjoyed this episode, there are so many places you can find us now. We are active on YouTube, and we have the podcast on pretty much any podcasting provider. That includes Spotify, and it includes Apple Podcasts. And of course, you can always email me at developertea at gmail.com if you have questions. And finally, please leave reviews and subscribe in all these places, or just choose one, whatever your preferred format is. Subscribing is the number one way to help us continue doing this show. Thank you so much for listening. And until next time, enjoy your tea.