Part Two - Bryan McCann, CTO of You.com, on AI, Engineering, Art, and Everything In Between
Published 11/11/2025
Hey everyone, welcome to today's episode of Developer Tea. This is the second part of my interview with Bryan McCann, the CTO at you.com. If you haven't listened to Part One, I'd encourage you to go back, as it provides crucial context for our continued discussion. In this episode, we dive into how you can think about relating to and integrating the massive changes that AI is bringing to your job, whether you are a software engineer, manager, director, or product professional. Bryan and I discuss his interests beyond research, including art and organizational design.
- Explore the two primary paths for developers in the long run: specializing as managers of AI tools (like a product manager with engineering insight) or striving to be better than AI at building better versions of AI itself (the "neurosurgeon" type).
- Understand why refining your intuitions about what should be built becomes increasingly crucial as automation makes execution easier.
- Examine how conceptual biases often become the bottleneck when interacting with powerful AI tools, such as focusing on very narrow tasks for a broad tool.
- Learn how to approach AI failures: treat a failed output as an opportunity to dig in and figure out why, perhaps by asking the AI to write a better prompt or identifying a fundamental missing capability that could become a great startup idea.
- Conceptualize AI as the earliest versions of magic, where the manipulation of symbols (like embeddings) allows us to extend our influence into the world in a flexible and powerful way.
- Discover principles of organizational design by studying how neural networks learn, focusing on strong information flow, skip connections, and aligning with the objective.
- Consider the idea that the next phase of human development might involve emulating AI’s learning mechanisms (rather than expecting AI to become more human-like) to unlock the next phase of humanity and continue our search for meaning.
- Hear Bryan’s final piece of advice for listeners: focus on learning and working on things you are passionate about that will have the highest possible impact.
📮 Ask a Question
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
📮 Join the Discord
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!
🧡 Leave a Review
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
Hey everyone, welcome to today's episode of Developer Tea. This is the second part of my interview with Bryan McCann, the CTO at You.com. If you haven't listened to the first episode, I'd encourage you to go back and listen to that; there may be parts of this that make a little more sense based on that one. But in this episode, we talk quite a bit, at kind of a broader level, about how you can think about integrating and relating to all of the changes that are happening to your job as a result of AI. And we talk about a handful of other things as well. Bryan has been on the ground floor in the research of AI, but he is not just interested in that, and we talk about some of his other interests in this episode. By the way, we mentioned this in the last episode: You.com is hiring. They didn't ask me to say that, but I noticed recently, I think it was on LinkedIn or something, that they are indeed hiring. So if you enjoy this discussion with Bryan, if you like the philosophy, if you think you would work well on a team led by somebody like Bryan, go and apply. Thanks so much for listening to today's episode. Let's get straight into our interview with Bryan McCann.

So you mentioned that you're interested in both art and organizations. And these are two areas that I think headline after headline would support the idea that AI is touching these things. It's changing how we think about organization, and it's changing how we think about art. If you had to kind of project what work looks like, you know, maybe next year, maybe in five years, ten years — and it seems like it's changing rapidly — where do you think we're headed? Maybe that's the right question.
I think much of what people are saying is likely true. It is getting easier and easier to build things — things that have been built before, at least. And this might sound almost paradoxical, but I do think in some sense, if you're very application driven, the process of executing is going to get easier and easier and easier; we will be able to automate more and more things away. And I think for developers specifically, there are kind of two directions in the long run. One, you start thinking of yourself primarily as what we might describe now as a manager of these tools, these AIs — maybe more like a product manager with a lot of engineering insight and experience. You can think of yourself as application driven, and you're just going to accelerate your ability to build what you like more and more. So in that sense, it also becomes more and more important to refine your intuitions about what should be built. You can think a lot about those things; you can spend your time refining how you ask those questions and how you discover what to build. And AI can help with those things too — it's very good at that kind of research. Or you go in the second direction, which is you keep trying to be better than AI at the building itself. And eventually, most likely, you're trying to be better at making AI better. You know, if we try to project forward and AI is able to build essentially most things except AI itself, now we're having AI build itself. People who can still build AI better than AI can build itself are going to be almost like neurosurgeons in human society. It's like, wow, we don't really know what's going on in the brain sometimes, but if you can go in and make an improvement, even if it's high risk, that is extremely valuable. You're paid extremely well for it.
It's a very specialized skill, but you can kind of be the person making all the tools better, or you can be the one on the other side using the tools — which are all getting better — to actually do the things in the world. I don't think either one is inherently better. Right now, you see a kind of widening disparity between people who continue to do things normally and people on either end of that spectrum. You see the top AI researchers — there's just this insane talent war for them, because they're those neurosurgeon types. And then the people who nail the right application, and you see it: that was the fastest growing company ever. Oh, now there's another fastest growing company ever. Now there's another fastest growing company ever. So I do think you can go in the direction of one man, one woman, one person, one company, and execute on all of your ideas almost instantly and automate all the rest, or you can go in the direction of making all those things better in very special ways. Those two will also have overlap, but yeah, that's how I think about developing, at least right now.

Yeah, that's very interesting. In preparation for our discussion, I was trying to conceptualize some of the problems that I've observed myself, or heard about from others, and you're hitting on the making-it-better side of this equation. I think what I've seen most often is that for folks who are trying to adopt this tooling, the road is unclear. A lot of it feels like trailblazing, in a hundred different ways and in a hundred different places.
And I was thinking, just before we started our discussion, about how all of our models of thinking — our biases and various cognitive distortions that we bring to the table — can change the way these tools work for us, because we bring those into our discussions. So as an example, let's say you're using Cursor or Claude Code or something for the first time, and you approach it with the idea that the thing this tool does is, fill in the blank. Let's say it writes code for you. You give it the problem, it writes code for you. You create this model in your mind of what the tool is supposed to do, and when it fails to do it the way that you imagined it could, you throw your hands up and kind of walk away and say, this is a lost cause, this thing is not as smart as everybody says it is. But in fact, what you've done is create this very narrow window for this otherwise very broadly powerful tool to succeed or fail in. And I wonder how much of our ability to conceptualize these tools becomes the bottleneck for the average organization to even understand what they're capable of. How can we think about what this tool is supposed to do, or the different applications I can bring it to? One other thing that came to mind: there's this famous story about when cars were first introduced into the market — and this is almost overdone at this point because it's so similar to the anthropomorphizing of AI — but the car manufacturers would put fake horses on the front of them.
It was a way of encouraging adoption and allowing people to create the mental model necessary, the padding necessary, to grab onto this new idea. And I wonder if we're missing some of that because of how rapidly this stuff is developing. It's hard to grasp, and therefore it feels like it's passing us by. I imagine there's an engineer listening to this right now who resonates with what I'm saying — probably a lot of them do. It's hard to grasp, and a lot of the time it's not doing what I want it to do, and therefore I think it's hype; I don't think it's as good as they think it is.

Yeah, no, I think you're right. This has actually been one of the most fundamental problems we've had with AI for a long time — even before, when it wasn't good. It was actually very, very bad. You know, when I was doing my research, one of the big ideas that I was pushing was that we should use language to interface with these neural networks at all. It used to be more that you designed a neural network the way you would design a machine that takes an input and has an output, and you don't typically get to use language to tell that machine what you want from that output. That was a very foreign idea. Now we've all probably tried it with something like ChatGPT or Claude, but that entire idea was odd at some point. And your insight was exactly the way I thought about it too. We need to actually remove our conceptual biases. Even in thinking about things like machine translation as a task, or classifying sentences as positive or negative — oh, that's a task. Answering questions based off of a Wikipedia paragraph — oh, that's a task. This whole abstraction of tasks at some point appeared, to me, to be holding us back.
It was like, why are we thinking about designing neural networks for these tasks? You would never think of them as tasks for humans. What we should be doing is teaching this neural network how to use language, so then we can describe in language what to do with language. And that's just so much more powerful. So that was a huge conceptual shift that we had to get through — very contentious at the time. And that continues to be, I think, one of the main limiting factors for folks interacting with AI. It's not just, oh, I'm going to put in some words and I'm going to get an answer. It's still new enough, right? You even see this in mature technologies like Google: the way you type in your keywords, it will mirror things back to you. It will mirror the quality of your own input back to you. So I think it's really crucial right now, every time you get an output that doesn't seem right, or good, or high quality enough, to try to automate that entire process as well. Ask some form of AI why. Ask it to make it better. Ask it to help you write a better prompt to get a better answer. Try to take everything that doesn't work and learn from it. Basically, what I coach my engineers on is: every time you're about to do something, first see if you can get AI to do it. If it does, great — now you've just expanded the number of things you can offload to AI. And if AI fails, then dig in and try to figure out why. At this stage, it may be just as valuable for you to go figure out why AI is not able to do this thing as it is for you to do it. Because if you go through that process and you find, like you're saying, that your conceptual biases are somehow holding you back — well, then again, you're back in the first camp.
Now you've solved it with AI after that. Or maybe you're getting into something really fundamentally missing in AI, in which case now you have a great startup idea. Maybe you can work on that.

Yeah. And I think there's this fundamental belief — and it sounds like you hold this belief, but correct me if you think this is wrong — the fundamental belief is that almost every problem like these, the ones that fit in this category, should be solvable by these models, or something like these models, or some kind of structure that sits on top of them. Does that sound true?

Yeah — maybe to slightly clarify: I think we both overestimate and underestimate the power of language still. It's something that we use in everyday life as if it's not special. At the same time, we think it is something that makes us very special, even though it might not make us special. But regardless, language — and the manipulation of symbols more broadly speaking, beyond the way we usually think about it as English or Spanish or something like that — still does have these almost magical effects, in that it can change the world. It does change the bits and bytes and everything in the world. So think about using these language models, and all of this new AI, as the earliest versions of magic. We're just able to extend our reach with our own language out into influencing the world in this very flexible way. It is still new, but that's an incredibly valuable skill, wherever you can learn a new spell that actually works, or dig into the alchemy of what's not working.
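Bryan's coaching loop — try AI first; on failure, ask it why and for a better prompt — can be sketched in a few lines. Everything here is illustrative: `ask_ai`, `attempt_with_ai`, and the stubbed response are hypothetical stand-ins, not any real API.

```python
# Sketch of "try AI first; on failure, ask it to improve the prompt".
# `ask_ai` is a hypothetical stand-in for whatever model call you actually use.

def ask_ai(prompt: str) -> str:
    # Placeholder: a real implementation would call your model of choice.
    return f"answer to: {prompt}"

def attempt_with_ai(task: str, is_good_enough, max_rounds: int = 3) -> str:
    """Try the AI on a task; on each failure, ask it for a better prompt."""
    prompt = task
    answer = ask_ai(prompt)
    for _ in range(max_rounds):
        if is_good_enough(answer):
            return answer
        # Treat the failure as data: ask the AI to rewrite the prompt itself.
        prompt = ask_ai(
            f"The prompt {prompt!r} produced a weak answer. "
            "Rewrite the prompt so the answer improves."
        )
        answer = ask_ai(prompt)
    return answer  # still weak after max_rounds: time to dig in yourself
```

If the loop bottoms out, that's exactly the "dig in and figure out why" moment Bryan describes — and possibly the seed of a product idea.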
It's not an exact science yet, but it's a powerful craft.

Yeah — and in my head, I'm just thinking of embeddings and how powerful embeddings are just on their own. This idea of, you know, 1,400 dimensions, or however large these dimensional arrays are now. It's almost like categorizing everything, right? Or finding the space that everything lives in. And anybody listening to this, I encourage you: go look up what we're talking about with embeddings, it'll make a lot more sense. But there's something in that, that I think we're barely touching — like you said, we're barely scratching the surface. Because that underlying belief I asserted earlier, that these problems should be solvable — I think you could just proxy that to, well, how powerful is language? The more granular you get in modeling language, the more power you get. And even if you were to take language out of this, it's just the construction of symbology, because these LLMs don't really learn one language versus another — they don't necessarily need to, right? It's the construction of symbols together, and imbuing that with meaning. That, to me, is a unique insight, and I guess your earliest work is probably centered on that simple assertion. Now this goes back to perhaps the philosophical angle here — you tell me if this is true — but the interest in language seems like the spark that led to this.

Yeah, 100%. That was true for me individually, and I think it also just maps onto things quite well.
Thinking of it as language is convenient, but it is more rigorously true to think about it as sequences of symbols. And in this world, as I mentioned earlier, sequences of amino acids kind of fit that description too, right? So there's some meaningful language there, and mathematics itself is a language. And one of the ideas that really got me to buy into this — you mentioned embeddings and vector spaces. I think these are incredibly valuable tools for us; that's indisputable. Very valuable in AI, but also in many other areas of engineering. Packing things into small spaces is a very important tool for us conceptually too. We can categorize, like you said, but then we also have to be wary of our categorizations, right? And allow them to change sometimes when we're wrong — have this updated belief system. And yeah, it's been really interesting to see with this project how far language can go. This whole system of just saying: if I am ever confronted with a problem, something I wanted to do, whatever we wanted — if I could give you the right symbol to come next, and it was always the right one, then what could you do with that? What isn't possible? If I was like, okay, how would we build a nuclear reactor? How do I build new space technology? If I could actually give you the right next symbol, always, it does seem almost infinitely powerful in that sense — like the most powerful thing we could do. We can't quite get there; we're always approximating it. But I think there's still something in my mind that's very drawn to that, and it feels very special. So I do think these things are going to go a lot further. We've already seen how far they've come.

Yeah, I agree.
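The embedding intuition in this exchange — everything gets a vector, and nearby vectors stand in for related meanings — can be made concrete with a toy example. The three-dimensional vectors below are made up for illustration (real embedding models produce hundreds or thousands of dimensions); the similarity measure, cosine similarity, is the standard one.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors; a real model would assign these automatically,
# in far more dimensions.
dog    = [0.90, 0.80, 0.10]
puppy  = [0.85, 0.75, 0.20]
rocket = [0.10, 0.20, 0.95]

# "dog" and "puppy" point in nearly the same direction; "rocket" does not.
assert cosine_similarity(dog, puppy) > cosine_similarity(dog, rocket)
```

The "space that everything lives in" is just this: related concepts end up as nearby directions, so geometry does the categorizing.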
And I think this actually ends up turning into one of my often used phrases at work as an engineering manager — I can't remember who first coined it — the map is not the territory. Well, it seems that these LLMs are becoming better and better maps, and you can kind of zoom in on the map a little bit with these things. It is a pretty amazing leap forward, for sure. I had a similar conversation about neural networks, and I'm curious if you've ever been able to put language to this — and we can probably close out with this. The idea that you have multiple layers of these nodes — and I'm talking about a very simple neural network that anybody could conceptualize — and that you go to, let's say, a second or a third layer. With that first layer, it's fairly easy to assign a human language to what those particular nodes might represent, trait wise. Right? It makes sense that if you were able to identify this particular trait in an image — you know, it's a square — you find some way of modeling that for that first trait. But as you begin to go to layer two and layer three, these are — I'm not sure what the technical term is — things like hyper traits, or combined traits. It's much harder to assign an explicit label to those. But somehow, like you were saying earlier, there is meaning in there that we don't really even understand. It's difficult to understand. Have you dived into any of this, with things beyond just NLP — things like neural nets more broadly? I'm curious what your thoughts are on that.

Yeah. You mentioned these states, and the language that we always use to describe them: these hidden layers, these hidden states.
Or the related language, like latent variables — these things that you can't necessarily access, but they're there. I think for me, when I look at those: you can try to design all sorts of tools to introspect on those models, to dig into them and try to isolate which functional pathways map to things that are understandable for us. It's possible. Sometimes you can find individual neurons in layer three that only fire when the word dog is in the sentence. You're like, huh, okay, weird. Sometimes it feels like a curiosity; sometimes it feels like an insight. The way that I mostly think about these things is that that's where all the concepts and all the rules get really fuzzy. So the process starts out very accessible to us, right — working with our categorizations and our concepts and our biases — and then somewhere in between, you almost have to break that stuff down, digest it, and then build it back up on top of that. When we look at it, it's not going to make sense to us sometimes, until we develop our language further and our categorizations get better. But that's where you throw things in and it's all mixing together, and I think some of our biases can be stripped out there. And I think the most important principles in that part of the process are to have really, really good information flow.
So — maybe this is one for the engineering managers and directors — I like to think of designing organizations the way that I would design neural networks. You need really good information flow. Even if two nodes in the network look like they're far apart when you draw them on paper, you still need strong connections between them. If you add too many layers, then signal is hard to get all the way from the top layers to the bottom layers, or from the bottom layers to the top. Sometimes we use something called skip connections to skip over some of the layers and connect the ones at the top to the ones at the bottom, so information can flow more freely. Sometimes we use this idea of attention, so nodes pay attention to other nodes in different ways, and that attention flow is really important. And ultimately, the objective that the neural network is trained on is extremely important. Neural networks, as you train them, converge to a solution. At the start, they're mostly random numbers, and then, based on the objective and the data that you give them, they start to solidify — or codify — the principles from that data that would allow them to succeed on the objective, at least as much as possible. And the way that happens is layer by layer: the bottom layers converge first and stop changing, then the ones above that, and the ones above that. The ones closest to the objective are usually changing last, and they're also the easiest to change — the most likely to change if the objective changes.
So even if you aren't an expert in neural networks, maybe look at some of the research about how they learn. I think that actually gives us really good principles for ourselves individually and for how we can learn. Our core beliefs are much harder to change, like the lower levels of the neural network that are closer to those embeddings. The things in the middle are easier to change, and the things closest to my current objective are maybe the easiest to change. And that's true for organizations too. So I really think there's something we can learn from how AI learns that's probably applicable to ourselves and our organizations. It would be very neat to try that out, and see if you can use it to help your organization learn through this entire process, while all this AI stuff is happening.

Yeah. And I think that's really interesting, because the insight there for me is that we have thought about these models as trying to emulate us. At least that's the common perception: oh, a neural net is so powerful because we figured out some way that we imagine the brain works — these neurons work in a similar way, you have activation and all that stuff, right? But what you're saying is, actually, we could treat a neural net as something we want to emulate at a more meta level. Of course, we can't decide how our neurons activate. But if we were to use this as a mental model for a larger process, we'd actually get better results out of it. Which I think is backwards from what most people imagine the insight to be. Instead of, oh, we need to work on this thing and make it more like us, maybe the better thing to do here is to adjust something about ourselves to be more like it.

Yep. I think that's right. Very interesting.
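The skip connections Bryan describes have a simple shape: a layer's input is added straight back into its output, so signal can flow past the layer instead of being forced through it. A minimal sketch in plain Python, with a toy `layer` standing in for the learned transform a real network would use:

```python
def layer(x):
    """A toy 'layer' transform; a real network uses learned weights here."""
    return [0.5 * v for v in x]

def residual_block(x):
    """Skip connection: output = layer(x) + x, so x can pass through intact."""
    transformed = layer(x)
    return [t + v for t, v in zip(transformed, x)]

# Even if layer() learned to output all zeros, the input would still
# flow through the block unchanged -- that's the "information flow" point.
out = residual_block([1.0, 2.0])
```

The organizational analogy: a skip connection is a direct line between two groups that would otherwise only communicate through every intermediate layer of the hierarchy.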
I think it's very natural for us. AI came for our games, and it came for our language. Maybe it's coming for some of our jobs. Sometimes that's related to our identities, our sense of purpose, maybe our sense of meaning — people get these things from their work and the people they're working with. And maybe this whole process of moving the goalposts and trying to run away from AI, trying to protect something — oh, it's our language now, it's our intelligence, and maybe there's something else we can hold out for — maybe that process is playing out, and it's almost over. Instead, just like you said, we shouldn't necessarily be trying to make AI more like us. In this case, we should be more like AI. Maybe that's the answer to unlocking the next phase of whatever humanity is, and the next stage in our search for meaning.

Yeah. There's so much more that we could dive into here, Bryan. Thank you so much for your time today. I want to give you a chance to let folks know anything you'd like for them to go and check out, first of all. But then secondly, I have one final question for you. So let's start with that, though: where can people find you? Where can they learn more about what you're doing?

Yeah. So I'm primarily working on You.com right now. You can find me on social media: follow my LinkedIn for work stuff and a lot of thoughts around things like this. I'm not super active on X. And if you want to check out my paintings and the art side, I'm on Instagram too. My website also has some thoughts and ways to reach out to me if you'd like to talk more — I do a lot of conversations like this — and a lot of fun essays from ten years ago, when I feel like a lot of this stuff was getting started. But You.com is the main project right now.
And you can keep an eye out for some things I'll be doing with TED AI coming up in the next couple of months. We're going to be doing something there on the philosophy of AI. That might be a fun conversation, if folks have liked this one.

Excellent. Very cool. Last question for you: what's one piece of advice you'd give to everyone listening?

I'd say focus on learning and working on the things that you're interested in and that you're passionate about, that you believe are going to have the most impact — because I think those are the areas where you'll naturally be able to learn the most. And as AI automates more and more things, I would love to see your intuition develop around the highest possible impact you could have, and then see what some of your listeners end up doing, as they can create entire companies and it gets easier and easier to change the world. I hope you're thinking about the ideals and the values that you'd like to see in the world. I'd be curious to see what they all build.

Fantastic. Bryan, thank you so much for joining me.

Thank you too. I appreciate it.

Thanks so much for listening to today's episode of Developer Tea, the second part of my interview with Bryan McCann. If you only listened to the second part, I'd encourage you to go back and listen to the first part as well. It's a great discussion, and it lets you follow the story from the first part into the second, along with the style of discussion in this part. So thank you again to Bryan for joining me on today's episode, and to You.com. As a point of clarity, You.com did not pay for any part of this interview; it was just a discussion between Bryan and me. Thank you so much for listening, and until next time, enjoy your tea.