Part One - Bryan McCann, CTO of You.com, on AI, Engineering, Art, and Everything In Between
Published 11/4/2025
Hey everyone and welcome to today's episode of Developer Tea. It's been quite a while since I've had a guest on the show. Today, I'm joined by Bryan McCann, CTO at You.com. We dive into a wide-ranging discussion, exploring the philosophical origins of his career, from studying meaning and language to working in very early AI research. This episode is less advice-heavy and more focused on theory and open discussion. I hope it's insightful for you and helpful as you crystallize your own philosophies on these subjects.
- Explore the philosophical journey that led Bryan McCann from being a philosophy major interested in meaning to pioneering early AI research. Bryan views his current work as an extension of those original philosophical questions.
- Discover how Bryan shifted from hitting a dead end in "armchair philosophy" to using computational tools to study language and try to make machines that could create meaning.
- Understand why Bryan believes that meaning, in the sense he originally sought it, is an innately human thing, tied to purpose and the narratives we use to shape our sense of reality.
- Discuss the profound realization that AI breakthroughs might be akin to discovering electricity, suggesting we are tapping into a fundamental framework of meaning or connection that has always existed.
- Examine the concept of super intelligence and the "flywheel effect," where AI accelerates research and development, building better versions of itself and potentially surpassing the classic anthropomorphic vision of machine intelligence.
- Explore Bryan’s other interests, including organizations, people, and art, which he sees as continuing the uniquely human search for meaning.
- Consider the idea that humanity's constant need to differentiate itself from machines may simply be a mechanism for survival, enabling our continued dominance.
📮 Ask a Question
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
📮 Join the Discord
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!
🧡 Leave a Review
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
Hey everyone, and welcome to today's episode of Developer Tea. It's been quite a while since we had a guest, and today we are joined by Bryan McCann. Bryan is the CTO at You.com. We have an excellent conversation; it's quite a wide-ranging discussion. We dive into some philosophy and some conversation about AI. Bryan has been on the ground floor working in very early research on AI, and I hope you enjoy this discussion. It's a little bit of a different kind of approach, not as advice-heavy in today's episode, a little bit more theory, just a discussion between two people who are thinking a lot about these topics on a regular basis, in our free time and at work. So hopefully this is insightful for you and helpful for you to think about, perhaps to crystallize your own philosophies on these subjects. Thank you so much for listening. Let's get straight into our interview with Bryan.

Bryan, welcome to the show.

Hi, Jonathan. Thank you. Thanks for having me.

Absolutely. It's been quite a while since I've had a guest on the show, quite honestly. We've been doing a lot of solo episodes in the past couple of years, mostly because my career has ramped up in terms of what is demanded of my time. So I'm excited to have you as my first guest back. For somebody who has no idea who you are or what you've done in the past, what are the first couple of things you tell them about the work that you do?

Great question. I'm honored to be one of the first guests back. I've listened to some of the more recent episodes, too. Very interesting stuff, and very interesting to see how you're thinking about some of these things and some of the concepts you're introducing. So I really appreciate it. And maybe that's a good segue: I usually open with telling people that I was a philosophy major.
And I think on the outside, it's obvious that I'm doing CTO stuff at an AI startup. It's a little less obvious, but if you looked online, you could easily find out that I did AI research for several years at one of the industry labs before starting this company, You.com. But before all of that, I was really interested in philosophy, and I actually still see everything that I'm doing as an extension of some of those original questions. So I was really interested in meaning, and I'm still interested in meaning. In some sense, I only got into research to study those questions; it just happened that computational tools and making neural networks were the best way to do it.

Oh, super interesting. Okay. So for somebody who heard that and doesn't intuitively connect the dots between going from philosophy to, you know, the calculus of writing a neural net or something, can you help me understand? There was a moment at some point in that journey as a philosophy major, and even before that, the spark to get you into philosophy, but then another spark, it sounds like, to get you interested in this applied version of philosophy to AI. Can you tell me about that moment?

Yeah, I guess we can go back many years now. I was talking with a friend about this recently: when was the real first moment? The first thing that came to mind was when I was in a religion class, in like fifth or sixth grade, something like this. And there were questions being asked that I couldn't quite make sense of, you know, and didn't have good answers for. It stood out against all of the other subjects, which were more about learning and doing, applying your knowledge. But I could not get answers to some of those questions.
And that carried through high school, where I became very interested in these ideas of meaning, broadly speaking. We're having, I hope, what will be a very meaningful conversation; it may be meaningful to a listener to hear our thoughts. Most of those conversations around meaning get, I would say, reduced or pushed into conversations and questions about language, at least in the academic philosophy world. Because we can hear language, we can kind of see language when we write it, we can study it. It's a little bit more tangible and perceptible than the more general question of what meaning is. And so the philosophy of language gets into, you know, syntax and semantics and all of these other things. I studied that for several years while I was studying computer science on the side for fun, much in the same way that in fifth or sixth grade I was studying a lot of subjects while these big, meaty, seemingly unsolvable questions sat in my mind. Until my third year in undergrad, when I felt like I had exhausted the philosophy of language approach. I felt like I'd hit a dead end, and there was only so much armchair philosophy one could do. And I got this idea to maybe use the computational tools that I was learning on the side for fun to study language as well, and try to make machines that could make meaning. Thinking that if I could do that, I must learn something about what meaning is along the way.

Hmm. And those questions captured your interest, in my understanding, because they were kind of fundamentally unanswerable? Or because at the time you believed that they would be the most difficult questions, or perhaps a longer journey to answering them? Did you believe there was an answer that you could arrive at one day?
At the time, I definitely thought that making this switch and approaching it in these different ways would at least give me insights about it. I don't know about final answers to the world's deepest questions, but I definitely thought that I would have better answers, or could get to better answers, when I started. And that changed in my first couple of years of doing research. I came to think that it was less about finding those answers: no matter how many neural networks I built, no matter how much data I gave them and how much better they got, when you looked inside the machine, you couldn't pinpoint any meaning, or point to something and say, this is where the meaning is. So my expectations around that changed quite a bit.

Yeah, it's interesting, because there are three things that we try to elucidate or help people find on this show, and one of them is purpose. It's quite a loaded term, and it's so individual. Do you feel that this was you tracking toward a purpose, or would you say that it was more tracking toward an interest, or following some kind of intuitive, leading thing? Which resonates better for you?

Well, I think it was both at different times. You know, when I first started doing it, it was mostly out of purpose. And then when I took the research further, it became more interest. And when I looked in the machines and didn't see that I would ever find an answer to the purpose questions, perhaps it became more about interest. But for me to keep doing it, I then kind of rederived another purpose from it, trying to make more unified approaches to AI. And I developed other purposes, I guess, within the field, and new questions came up after the old questions I had answers to, even if they were indirect answers. You know, I think I...
I learned to disentangle what we might perceive as meaningful, perhaps in this case coming from machines, from what is actually meaning. And I at least made the categorization that meaning, in the sense that I was looking for it, is just an innately human thing. It is the thing that is more tied to purpose; it is more tied to the narratives that may exist behind evolution, but also shape our sense of reality and shape who we are. Even if a sequence of symbols is generated by a machine, it's really us perceiving it in some sense, it registering in our minds, and that resonance striking throughout our linguistic community, that gives it some sense of meaning. And that meaning is close to the other forms of meaning that we experience, but it's not the same. It's making connections with some other thing that we've either experienced or that we've thought about.

I talk to my children about this often. We talk about various rules and cultural norms. We live in Tennessee, and there are rules, of course, in every school system. We have a particular set of rules in their school system. One of the rules has something to do with, I think, that they can't color their hair what Tennessee, our area, calls an unnatural color, which I take a little bit of personal frustration with. And so I was talking to my children about this and trying to give some kind of best interpretation. What I've tried to teach them is that different things mean different things to people. Sorry, different things resonate differently with different people. And so for you, this might just mean that your hair is red, or I guess red is what they might consider a natural color. But anyway, you know, there's some kind of signifying thing. This is symbolic, or it triggers some kind of other process for this person.
And, you know, we show respect to the communities that we are a part of by understanding what things mean to other people. It's a deeply human ability to have empathy and to understand that this means something to that person that it doesn't necessarily mean to me. But me discarding it would be an affront to them, in the same way that I wouldn't want to wear something that's offensive or say something that's offensive to my neighbor, unless I'm being offensive on purpose, right? And generally that's not a value that we want to teach our children. So I guess all that to say, there's no inherent meaning. At the scientific level, it's light waves at a particular frequency. How does that change whether my child can learn in school? That's not really the point, right? If we were to boil everything down to its pure scientific description, then all of those symbols you're talking about become the same thing. It's just light wave transmission or whatever. In that case, it's just symbols in a particular form. But we, I guess, imbue that experience with meaning, in the sense of where your research led you. I think that's a reasonable way to phrase it.

Yeah. I think we kind of light it up, you know, and make it actually meaningful, at least in the sense that I constrain my sense of what meaning is now. It's interesting you mentioned connection, and you mentioned maybe this reductionistic view down to symbols or light waves. I think this is something that I was thinking a lot about in parallel with that, because there's one example I've been kind of playing with, a classic philosophical example, right? Do we have free will or not? Is everything deterministic?
And on the one hand, we can go into that question and have lots of debates. There are kind of endless arguments on either side about which one is actually true or not true. But, you know, even if I believe on some level that everything is determined and mechanistic, I think it's impossible for me to actually live that way. I still wake up and do things, and I experience my reality as if I have some agency in the world. Maybe one could transcend that. But I think there's something similar with these questions around symbols versus meaning. You can't completely dissolve away these constructs that we have, these constructs of meaning, which are, I think, fundamentally based on this actual framework of connection. Connection, again, more broadly speaking: maybe you and I are connecting through this conversation. Maybe we're connecting to your audience. Maybe, you know, two things are connected with a phone charger. There are different forms of connection, but connection is definitely a real thing, and it has some sort of relationship to different kinds of meaning as well. You see this in the details of the math of neural networks: they try to measure distance between different symbols, different words. But in some cases, it's not so much the distance that matters; it just matters that things are connected. And that might be, in some ways, a more fruitful path in thinking about, I don't know, what is reality? Is it the light waves, or is it more our experience of it? These are some of the things where I've more or less started to throw up my hands, you know, and think of them as very interesting philosophical questions that do hint at some further scientific inquiry as well.
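Bryan's aside about neural networks measuring distance between words can be made concrete with a small sketch. The vectors below are hand-made toy values chosen purely for illustration; real embedding models learn vectors with hundreds or thousands of dimensions from data, but the idea of comparing words by the geometry of their vectors is the same:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means the same
    direction (very 'related'), values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy, hand-made "embeddings" -- not learned, just illustrative.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

# Words used in similar contexts end up pointing in similar directions.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

This is the "distance" view Bryan mentions; his point is that for some purposes the mere existence of a connection between two symbols matters more than the measured distance itself.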
But I actually see all the recent stuff in AI as evidence that we're just tapping into it. It's like we just built the first machines that can harness the power of electricity. Those electromagnetic fields were always out there; we just didn't have a way to know that they were, and we didn't have a way to use them and harness them. I think there's some sort of framework of meaning or connection out there, resonance in the biological neural networks in humans and animals and all of these things, and that's real. But we are somehow creating tools that can measure it, harness it, tap into it, draw from it. And you've seen what happened with electricity. So I do expect big changes in the world based on that.

Yeah. And fairly rapid ones, it seems. This is not my area of study, so I'm going to say words that probably mean very different things to you than they might mean to me. So it's kind of a naive take, I'll admit, as we go through here. But it is, in some way, and I think this is probably felt by the researchers working with this, one of the first bits of research that helps us do the research itself, using the thing that we just discovered. Right? Similar to how tool building worked in the industrial revolution: you had the standardization of bolts in various sizes and things like that, and because of that, you could turn around and build new assembly lines. You could build new tools in order to continue building things more effectively. So it's a bit like a flywheel effect. Do you think that is having a major impact on how this research is even carried out in the first place?

Absolutely. Yeah.
I think in many ways, that's the next big goal. Tools like Claude Code and others have definitely changed development, and they're already helping us build. Changing the nature of the building blocks themselves, I think that's a big goal for folks right now as well: to get not just tools for us to build better bridges, but for the AI to build better versions of itself more and more automatically, and get that flywheel going where it accelerates on its own at rates that humans can't really handle. That would be the path toward some sort of superintelligence, even beyond artificial general intelligence, which is arguable whether we have achieved or not. Superintelligence being something that is obviously better at so many things than all of humanity ever could be.

Right, right. Yeah, because our classic belief has been that humans are kind of the peak of intelligence. If you were to rewind to the 1960s or something, or even just look at the Jetsons, right? The vision that culturally we held was that machines might one day be like humans, this kind of anthropomorphic vision that we had of artificial intelligence. I imagine that most people listening to this now find that ridiculous, and certainly researchers, I believe, would, because machines have very quickly surpassed humans in specialized intelligence, right? Just hold a calculator in your hand and you can recognize quickly that specialized intelligence is something machines are incredibly good at. And so it stands to reason, I think, that machines may become much better than humans at many things.
Like you're saying, superintelligence seems like the aspirational goal to strive toward. This is kind of setting aside questions of moral clarity, I suppose, or ethical clarity. But assuming that progress were directly aligned with what we care about as a species, then the next step, or the goal, would be this progressive move toward superintelligence, setting aside that anthropomorphic vision entirely. Would you agree with that?

Yeah, I like the idea of setting the anthropomorphic vision aside. A lot of the themes of my research, as I was making large language models and things, were about trying to frame it as a new creative tool for people. Not replacing, not anthropomorphizing; it's a tool that allows you to maybe go from zero to one much faster, and then you can always go someplace new after that. It doesn't need to be an entirely complete substitute or replacement. But I do think the anthropomorphization plays a role too, right? It generates a lot of excitement. Just practically speaking, it's this odd fascination we have with, I don't know if you want to call it playing God or what. We want to make something in our image. It does drive a lot of interest, drives a lot of capital in that direction. It is endlessly fascinating, but we also adapt very quickly, I think, to new forms of it, until we see the gaps and something doesn't quite feel right. You're instantly impressed, but then the goalposts move. Whereas with the other things, the maybe more specialist types of intelligence, it's just obvious that these things can ingest and understand, in some definition of the word understanding, maybe a fundamentally different type of understanding than ours.
Sequences of amino acids, you know, proteins, biology, chemistry, physics. They can probably make connections, real connections, that are meaningful. [transcript gap] Garnering that investment sometimes requires, you know, playing to the anthropomorphic narrative. I don't personally need robots to be better in my life than humans at what I consider the human things. I also distinguish that from the fact that I think they could be. Like, I've got a best friend, and I love him. I don't think it's impossible for machines to be better as a best friend than me one day, you know? If they're embodied robots, they can probably be more understanding, more compassionate, or they can at least seem that way, right? So I don't think we've found the thing that makes us special yet. And in that search, the only way to find out if there is anything is to kind of keep going and see what isn't possible to replicate and replace. But for now, I don't think we've found it. We've had a lot of suggestions, intelligence being one of them, but I don't know; I'm not sure. We're pushing on the edge of that.

Right, yeah. So I think I want to get around to questions about skeptics here in just a few minutes; I see two different kinds of skeptics. But before I ask those questions, I want to take a step back. It sounds like this is something you think about and talk about often, but in these kinds of discussions, interviews, and so on, what do you wish interviewers would ask you more about? This is usually something I hold
till toward the end of the interview, but, you know, what's the topic or area that you wish you could talk more about?

Yeah, great question. I suppose, outside the normal topics... I mean, these are my jam, you know; I like thinking about these things all day. I think a lot about how we seemingly have unresolved questions in physics and in how we understand the universe. I love thinking and talking about organizations and people. Part of why I left research was because I wanted to build companies and teams. If I couldn't find the meaning itself in the machines after making them, and that meaning was kind of reserved for humans, then, well, I like working with humans and making things meaningful. So organizations and how they're changing, I'm thinking a lot about that. And art as well, you know, and the role that art still plays for us. I think that's a really important thing to think about, even as AI may or may not be invading part of that world. It's something really important to me. And they all play into what I kind of call a search for meaning. Maybe that's what we're really good at, you know? We're very good at continuing that search. In some sense, it looks like a lot of the work we're doing is moving the goalposts, but in another sense it just looks like reconceptualizing ourselves, finding a new narrative, finding a new story to keep going and to motivate ourselves.

Yeah. I heard this described as spiral learning. Are you familiar with this term? Where you come back around to something and you refine it each time you touch it. I think that's kind of the same thing you're saying with moving the goalposts, but I think as we
uncover, you know... learning, or defining meaning, is an emergent thing. One hypothesis would be that meaning is not just a fixed location that we are searching for, as if it were on a two-dimensional map and once you get to these coordinates, you've found it. I think it continues to emerge, through a variety of means. And it's also not necessarily deterministic for all people, right? You could imagine that it's going to be different for each person, and also different for that one person over time. So the search becomes, I guess, an unearthing rather than a searching; you're finding things that are meaningful. But I do like the idea you mentioned earlier, that we haven't quite found something that is truly, fundamentally what makes us a different thing. Intelligence: we thought that was it. Certain types of relational ability, being able to have empathy for others, is another thing that seems like it could be it, but it's still easily challenged. Why do you think we even need that? Do you think it's a survival thing, that we just inherently feel like we need to be differentiated and have a moat from other things?

The only thing that makes sense to me is that we have some intuition, or we believe it's intuition; maybe we're just describing a reason that we feel different. But that feeling of difference might be just what you're saying: if we didn't feel different, perhaps we wouldn't be as motivated to continue surviving. If we just became part of the larger ecosystem or something, then maybe the fact that we are the most dominant species was dependent on us feeling special, right? And so that's how we got to where we are, and now
it's become a bigger search. But it's not just a matter of how we're going to survive; in fact, it was a mechanism of survival.

Yeah, no, I think that's probably my sense as well.

Once again, I'd like to thank Bryan McCann for joining me on the show. This interview, like all of the interviews we've done in the past, was not a sponsored interview, but I did want to mention, in case anyone is looking for a role, that You.com is indeed hiring right now. They didn't ask me to say that, but it's worth going and checking out their openings. Thanks so much for listening to today's episode. I hope you enjoyed this discussion with Bryan McCann. This is part one of two parts. We normally split our interviews like this in half because we want to make sure that we continue to deliver episodes that are certainly under an hour; we try to target much shorter, but these interviews tend to run a little bit longer. So you will hear the second part of our interview with Bryan McCann in the next episode of Developer Tea. Thanks so much for listening, and until next time, enjoy your tea.