Metamodeling is creating a model of models. Confused yet? When we use models, we often assume they are complete. But what characteristics could make all of our models better? That's the concept of metamodeling. We talk about creating metamodels and using steering feedback to derive value from one model to another.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!
If you're enjoying the show and want to support the content head over to iTunes and leave a review! It helps other developers discover the show and keep us focused on what matters to you.
Transcript (Generated by OpenAI Whisper)
It's the middle of November, which means most of us are probably already thinking about the beginning of the end of the year. And in fact, if you're like me, you're already thinking about next year, well into next year. When we think about a new year, as we've said so many times on this show, we often think about change. And this is for good reason: even though there is nothing technically different about turning over a new leaf on January 1st, we have learned through quite a bit of research that these kinds of chapter changes at a societal level make a big difference. They make a big difference in, for example, our willingness to keep a commitment. And maybe you're listening to this episode and it's not close to the beginning of the year, but it is probably close to the beginning of something: the beginning of a new week, or maybe it's close to your birthday, or a new quarter in the year. Whatever it is that you are preparing for the beginning of, it makes sense to take a moment and take stock of the kinds of change that you want to make in your own life.

Now, this doesn't necessarily mean that you have a specific goal, a specific thing that you want to stop doing or start doing. Those are certainly things that you can put in this category of changes you'd like to make. But if you're like most software engineers, maybe you have an area that you want to improve in and you're not really sure how. That's what we're going to be focusing on in today's episode. We're about two minutes in, but my name is Jonathan Cutrell, and you're listening to Developer Tea. My goal on this show is to help driven developers like you find clarity, perspective, and purpose in your careers. I want to focus on two aspects of improving without really knowing exactly how you will improve. You might have some general goals, but overall, you're not exactly sure how you're going to get there.
I'm going to give you two skills, or practices, that you can implement to improve in almost any area. And we're going to dive straight into the first one. It's called metamodeling. When you think about the work that you do as a software engineer, you probably have different categories for that work. If you were to look at your calendar and block off the kinds of activities that you're doing, you might have internal meetings, external meetings, code review, code production, and then, even below these as subcategories, you might have new code versus refactoring, for example, or pair programming versus asynchronous code review. All of these categories are labels that you're putting on a classification of activity. In other words, those labels point not just to one specific set of steps, but to a general kind of process, a way of doing a particular kind of consistent thing.

So during code review, your rough process might be to start by reading the code and then going back and making comments in areas that you feel are higher risk, or where there's some kind of refactoring that you want to do. Whatever that set of steps is, roughly speaking, you're going to go through that same set of steps each time you perform a review. Most people don't have this formalized. This is a feature of the human mind: you're able to create these categories and have some rough heuristics based on them. As an example, I can say "drive to the store." In your mind, you don't necessarily hear all the specific steps, but because we have these abstract pointers, these label heuristics, if you were to go and execute that process, you could most likely get in a car and drive to a store. There are a lot of micro-steps along the way, each at differing levels of granularity.
For example: getting in the car, sitting in the correct seat, putting on your seatbelt, and then the actual driving process, following the traffic laws, knowing where you are, both in terms of where you're going on the road and the location that you're in, so that you follow the proper traffic procedures for your location. There's a lot of information that gets abstracted away under the single label of "driving." These labels, these categories that we create, are models. They are models of some series of steps, a process, or even a way of thinking. And what often happens with these models is that once we've executed them successfully once or twice, we very rarely revisit the model itself. The result is that a lot of the things we could be improving at a high-level vantage point, that is, at the model level, we miss out on. The model doesn't improve, and therefore our consistent behaviors don't improve.

And this is where metamodeling comes in. When you think about metamodeling, what you're thinking about is: how cohesive are your models? What are the characteristics of those models? For example, you might ask the question, does this model consider what happens in the future? This is an abstract question that you can ask about almost any process-oriented model that you have. That is what metamodeling does. It gives you a guideline for how to judge your models, not any specific model, but your models more generally. Now, what's so powerful about this is that you can start to change what you focus on by changing the models at this level: having a metamodel that you judge all of your other models against, or that you design your other process models from. If you look at each of these categorical models now and apply the question, does this model take into account the future? Does it consider how things will change in the next three to six months, or three to six years? How are we accounting for that in this model?
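To make that concrete, here's a minimal sketch in Python of what applying one metamodeling question across several process models might look like. Every name here (the `ProcessModel` structure, the `considers_future` flag, the example models) is hypothetical, invented purely to illustrate the idea of one check applied uniformly across all of your models.

```python
from dataclasses import dataclass

# Hypothetical representation of a process model: a named, recurring
# activity plus a flag for whether its design accounts for the future.
@dataclass
class ProcessModel:
    name: str
    steps: list[str]
    considers_future: bool = False

def audit(models: list[ProcessModel]) -> list[str]:
    """Apply one metamodeling question, 'does this model consider the
    future?', to every model, and return the names of those that fail."""
    return [m.name for m in models if not m.considers_future]

models = [
    ProcessModel("code review", ["read the code", "comment on risky areas"],
                 considers_future=True),
    ProcessModel("internal meeting", ["walk the agenda", "adjourn"]),
]

print(audit(models))  # the models that don't yet account for future change
```

The point of the sketch is the shape of the check: the metamodel lives at one level above the individual models, so one question audits all of them at once.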
That kind of metamodeling question can lead you to improve each of your actual implemented models. So how can we improve our metamodels through some kind of steering system? We've all heard the term feedback loop. This is not a new concept to you as an engineer, certainly. And feedback is critical to our constant improvement, particularly feedback we generate ourselves, not just external feedback given to us by other people. That external kind is not the feedback I want to talk about in today's episode. Instead, I want to focus on setting up feedback loops and steering the metamodeling that we were talking about in the first half of the episode.

So here's the basic question I want you to ask: do you have a way of evaluating how effective your processes are? Or, a better way of putting this: do you have a way of measuring the effectiveness of the various activities that you take part in? Now, I want to make sure this is incredibly clear, because this is where a lot of people get confused. A very good software process could be in place and producing a very bad product. For today's episode, we're not talking about the quality of the product itself. You can have excellent models in place producing a very poor product, or, interestingly enough, you could have poor practices and a product that is good enough to overcome the shortcomings of those practices, of those models you've implemented.

So an easy exercise is to look at the categorical activities that you take part in; you can do this for your professional life and for your personal life. A simple example might be, once again, code review, or let's say internal meetings. How do you judge the quality of your internal meetings? We're not talking about the outcomes that affect the goals of the meeting. We're talking about the process itself. Do you have a feedback mechanism that helps you understand how effective the meeting itself is?
If you don't have this, it's very likely that you are judging the effectiveness of the meeting inconsistently. So if you were to look at all of these different categories and ask, how can I determine whether these categories, these models of behavior, are designed well or not, what are some of the measurements I could take? For example, one measurement you might take coming out of an internal meeting is: does everyone have clarity on what is going to happen next? This seems like a very simple question, but let's focus in on this one specific steering mechanism. Let's say you gave out a quick survey at the end of every meeting, and you found out that people have, on average, a two-out-of-seven clarity leaving that meeting. Now you have the opportunity to use data to design a metamodel.

For example, you might look at your metamodel definition wherever you keep it; maybe that is just a list of principles. And one of those principles might be: whatever the process is, it doesn't create friction for the subsequent steps that result from it. This is a good metamodel abstraction of the principle "does everyone have clarity about what's next?" The meta principle is that we don't want to impede forward progress as a result of a particular step in our process. And so if we apply that to this process, then we might add to our internal meeting model a five- or ten-minute section at the end of the meeting to clarify any remaining questions. Similarly, you can take the same metamodel concept, and, since we've steered this metamodel from our internal meetings, we're going to get this interesting effect of improving other processes by virtue of them having similar, parallel concepts. So we could apply this to our code review process.
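As a sketch of that steering measurement, the feedback loop can be as simple as averaging the survey scores and flagging the meeting model for redesign when they fall below a threshold. The scores and the cutoff here are made up for illustration; the value is in repeating the same measurement consistently, not in the arithmetic.

```python
# Hypothetical end-of-meeting survey: each attendee rates their clarity
# on next steps from 1 (lost) to 7 (completely clear).
clarity_scores = [2, 3, 1, 2, 2]

REDESIGN_THRESHOLD = 5.0  # an assumed cutoff; tune it to your team

average_clarity = sum(clarity_scores) / len(clarity_scores)
needs_redesign = average_clarity < REDESIGN_THRESHOLD

print(f"average clarity: {average_clarity:.1f}/7, "
      f"redesign needed: {needs_redesign}")
# -> average clarity: 2.0/7, redesign needed: True
```

A consistent, repeated measurement like this replaces the inconsistent gut check, and gives you data to steer the metamodel with.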
And if we didn't have this before, we might leave comments in our PRs that are unclear; maybe it's not certain whether the comment needs to be addressed now or in a later PR. By applying this idea that none of our models should impede progress on the next steps, we can focus in on those comments and say, these are impeding progress. And so we need to create some kind of clarity: change our PR model, the way we do reviews, so that if you ever leave a review comment that is non-blocking, you mark it as non-blocking, and if you leave one that is blocking, you mark it as blocking. The specifics, of course, are going to depend on your situation. They're going to depend on what you learned through that steering process. Hopefully you can see how steering from one process, even one that may be completely distant from another, using that metamodeling up at the top, can improve parallel areas, and you get this kind of cascading improvement effect.

Thanks so much for listening to today's episode of Developer Tea. I hope you like this idea of metamodeling. It's a little bit cerebral, in a way, because there's so much to keep in mind across the different layers of designing these processes. But hopefully this makes sense, and hopefully it encourages some of you to create some of these steering feedback mechanisms and start thinking about metamodeling as you move into the new year. This show only exists because you listen to it. If you want to get a little bit closer to the Developer Tea community, you can ask me questions, have discussions about these episodes, or ask questions that are totally unrelated to things we've talked about before. You can even talk about your career and issues that you're having, and get feedback and advice from other members. Go check it out: head over to developertea.com/discord. Of course, that's 100% free and it always will be.
The only thing we ever really ask you to do for this show is to leave a review in whatever listening system you use. So thank you so much for listening, and thank you for the reviews and the ratings. Those really help us out. Till next time, enjoy your tea.