From Silos to Studios: A New Playbook for Building Faster, Smarter Teams

Your org chart is obsolete.

It's time to move from silos to studios. The old model of large, hyper-specialized teams is now a liability. If you're looking to leverage AI to innovate faster than the competition, this episode offers a playbook. Product leader Nate Gosselin introduces a new model inspired not by corporate frameworks, but by hip-hop producers. He breaks down the concept of the "Product Producer" and the shift from rigid, siloed teams to small, nimble "studios" powered by AI.

This conversation provides a tactical guide for leaders to rethink their org design, tackle technical debt, and build the multidisciplinary teams needed for the future.

In this episode, you’ll learn:

(01:31) What is a "Product Producer"?

(05:12) The Skills to Hire For: AI Literacy and Multidisciplinary Talent

(11:09) Managing AI Like a Team of Highly Capable Interns

(13:30) Where to Start: Low-Risk, High-Impact AI Projects

(16:33) How to Use AI to Tackle Your Scariest Tech Debt

(20:49) The New Org Model: Shifting from Assembly Lines to Creative Studios

(31:53) Your First Step: A Practical Guide to Implementing AI in Your Workflows

(39:08) The Personal Transformation for Leaders: From Specialist to Builder

View the episode: YouTube, Spotify, Apple

Full Transcript

Nate Gosselin (00:00)

There are these levels of specialization. What I'm noticing is that they are becoming less and less valuable because the kind of... the breadth that people are able to achieve with AI is really starting to change.

Keith Cowing (00:20)

Welcome to Executives Unplugged, where we dive into the stories, strategies, and playbooks of top leaders. The era of hyper-specialized, siloed tech teams is over. The future belongs to nimble teams that move incredibly fast. My guest today, Nate Gosselin, is a product leader who is building AI-powered software and teams. He has scaled organizations in various environments, including healthcare and media, and he has been at the forefront of this recent shift with AI. In this episode, Nate explains his vision for the new model of building great products and what leaders can do right now to build the teams of the future.

Keith Cowing (00:57)

Nate, welcome to the show.

Nate Gosselin (00:59)

Thanks for having me. Great to be here.

Keith Cowing (01:01)

Awesome to have you here. We've had a chance to work together in the past and I've seen some of your work recently. You've run product teams. You've worked with AI-inspired products and organizations. Right now you're helping a lot of people on transformations that they're going through. And one of the sparks for this episode was an article that you wrote that resonated a lot with me around "product producers." And as everything changes, we have to rethink what our teams look like, what our titles look like, what our roles look like. And so in your mind, what is a product producer and why is that interesting?

Nate Gosselin (01:31)

I'm a big hip-hop fan and I've always been drawn a little bit more to the kind of, like, the beats and the producers more than the MCs and the lyrics. Like, for whatever reason, that's the way my brain works. And I saw an interview with RZA from Wu-Tang, who is responsible for all of their beats, all of their background music, and I remember him saying that he, like, just decided to start studying music theory in, like, '96. I was like, "Wait a second, there's something off about this." And then I realized that he had already released, like, three platinum albums by the time he had decided to start learning music theory. And I was like, "This is such a crazy thing." Like, what is under the hood here? Like, why is this producer kind of speaking like this? And you know, the more I was digging around it, it's like, so much of this was about the idea of kind of shifting what it meant to be a musician. You know, in the '80s, samplers came out and they just kind of changed the game. Like, before samplers came out, you needed to actually be a technician to make music. You needed to know how to play piano. You needed to know how to play drums. Or you needed to be able to pay people to do those things for you. And with the sampler, all you needed was this $100 device and a bunch of records. And now you could sort of copy-paste and collage this music. And that was really the beginning of hip-hop; now all of these people who didn't necessarily have access to musical training or instruments or any of those very expensive elements now were able to, like, express a craft in this very, like, kind of collage-intuitive way. And it became way more about sort of curation and taste and vision than it became about sort of technique or, like, hard skills in a way. And I think we're at a really interesting point here where we're seeing a similar transition for product developers, where up until now, product has been, or just tech generally has been, such a technical craft. If you're an engineer, you have to really deeply understand how computers work, the syntax of different coding languages. If you're a designer, there's all of these tools and sort of practices that you need to develop. With product managers, it's all about sort of development frameworks and prioritization frameworks. It becomes this very, like, technical thing. But in this world now with AI and AI-enabled tools, we're kind of cutting out the need to have that level of technical ability, where you can sort of prompt something and say, "Hey, I want an app that does XYZ. I want it to feel like this. I want it to do this." And in my mind, it really echoes a lot of what I saw with music and hip-hop where, you know, they went from this, like, very technical craft to all of a sudden they could do collage, and all of a sudden we have this entire new art form called hip-hop. ... Where in software, I think we've been building these very... almost like workman-like, I think, apps for the past couple of decades. And as now we're kind of short-circuiting or enabling people to be more creative in code, I'm curious to see what comes next. I think we're gonna see a lot more personalization, a lot more creativity, a lot more, I don't know, joy... in the work that we're seeing. So that in my mind is really this idea of the product producer. It's a person who I think has, like, taste and a vision for what an app or a piece of software should feel like... should be to people, and is able to just execute that with AI.

Keith Cowing (05:12)

And so if you take that and you think about a leader in an organization, a startup, a bigger company, hiring people, looking at talent, how does this impact what you would be advising people to look for in terms of skills for people that are on the team?

Nate Gosselin (05:28)

There's this very specific thing right now that... I feel like is kind of the foundation for all of this. And it's just this idea of, like, AI literacy. Like, you can see it when people are trying AI for the first time; the first couple of uses of it, they're like, "Well, this kind of feels like Google." And then everyone has that moment where they're like, "...I get it now." And that moment is when you figure out how to ask AI for something really valuable. Like, all of a sudden, AI is unlocking whole new domains for you. You can sort of say, like, "Hey, explain to me the basics of XYZ," and now you're off to the races. You're kind of learning by doing instead of having to do all of that, like, foundational research first. So I think there's this, this idea of, like I said, AI literacy, where it's people who understand the basic sort of components of AI, like how do you work with an LLM? How do you ask it questions? How do you get it to do what you want it to do? And also, I think, how it fits into workflows. You know, there's so much talk right now about... like, AI stealing jobs or AI automating away jobs. And I think AI is automating away parts of jobs, but it's not the whole pipeline. You know, I think a lot of what people who are doing well with AI are recognizing is if they can be really good at mapping out a system, they can, you know... iteratively sort of cut out little pieces of that and make it more efficient.

I think the thing on maybe a longer-term time horizon, especially as it relates to software and product teams, is... I think people who are starting to sit in the Venn diagram between two of the three disciplines, you know, they're like the product manager/designer or the engineer/product manager. 'Cause I think what we're starting to see is, like, we used to think about a product development team as a PM, a designer, a tech lead, and, you know, three to five engineers. And what I actually think we're moving towards is maybe two to three people per team, where everyone is doing a little bit of everything. You know, you have maybe someone who's nominally the PM, but they're also capable of, you know, using AI tools to, like, create a design for a basic feature. Or you have an engineer who knows how to... like, review a product brief or generate a new sort of outcome for a given user. And I think what's cool about that is... it's valuable to have someone who leans a certain way to be able to set those guidelines. A designer is able to create a design system. But then with AI tools, a really talented PM and engineer can leverage that system and those rules without necessarily needing a designer to design every tiny little action. "I need a button to look like this." If you have the system, you can execute it. So when I think about zooming back, what are the things that I'm looking for in skills when it comes to hiring in these places? I think one is this AI literacy or this kind of intuitive understanding about how to systematically apply AI to your workflows. And I think the other is... kind of using those AI tools to start to bleed into the different disciplines to be a little bit more multidisciplinary than just, like, "I am a PM, I am a designer." And frankly, like, I think you're seeing PMs start to do that first because PMs in some ways have always been the ultimate generalists. Like, your job is basically to fill gaps on a team. ... But I think, you know, with time we're going to start seeing that happen in all the different disciplines.

I operate a small product studio where I build sort of niche software products. ... And a lot of what I've been doing is figuring out, like, how do I set up my rules and my systems, and how do I have a really opinionated way about how software is built, so that then I have more confidence about turning AI loose in that space, knowing that it's going to follow the rules that I already set. ... And I think that's really sort of the mindset shift that you need to have, especially when you're unlocking new spaces, or you're building new parts of the code base or you're refactoring parts of the code base: you need people who can go in and build, but also be aware of what's the scaffolding that they're putting in place behind them. ... And I think it becomes really exciting in the sense that, and this comes just from building things myself, there's a bit of a dopamine hit as you start to, like... Like, "Well, the first time I'm going to implement a function or implement a new piece of code, I'm going to think about the rule that I want to write for it first." And then I'll use that rule to write the function. But then the next function, I'll use that rule again. And, like, over the course of maybe writing five or six functions, all of a sudden you're seeing the AI is just getting better and better at this task. And I think that's sort of the excitement now of building these systems, is like, you're kind of building the scaffolding and the product at the same time. And that to me is sort of a new, just a new way of thinking.

Standard disclaimer here: I am not an engineer. I'm very obviously a product manager, but I do think this is a place where we're starting to see you can put more trust in sort of rules and code standards that are actually enforced by your LLMs instead of requiring or relying on humans to have to stick to those standards.
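
To make that rule-first loop concrete, here's a minimal sketch in TypeScript of the kind of convention a builder might encode once and then lean on for every new function. The convention, names, and function here are hypothetical illustrations, not Nate's actual rules:

```typescript
// Hypothetical project convention of the kind you might encode once in a
// rule file and then reuse for every new function:
//   1. Exported functions return a Result<T> instead of throwing.
//   2. Exported functions carry a doc comment stating purpose, inputs,
//      and failure modes, so the file explains itself to an LLM later.

type Result<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

/**
 * Parses a price string like "$1,299.99" into integer cents.
 * Inputs: a currency string. Fails (does not throw) on empty
 * or non-numeric input.
 */
export function parsePriceToCents(raw: string): Result<number> {
  const cleaned = raw.replace(/[$,\s]/g, "");
  if (cleaned === "" || Number.isNaN(Number(cleaned))) {
    return { ok: false, error: `unparseable price: "${raw}"` };
  }
  return { ok: true, value: Math.round(Number(cleaned) * 100) };
}

// The compounding payoff Nate describes: the next function follows the
// same rule, so the assistant's output gets more consistent each time.
const price = parsePriceToCents("$1,299.99");
console.log(price.ok ? price.value : price.error); // 129999
```

The point is the compounding: once the convention exists, each new function is cheaper both to generate and to review.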

Keith Cowing (11:09)

And it reminds me a lot of when people would change from being an individual contributor to managing a team, where you could have it all in your head and, to a certain extent, as long as it came out great, that was okay. But then when you manage a team, you have to really focus on, "Now I need to put some structure in place. I have to be very clear about expectations. I have to set the goals. I have to be clear about where I'm going to make decisions, where they're going to make decisions. I got to give them feedback." It's a totally different job. And now moving from creating on your own to creating with...

Nate Gosselin (11:24)

Yeah. Yeah. Yeah.

Keith Cowing (11:38)

...AI, you have to tell it what to do. You have to give it clear objectives. You have to give it feedback. You have to create structure around it, or it'll run all over the place, and that doesn't work. And so there's a lot of similarities in terms of communication and precision to just managing a human team.

Nate Gosselin (11:48)

Totally. Someone described LLMs as, like, a highly capable intern. You know, they are so willing to do everything in front of them. They will research anything, but they will almost undoubtedly do too much or too little in one spot. You know, like, you need to really micromanage them... to get them through it. And I think that's a really good way of thinking about it. It's like you're... you're basically teaching people how to build an army of interns around them so that they can be more effective at what they're doing.

Keith Cowing (12:29)

And they need really clear guardrails. They need a lot of support. They need help. They maybe don't need lunch-and-learns and kombucha... at least not for now. ... And so one thing within that is when you talk about the producer, you talk about a lot of creative, innovative things coming out, and maybe that creativity being more front and center in what's going to drive the difference between people and teams.

Nate Gosselin (12:32)

Absolutely. No. No, no cold brew on tap. Yeah. Yeah. Yeah.

Keith Cowing (12:56)

There's a certain chaos that comes with that. You think about hip-hop and its creative explosion. There's a lot of chaos that comes with that as well. And so when you're working with teams today on the ground, you've got stakeholders that want predictability and demand it. And then you've got, "Well, you need to innovate," and there's a little bit of unpredictability. You can't completely control it. That's part of the magic. You have to be just controlled enough, but also let it loose or you're never going to break through. How do you advise teams...

Nate Gosselin (12:58)

Yeah. Absolutely, yeah. Yes.

Keith Cowing (13:25)

...to balance that predictability versus the innovation that is messy?

Nate Gosselin (13:30)

Being smart about... the places that you start. And I can be specific about this. I think there are certain aspects of product development that... lend themselves really well to this type of experimentation right now because I think they're relatively low-risk and you can sort of put... really clear guardrails around them. One example, I think, is things around testing and refactors, like the unsexy, under-the-hood things that can really slow down teams. It's like... in the past, you would need to put a... full development team on a refactor for a month to get whatever monolithic piece of garbage that you're trying to get rid of to this new state. And now, AI is so good at finding all of the different import statements or whatever it is that makes those things so tedious so that you can take one person and say, "Go refactor this for a few days or for a week." And all of a sudden, you're able to more quickly, I think, modernize your code and run more quickly at those things. I think another space that is really exciting, and that I'm starting to experiment with, with one of my clients now, is this idea of automated testing where, you know... in the past, they may have had a QA engineer. Like, there are now several AI-enabled QA tools that will do all that end-to-end testing for you. And it'll throw an alert if something is broken and it allows you to really sort of see what's there. And in some cases, it'll, like, diagnose the issue for you. So you'd be like, "Oh yeah, that was the fix. I'll just do that really quickly." So it's like those types of things that I think... like refactors and testing, those are the concepts that I think were more of a drag on speed and development among teams in the past. And those are the things that we're starting to automate away to allow teams to stay in more of that creative, like, outcome-oriented headspace. ... I think the other place that lends itself really well to this is the front-end. Like... I always felt really bad when I would ask an engineer to change something on the front-end. 'Cause it always feels like such an annoying thing, where you're like, "Can you move that button three pixels to the left?" You know, there's always like a really annoying little thing like that.
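
As a concrete illustration of the automated-testing idea, here's the kind of end-to-end check an AI-enabled QA tool might generate and run on every deploy. This sketch uses Playwright as a stand-in (the episode doesn't name specific tools), and the app URL and selectors are made up:

```typescript
import { test, expect } from "@playwright/test";

// A sketch of the end-to-end coverage Nate describes delegating to AI:
// drive the real UI, assert the happy path, and let CI throw the alert
// when something breaks. URL and selectors are hypothetical.
test("user can sign in and reach the dashboard", async ({ page }) => {
  await page.goto("https://app.example.com/login");

  await page.getByLabel("Email").fill("qa@example.com");
  await page.getByLabel("Password").fill("correct-horse-battery");
  await page.getByRole("button", { name: "Sign in" }).click();

  // If this fails, the report points at the broken step, which is the
  // "diagnose the issue for you" behavior mentioned above.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```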

And I think now, because we have such a kind of mature concept of front-end and back-end, a lot of the resiliency and risk is more on sort of back-end APIs and services. You can sort of turn a PM and a designer with Cursor loose on the front-end to implement new features or new displays. And I think that's where you can also see a lot of creativity where, you know, the back-end information that's being provided isn't changing, but now you have a lot of room to play with that, adapt it, and experiment with new ways of showing that content.
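
Here's a small sketch of that front-end freedom: one hypothetical, unchanged back-end payload rendered two different ways, the kind of display experiment a PM or designer could prompt Cursor through without touching any APIs:

```tsx
import React from "react";

// The back-end contract stays fixed; only the presentation changes.
// Payload shape and component names are hypothetical.
interface UsageStats {
  label: string;
  current: number;
  limit: number;
}

// Display #1: the plain table row a team might ship first.
export function UsageRow({ stat }: { stat: UsageStats }) {
  return (
    <tr>
      <td>{stat.label}</td>
      <td>{stat.current} / {stat.limit}</td>
    </tr>
  );
}

// Display #2: the same data re-imagined as a progress meter, the kind of
// low-risk front-end experiment this passage describes.
export function UsageMeter({ stat }: { stat: UsageStats }) {
  const pct = Math.min(100, Math.round((stat.current / stat.limit) * 100));
  return (
    <div role="meter" aria-valuenow={pct} aria-label={stat.label}>
      <span>{stat.label}: {pct}%</span>
      <div style={{ width: `${pct}%`, height: 8, background: "#4f46e5" }} />
    </div>
  );
}
```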

Keith Cowing (16:33)

And it's interesting you point out tech debt. As mentioned, I hear a lot of people talking about going from zero to one with prototyping, which is much more on the front-end of how do you prove something? How do you iterate on it? Tech debt isn't one I've typically seen or heard about recently. Really interesting that you could use that as a way to say, "Hey, let's prove some value here and really show some business impact." What would you advise a team that wants to tackle tech debt? It's scary. You open it up, you're halfway through a project and it's harrowing. You've got to make it through all of these different things; it's always, you know, like digging up a building: you find all this stuff under the ground you weren't aware of. And then it changes your plan, and it takes twice as long and twice as much money as you think it will. Those are never fun projects. How would you recommend somebody tackle tech debt with AI and a single person or two people? You know, make sure that it proves worthwhile.

Nate Gosselin (17:00)

Totally. Absolutely. Yeah. If you think about it in sort of a... and maybe this is too far under the hood, but like, if you think about it just in terms of libraries within a given repo, like, start with a small library and use that as your testing ground. Like, that's sort of your bounds. Be really clear on what you want that interface to be. And then just, like, let them run at it. I think in terms of the person doing it, like... I think what's really powerful about AI is you can say... you can tell it, like, "Hey, this is what I want to do." And then ask it, like, "Analyze my code base. Tell me all the places that I'm using this old... you know, maybe it's an old version of something, or maybe it's an old kind of pattern that you want to change." It's like, "Now create that plan to fix it. Now execute that plan." I mean, what's cool is, like, at each part of that process, there's still a human-in-the-loop gate. You know, when you sort of say, "All right, show me everything in the code base that does this," you can look at it, be like, "This doesn't feel right," or "This does feel right." Like, "Have you checked here? Have you checked there?" You can do a little bit of that intern-level checking their work. But I think what it takes out of the human's brain and puts on the AI is more of the, "Okay, I need to remember all the different import statements that exist across this code base. Or I need to remember all the different places that I changed this function call." Like, it's those things that I really think make the refactors painful. ... So I think it enables people to spend maybe a little bit more time up front on the interface design. Like, what do you want this library, what do you want this piece of code to offer... in the future? And now you can sort of give that pattern to the LLM and give it the code base and say, "Help me come up with a plan for doing this." So I think it enables us to focus a little bit more on the outcome, you know, which is catnip to a product manager. Like, what do you want this piece of software to do... and I want to do it in this way with this pattern, and I think that will kind of unlock those things.
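
A minimal sketch of the mechanical sweep Nate is describing: the "find every place that still uses the old thing" step that AI (or a small script it writes for you) takes off your plate before the plan gets reviewed. The deprecated module name is hypothetical:

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// The tedious half of a refactor: enumerate every file still importing
// the old module, so a migration plan can be reviewed before anything
// changes. "old-http-client" is a hypothetical deprecated dependency.
const DEPRECATED = /from\s+["']old-http-client["']/;

function findUsages(dir: string, hits: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      // Skip dependency and hidden directories.
      if (entry !== "node_modules" && !entry.startsWith(".")) {
        findUsages(path, hits);
      }
    } else if (
      /\.(ts|tsx)$/.test(entry) &&
      DEPRECATED.test(readFileSync(path, "utf8"))
    ) {
      hits.push(path);
    }
  }
  return hits;
}

// Human-in-the-loop gate: print the list, eyeball it, then let the
// assistant execute the migration file by file.
console.log(findUsages("./src").join("\n"));
```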

Keith Cowing (19:33)

And I love how you broke it down. You've got the tech debt, you've got the front-end stuff, which I think is a little bit more obvious. And there's a lot of tools around that. And then you have your work around testing, which also nobody wants to do, but it's really hard to do well. And so, you know, by definition, people start as a Software Design Engineer in Test. It's like a junior-level role. And then you graduate out of it as quickly as you can, even though to do it exceptionally well is actually quite hard. ... Just nobody wants to continue doing it. And so, using AI to help with stuff like that is, I think, phenomenal, especially for non-deterministic outputs where testing becomes an entirely new field where we have evals and all these things. This is going to be a ripe opportunity. And so you mentioned a few areas where you can apply AI, where you can have a single-person team, a two-person team. Let's talk about culture change a little bit for organizations that are going through this. You have to transform. You have to get the organization used to what it looks like to have a two-person team.

Nate Gosselin (20:12)

Yeah, absolutely. Absolutely, yeah.

Keith Cowing (20:29)

What are you seeing as some of the biggest headwinds that companies and orgs have to work through, whether it's their org chart or how they do performance reviews or how they communicate, how they reward, how they give people the right projects? What are you finding as the tactical things on the ground that they have to get right to unleash this?

Nate Gosselin (20:49)

For so long, we've thought about software engineering teams as almost like an assembly line, you know, where it's... it's a very... We call them software engineers. Like, the whole thing is about engineering a very resilient system. I almost wonder if we're transitioning, not fully, but partially, into this idea of a studio or an architecture studio, where it becomes more about sort of independent projects... or smaller projects that reflect a greater set of guidelines or a larger ethos or a set of principles on design. And that is sort of the shared concept of the team. And I realize that's kind of heady, but I think what I'm getting at is... As software teams grow, I think you start to see a lot of stratification and specialization in terms of the teams. Where all of a sudden you have your back-end team, or you have multiple back-end teams, or you have your platform team, and then you have your data pipeline team. You start to see this real... specialization and sort of inability to move across those elements. I think what's really interesting is the way things like Cursor rules change that. So for those of you who aren't aware of Cursor rules, basically what you can do is say, "I want all of my tests to be written in this format," or, "I want all of my JavaScript files to be written with these sort of style sets." ... And then whenever Cursor writes code that is a JavaScript file, it'll use that set of style guidelines. ... Where that gets really powerful is things like comments, because you can sort of say, "Hey, whenever you're writing a function, just always write comments, explain exactly what it is." And over time, what's happening is now, whereas before you needed to deeply understand how a given file worked, all of the context for the LLM is already in the file through comments. So you can sort of say, "Hey, Cursor," or, "Hey, Claude, what is the interface? How do I use this function?" Or, "How do I use this capability in the profile?" So you almost don't need that same level of code understanding and code specialization because the context is all right there and it's readable by the LLM. So I think, and granted, there's a gap between where we are today and the world that I'm describing, but I think this is the way that we can start moving towards it and thinking about it, is we don't necessarily need teams to have deep context on back-end systems. I think everyone can have a basic understanding of the platform and the foundation. And then the combination of a good software engineer, ... something like Cursor, and then... like, really good commenting and documentation, which can be written by Cursor as it's doing it, enables anyone to really jump into the code and... fix problems as they come up. So I think what we're starting to see is maybe more of this studio or rotation model, where rather than owning a specific piece of the code base or owning a specific part... of the system, you're almost able to have rotations, which is something I think all engineering teams are trying to do right now, where it's like... a team gets bored, they rotate onto another team or another space, but it doesn't have sort of the same level of flexibility or fluidity that I think most people want. This type of world where the code is almost self-explanatory starts to free people up to be a little bit more... maybe customer-facing in terms of the types of software that they're building, where it's less about, "Okay, I'm working on a very abstract piece of code that I don't really understand how people use," and more about, "Actually, how am I using our existing platform to deliver value to someone?"

I'm looking at this with very rose-colored glasses. I'm sure there's a couple of engineers listening to this being like, "What are you talking about? You're an idiot, this is crazy." But I do feel like we're moving to this world where you don't necessarily need to have... the same level of context on a, or the same level of deep understanding on a given piece of code or software. And I think that unlocks... the ability to be more flexible, to rotate, to be more creative in terms of what you're building.
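
As an illustration of the comments convention Nate describes, here's a hypothetical function written under a rule like "always document purpose, inputs, and failure modes," so the file carries its own context for an LLM or a rotating teammate. The billing domain is made up:

```typescript
// A sketch of the convention described above: a rule forces every
// function to carry its own context, so "how do I use this?" can be
// answered from the file itself, without tribal knowledge.

/**
 * applyProration
 * What it does: computes the prorated charge when a subscription
 * changes mid-cycle.
 * Inputs: daysUsed and daysInCycle (calendar days), monthlyPriceCents.
 * Returns: the remaining-period charge in cents, rounded down;
 * never negative.
 * Failure modes: throws RangeError if daysInCycle is not positive.
 * Used by: the upgrade flow; safe to call with daysUsed === 0.
 */
export function applyProration(
  daysUsed: number,
  daysInCycle: number,
  monthlyPriceCents: number
): number {
  if (daysInCycle <= 0) throw new RangeError("daysInCycle must be positive");
  const remaining = Math.max(0, daysInCycle - daysUsed);
  return Math.floor((remaining / daysInCycle) * monthlyPriceCents);
}
```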

Keith Cowing (25:23)

And you're saying nobody needs that context, or you're saying there's a few teams or a few people that need that context to manage that, and then a lot of other people can be unleashed?

Nate Gosselin (25:31)

I think that's a place where I'm... It's a place where I have the question too. You know, like, I'm articulating sort of an extreme vision of the world. The reality is probably going to be a little bit closer to where we are today than what I'm saying. And you know, the thing that I'm looking to experiment with, with the teams that I'm working with right now... is what does it look like to have... almost like central architects... that enable these kind of satellite teams to be a little bit more fluid. ... And I mean, we used the studio analogy earlier. That's similar to how architecture studios operate, where there'll be some sort of standards or central standards person who's saying, "You know, this is within our design standards, this fits who we are as a studio." So even though we're working on a bunch of different projects, they all meet a certain level of quality and style. And I think we can do a very similar thing with software products, where having a central architect or team of architects who are really responsible for... code structure, code style... kind of... like general sort of patterns and interfaces. And then you can even play that forward to design, to having sort of a central design team or design expert who's focused more on design systems, like, you know, spacing and typography and core components and, again, like, basic patterns, I think then enables... a larger team or a larger sort of array of small feature teams or project teams to then leverage those systems and leverage those tools to build a lot of different pieces of software.
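
A sketch of what that central-standards artifact might look like in code: a small design-token module owned by the central team, which satellite teams (and the AI tools generating their UI) build against. The values are hypothetical:

```typescript
// A hypothetical "central standards" module: versioned design tokens
// owned by the core architects/designers, consumed by every satellite
// team so each project stays within the studio's style.
export const tokens = {
  spacing: { xs: 4, sm: 8, md: 16, lg: 24, xl: 40 }, // px
  typography: {
    body: { fontFamily: "Inter, sans-serif", fontSize: 16, lineHeight: 1.5 },
    heading: { fontFamily: "Inter, sans-serif", fontSize: 24, lineHeight: 1.3 },
  },
  color: {
    primary: "#4f46e5",
    surface: "#ffffff",
    danger: "#dc2626",
  },
} as const;

// Satellite teams build against the tokens, not raw values, so every
// project "fits who we are as a studio" without central review of
// every tiny little action.
export type Tokens = typeof tokens;
```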

I'm a heavy user of Claude Projects and Cursor, and so much of my workflow development with them has been sort of, you know, figuring out, like, how specific do I need to be about a rule here or a request here? The first time someone interacts with it, they probably ask a very broad question. They're like, "This is wild." Like, "I got an answer about what all this stuff is." But then they realize that if they ask kind of broad, vague questions, they get very general answers. And they're helpful answers, but they're general answers. And the power becomes, like, "Okay, how can you take that initial general answer and then drill down into things that are much more specific and much more targeted?" And that's where, like, AI gets really exciting, I think.

Keith Cowing (27:58)

And you talk about prompting, and a good prompt versus a weak prompt. And it gets to the language in large language models where, again, it's not that different from being a first-time manager and delegating for the first time. Like, "Well, that was awesome." They went and did the thing, but it comes back and it's like, "Well, that's not really what I wanted." And it's like, "Well, it is what you asked for, because you weren't specific enough to be very clear." And then you have to come up with a better prompt. And the funny thing with LLMs is it's actually great training for...

Nate Gosselin (28:11)

Yeah. Yeah, exactly.

Keith Cowing (28:23)

...giving direction to people because you put in a prompt and you get immediate feedback. And it's like, "Oh, that wasn't specific enough. I need to be more clear." And then you give it some constraints and you give it more constraints and you tell it what not to do. And you tell it what to do and you give it some good examples. And then all of a sudden it gives you a great response. You're like, "Oh, okay. I guess my prompt just sucked." And a lot of people spend two minutes on the prompt. Maybe you should spend 20 minutes on the prompt. And it just teaches you, in general, to be very clear in your prompting for all things, whether it's your team or whether it's AI. And there's another funny...

Nate Gosselin (28:33)

Yeah. Yeah.

Keith Cowing (28:53)

...sort of paradox in what you're describing, which was creating all the rules and creating the structure for AI to work within. And there's working in the system and then there's working on the system at a company. And the old bureaucratic companies came up with tons of protocols of, "If you travel, here's your budget. You can spend this much at a hotel in California. You can do this, you can do that." It's all rules and procedures because without that, the view is people run amok. And then... the world has sort of shifted towards more empowerment, more creativity, et cetera. But what you're saying is you actually need to bring back all of those rules and protocols, but not for the humans, for the machines, because then they can go work within this system and we can work on the system.

Nate Gosselin (28:58)

Yeah. Yeah. Yeah. Yeah. Right.

Nate Gosselin (29:29)

Yeah. Exactly. I think that's a great way of framing it.

I think now people are also realizing it's not just prompt engineering. It's also, like, context engineering. Like, if you give an LLM too much context, it's going to give you a terrible answer. You know, so it's like, how do you figure out that right level of detail, to your point about the interns, of like, "Okay, I don't need to tell you everything that's happening in the C-suite. I need to tell you that our chief product officer wants this, and here's the very specific thing that I need from you." You know, you don't need to worry about all the other organizational context that's happening there.
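
Here's a toy TypeScript sketch of that context-engineering instinct: instead of handing the model everything, select only the pieces relevant to the task. The types and the relevance heuristic are hypothetical stand-ins for whatever retrieval you'd actually use:

```typescript
// Don't dump everything you know into the prompt; scope the context
// to the task. Types and scoring here are deliberately crude.
interface ContextDoc {
  title: string;
  body: string;
  tags: string[];
}

function buildPrompt(task: string, docs: ContextDoc[], maxDocs = 3): string {
  // Crude relevance filter standing in for real retrieval: keep only
  // docs whose tags appear in the task, then cap how many get in.
  const relevant = docs
    .filter((d) => d.tags.some((t) => task.toLowerCase().includes(t)))
    .slice(0, maxDocs);

  return [
    "You are helping with one narrowly scoped task.",
    `Task: ${task}`,
    ...relevant.map((d) => `Context - ${d.title}:\n${d.body}`),
    "Ignore anything not needed for this task.",
  ].join("\n\n");
}

// "Tell it the chief product officer wants this," not everything
// happening in the C-suite: the org-changes doc gets filtered out.
console.log(
  buildPrompt("Draft the billing page copy", [
    { title: "Billing FAQ", body: "...", tags: ["billing"] },
    { title: "Q3 org changes", body: "...", tags: ["org"] },
  ])
);
```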

What I find really interesting is that, and this goes back to the kind of product producer idea, is, like, before samplers, in order to be a musician, you needed to not only understand an instrument, but you also needed to understand music theory and scales and chords and all of the things that go into creating music that sounds okay. And then once the sampler came out, you were able to focus a little bit more on feel and, "What's the emotion I'm trying to create here?" without necessarily needing to master all of those technical skills. And I think we're seeing a similar thing now in software where, you used to have to understand all of this real... kind of intense tech around your back-end stack and deployments and testing and all of these things just to make sure that the code works okay. ... And it would keep you from building the actual useful part of the tool for people. And I think now those sort of back-end, tedious tasks are the things we can start to delegate to our army of LLM agents, enabling us to focus a little bit more, I think, on the user-facing outcomes and the user experience of the tool.

Keith Cowing (31:23)

You've painted a picture of what the future looks like with this studio. You talk about how to use AI to be very structured to work in the system. You can have creativity on the system. You can work in a few different areas, but now to get from here to there... What are you seeing people do that's effective to iterate towards the org model that will enable that to succeed and to work? What should leaders be doing literally right now as incremental bets to shift the way that they operate?

Nate Gosselin (31:53)

The steps, to me, are like: have your teams just map a workflow. Like, whatever the process is, it can be, you know, "How do I go from product brief to delivered code?" It can be, "How do I go from... creative brief to trafficked ad?" and take that workflow and then look at it with your team and look at it with someone who's AI literate and say, like, "Okay, which pieces of this workflow can we automate?" Pick one, try to get a first version of that out in a week, and then, like, iterate on it until...

Keith Cowing (32:30)

And that's one person? You're assigning this to a single producer, if you will?

Nate Gosselin (32:32)

It can be one. I think that's one of the things that's exciting about a lot of AI tools right now is that you don't necessarily need to be a technical person. You just need to be... you need to know how to ask Claude for the right thing, you know? ... Like... there's this platform right now, n8n, which is sort of Zapier on steroids. And it's fairly technical, but, like, if you give an LLM the docs and say, "Hey, I need a workflow that does this," it'll give you a 60 or 70% version of that workflow. And then you just need to sort of tweak it around the edges 'til it works. So there's already a lot of tools available for automation that, I think, are accessible to people who aren't necessarily engineers, but are, like, I don't know, maybe tech-curious or have a bit of a sense of, you know, maybe they like writing Excel formulas. Like, that's the perfect person to start doing these things because I think it's that same level of sort of understanding. But to your question, it can be one person, it can be two people, but the key is to understand the overall system and then systematically pick out small pieces of it to implement AI into the system or build up an AI system that handles that part of it. Because I think that gives you the confidence to say, "Okay, well, first version wasn't that great. Second version, better. Third version, we actually don't need to touch this anymore. Like, this is... I hit a button and then I get what I need." And now you can sort of move on to the next part of the system. And I think having that sort of view of your workflow and that understanding of, like, "Okay, which part of these systems can I delegate to an LLM or an AI agent?" ...enables you to have a much more intelligent view of who I actually need to hire. Like, you may find that you don't actually need to hire more bodies. You need to hire, like, 50 agents. Or you may find, "Actually, what I need is, you know, a couple of people who are really good at creating style guides and then my AI agents can do the rest." You know, and I think that's sort of the angle that I think the smaller teams should be taking on these. ... It's just being really smart. Like, before you open a JD, make sure that you've sort of checked it against the system and asked, you know, "If we need to dial this up, where are the bottlenecks? And are those bottlenecks solvable by a really good use of AI?" And like...
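
For teams that would rather start in plain code than in a tool like n8n, here's a hedged sketch of automating one small workflow step: turning a brief into draft tickets for human review. It assumes the OpenAI Node SDK and an OPENAI_API_KEY in the environment; the model name and prompt are illustrative, not a recommendation:

```typescript
import OpenAI from "openai";

// One small piece of a mapped workflow, automated: product brief in,
// draft tickets out, with a human reviewing before anything ships.
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function draftTickets(brief: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "Split the product brief into 3-6 engineering tickets. " +
          "For each: a one-line title and acceptance criteria. Output markdown.",
      },
      { role: "user", content: brief },
    ],
  });
  return res.choices[0].message.content ?? "";
}

// First version won't be great; iterate on the prompt until, as Nate
// says, "I hit a button and then I get what I need."
draftTickets("Add CSV export to the reports page for admin users.").then(
  (tickets) => console.log(tickets)
);
```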

I think for larger organizations, it's a lot harder, obviously. You have... I think part of the reason you're seeing so many layoffs is people are going through this idea of, "I can automate that." And that's what I think... where a lot of this sort of existential fear about AI is coming from. But I think the basic idea is still the same. You know, it's like, take a small project, give it to one or two people and ask them to solve it with AI. You know, I think having a clear sense of... that overall workflow and that overall system that they need to fit into will help. I think the difference with the larger organizations is you then also have the change management... aspect on top of that. Like, you may find that when you start automating things, you're going to unlock part of your system or a part of your team that may not make sense anymore. So what do you do? Do you retrain them? What do you do in those scenarios? And I think that's what's gonna make it much harder for the larger teams and the larger companies to sort of adapt to this. But I think those teams that are in sort of that, you know, 50, 100, 200 sort of person space are really well-positioned right now to take advantage of this and scale in more of, like, an AI-native way.

Keith Cowing (36:32)

And we're talking about this stuff. You're playing with fire. It's super powerful, but you're going to burn yourself a little bit here and there until you figure out how to use it. I subscribe to the idea that everything is learnable, but not everything is teachable. A lot of times you have to self-discover things. And so companies as organizations have to self-discover...

Nate Gosselin (36:38)

Absolutely. Yeah. Yeah.

Keith Cowing (36:48)

...what works here, what doesn't work here, where is this powerful, where is it not, how do you approach it? And you just have to start that learning curve because you have to climb the learning curve and you can't start at the top. You just have to go. So with that in mind, what are some examples of things that you've seen people try or that you've tried that didn't work?

Nate Gosselin (37:03)

One is maybe more, like... a little bit more team-structure-based, going back to this idea of the producer and sort of how you work through these different problems. I was working on a team that was trying to build an AI experience. The idea was to be able to quickly get an AI explainer of different parts of the page that you're on to help you understand the product that you're working with. And they basically had a creative technologist and then they had a design team that were working on the same problem at the same time, but without a really clear sort of understanding of who was doing what. ... And basically, the design team went through all of this effort to build these Figma prototypes, they did all this design, they did all this user testing. They were like, "My God, this is the perfect thing." And then we actually tried to implement it and it just didn't work. Like, it was so off in terms of the types of responses that we got back from LLMs, the overall experience. Meanwhile, the creative technologist already had a working prototype that I think we ended up pushing to production, like, a week later. You know, and what was really powerful there was... I think we're used to sort of thinking about design first and then building the system to meet the design. We're at this weird state right now where a lot of the design is also in sort of the prompt and context engineering and understanding what's the type of response that you get back from an LLM. So you almost need to be... you can't do that more waterfall-y, like design, code, test. You almost need to do it all at once, but do it in a very small way so that then you can sort of build up on top of it and just, like, continue refining. It's more about fast iteration through that development cycle than it is about your traditional cycle.

Keith Cowing (38:53)

Then, in terms of personal transformation, as leaders, we have to go through our own psychological adjustments here. And what's a belief that has worked well for you in your career that you've held, but could hold you back moving forward?

Nate Gosselin (39:08)

I think something that's changing as I look at this is an increased sense of confidence and sort of a blurring sense of disciplines. Like, I think so much of... you know, career understanding in the past was about... you always start as a generalist, but then you start to move into specialization. Even product managers are kind of the ultimate generalists. And then you're like, "All right, well, maybe I'm going to be a SaaS PM, or I'm going to be a B2B PM," whatever industry you choose. And similarly, even within an organization, you're like, "OK, am I going to be a sales marketer? Am I going to be a product marketer?" There's these levels of specialization that I'm noticing are becoming less and less valuable because the kind of breadth that people are able to achieve with AI is really starting to change. Like before, when I would look at myself within a... like, a product team, I would be like, "Okay, well, I have opinions about design. I have, like, questions about code structure, but I don't necessarily know how to visualize or articulate or how, like... like, 'This is the way I think it should work.'" And I think now, because you have the ability to test things out yourself with AI, I think we're starting to see people kind of blur those. So for me personally, it's like, I'm almost starting to move away from the idea of "product manager" and thinking about myself more as a "product builder," where depending on the construct of the team, I can take a different role. It can be around strategy, it can be around... you know, leadership, or it can be like, "Yeah, like, put me in, coach. I'll take Cursor and, like, I'll crank out some features for us." Like, I think what's changing is really this idea that we need large teams or large specialized teams to have impact. I think we're getting to a point where small teams with a real willingness and ability to learn are now able to have outsized impact on what they're doing.

Keith Cowing (41:23)

I think underlying that is something we all need to let go of, which is this sort of almost philosophical alignment with a function where you identify as fitting in this silo. And guess what? That created the silos. ... And, you know, you go back to the '90s and they didn't have silos. They just had a team working on stuff and then a manager who barely had a title was just "the boss." And then the 10 people that built the stuff, and everybody was full-stack because there was no definition of different parts of the stack. You just did the thing.

Nate Gosselin (41:30)

Yes. Yeah, exactly. Yeah.

Keith Cowing (41:50)

...I think it's going to be really healthy to start to break down some of those silos. And part of breaking down the silos is not getting them to work better with each other, but just literally merging them, in my opinion, and saying, "Hey, let's see what it looks like when you have different personalities on the team," and you should lean into those. You want somebody who's more on the creative side and somebody who's more on the... you know, risk aversion and process side, and somebody who's really detail-oriented and somebody who loves talking to customers all the time and whatever that blended personality is. But there are some cross-cutting skill sets around asking questions and making sure you know where you're going and communicating really clearly, et cetera. And I think it's going to be really, really healthy for teams. And you can't look at the old model and say, "How do we evolve it?" Once in a while, you get a zero-based budget and say, "If we started fresh, let's, like, throw out these old titles and let's talk about what the future looks like." And, um...

Nate Gosselin (42:18)

Absolutely. Yeah. Absolutely. Yeah. Yeah, what would it look like?

Keith Cowing (42:40)

As an example on my side, I've always been a perfectionist and it's something that has served me well in terms of quality and focus. But there's always a trade-off of quality and velocity and you always needed to manage that. But I think right now the importance of velocity is so high that I need to let go of a bunch of that. And if you don't move today, the world's just literally going to pass you by by the time you get something out. And so you got to crank up your iteration cycle really, really high. And within that, you can still care about quality and you should...

Nate Gosselin (42:56)

Yeah. Yeah.

Keith Cowing (43:09)

...and you need to. But if velocity is not extreme right now, you're just gonna get left in the dust. And so you really gotta, like, ship and then figure it out and then just keep doing it.

Nate Gosselin (43:19)

I agree, like, velocity is the most important thing right now. I also, I have this sort of aversion to it. It's like, I don't think the answer, to your point, is to just... blast. Like, the answer isn't to take the shotgun approach. The answer is, like, okay, just, like, much smaller iteration cycles. Like, you can get something out that's usable and useful faster than ever. Like, do that. Like, don't feel like you need to get to the 90% or the 100% edge. Like, get 70% out there. It'll work, people will use it, and it'll still be useful. Yeah. Yeah.

Keith Cowing (43:54)

Or at least you'll learn, and you'll learn quickly. And whoever learns faster is going to win. So that iteration speed is so key. Doesn't mean quality has to come down. Scope is your friend; managing scope. So high quality, low scope, go fast, fast, fast. And the people that are iterating, not overthinking it, just doing it, are the ones that are really getting ahead. But Nate, this has been a fantastic conversation. I think you had a lot of great nuggets here that people can use as they're navigating rapid change, as they're leading their team, as they're setting up their structures for the future. It's exciting to think of a world where it looks and feels more like a studio and we have all these protocols in place, but that's for the machines, and the humans can work on the system and be creative. And it's awesome to have an optimistic view on what the future of creating great products looks like. So thank you for...

Nate Gosselin (44:37)

Have fun.

Keith Cowing (44:43)

...joining me on the show.

Nate Gosselin (44:45)

Thanks for having me. This has been awesome.

Keith Cowing (44:48)

This episode was brought to you by the Executive Product Leadership Program at Cornell Tech, a two-day intensive program for product and technology leaders in New York City, where you will work with top AI researchers, industry leaders, and people like Nate and myself for two days of building relationships, facilitated workshops, and defining the future. Learn more at https://kc.coach/cornell. I hope you enjoyed that episode. If you did, please share it with a friend and leave a positive review on your favorite platform. It's the best way to help the show. Until next time, enjoy the ride.
