Check out the conversation on Apple, Spotify and YouTube.
What We’re Covering Today (1:40)
Aakash: Most product managers dream of building something big, but this one actually did. Laura Burkhauser is the CEO of Descript, the $550 million AI video editing giant. But she didn’t start as CEO. She started as an IC PM.
In today’s episode, you’ll hear the full path Laura took from IC PM to head of product to VP of product to CEO, and the features she shipped along the way that created that confidence in her.
Laura, welcome to the podcast.
Laura: Thanks so much for having me. Excited to be here.
The Features That Led to CEO (2:14)
Aakash: I want to get right into it. What were the features that you shipped that led to you getting promoted to CEO?
Laura: Yeah, well, for folks that don’t know, Descript is an AI video editing tool that allows you to edit your video just like a doc. And that’s how I would have introduced it a few years ago when I joined. I didn’t really know what I was looking to do at the time. And I started a podcast with my friend, and when I went to go edit that podcast, I found this software that I totally fell in love with, Descript.
Suddenly, I was a podcast editor, then I became a video editor. The best products out there, they don’t just do a job for you. They transform how you feel about yourself, and that’s what Descript did for me. So I knocked on their door, and I said, hey, are you guys hiring product people?
And this was an interesting time in tech. Let me share my screen. So that you can see, here’s what Descript looks like. So, you can see that I’ve recorded a video here. On my left is a transcript of my recording. On my right, that’s my video. Down here, I have the timeline, which I can, you know, make small, or, you know, do some traditional timeline editing in, or I can just get rid of it entirely, which is usually how I edit in Descript because I hate the timeline.
And I could just edit this transcript, and as I do that, my video will be edited as well, right along with it, right? So that’s the product as I found it when I joined, which is what everybody wished video editing really was like, as simple as that.
Aakash: Yeah, it was one of those instant “I’m in” moments, especially for people doing longer-form editing, or with video that’s really script-based, because if you remember what it used to be like, you know, you’d be scrubbing through the timeline trying to identify the sound patterns. It was hell.
Laura: Yeah, and now we don’t look at the timeline at all for script-based editing. Yeah, so this is how I found it. It was already a magical product, already had PMF with a lot of people. But we were entering into this new age of AI. And actually, Descript has been AI native since our inception. But we used to never talk about it because people used to not care at all what the technology was that was undergirding your software as long as it worked.
Then we went into the great AI boom where everyone just started slapping AI on their websites, and everyone felt this big rush to integrate more AI into what they did. Now, we already had an AI native editor, but it also led us to ask the question, what do we do now that LLMs are generally available to bring into Descript?
Building the First AI Tools (5:04)
Laura: And the first major use case that presented itself, I’m gonna open up our AI tools toolbar over here. And you can see that we have these options: edit for clarity, remove filler words, remove retakes, and add chapters. And these are all really good examples of new kinds of editing that LLMs make possible.
Because LLMs are really good at working with language, right? They actually can’t work with much else: they’re large language models, and they’re really good at working with language. So, what do you do when you are a script-based editor? Well, great. Now you can feed the script into an LLM, tell that LLM what your objective is, write a prompt, put that prompt behind a button, and now you kind of get multi-modal video editing for free.
So let’s click on remove retakes. And what you’ll see happen is it’ll automatically detect that I actually had a pretty rough start getting into this video, and it will seamlessly bring together the first part of this first take with the last part of the last take, knowing that that’s where people are often strongest. And I’ll have a new intro that doesn’t have a million starts. Here’s another place where I had to record twice, and it just got rid of that first one.
And you know what’s behind this is pretty simple prompting, but it’s just built on a real user need, and that was kind of our philosophy for how to build these AI tools: build them in these prepackaged, parameterized, job-based buttons that can give you a reliable result over and over again.
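To make the “prompt behind a button” pattern concrete, here’s a minimal sketch in Python. This is not Descript’s actual code: `call_llm` is a placeholder for whatever model API you use, and the prompt text is purely illustrative.

```python
# A minimal sketch of a prepackaged, parameterized, job-based AI tool:
# one fixed objective, one fixed prompt, with the user's content as input.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

REMOVE_RETAKES_PROMPT = """You are editing a video via its transcript.
Each transcript line is numbered. Find groups of lines where the speaker
re-records the same content, keep only the strongest take (often the start
of the first take joined to the end of the last), and return the numbers
of the lines to delete, separated by spaces."""

def remove_retakes(transcript_lines: list[str]) -> list[int]:
    """The 'button': same prompt every time, parameterized only by the input."""
    numbered = "\n".join(f"{i}: {line}" for i, line in enumerate(transcript_lines))
    response = call_llm(f"{REMOVE_RETAKES_PROMPT}\n\n{numbered}")
    return sorted(int(tok) for tok in response.split() if tok.isdigit())
```

Because the transcript is text, the edit reduces to returning line numbers to delete, which is exactly why a script-based editor gets this kind of multi-modal editing “for free.”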
The Development Timeline and Technical Challenges (7:00)
Aakash: So what was the timeline of this feature? When did you guys realize, OK, retakes is one of the first use cases we wanna build. How did you guys experiment with LLMs and the prompting to figure out, OK, we can actually solve this, because I know, especially back then, one of the big concerns was context window. Can we even put in a whole podcast transcript into an LLM? It was a lot less possible back then. And then how did you guys eventually roll it out to users and have the confidence to ship it to everybody?
Laura: Right, with the context window, how we had to solve that problem was, can we send it over in chunks? And then, OK, that works for remove retakes, right? Because usually you can send it over in chunks that are meaningful enough that you can detect retakes. But that was a problem for buttons that you would assume would be able to contain a longer context window, like something like rewrite, where really to rewrite, you probably want to understand the complete journey before you rewrite it, right?
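Here’s a rough sketch of that chunking workaround, under the assumption that retakes happen close together, so overlapping windows will catch them. The sizes are illustrative, and it builds on the hypothetical `remove_retakes` above.

```python
# Split the transcript into overlapping chunks that each fit the context
# window, run the job per chunk, and merge the results.

def chunk_transcript(lines: list[str], max_lines: int = 200, overlap: int = 20):
    """Overlapping windows, so a retake straddling a boundary still
    appears whole in at least one chunk."""
    step = max_lines - overlap
    for start in range(0, len(lines), step):
        yield start, lines[start:start + max_lines]

def remove_retakes_chunked(lines: list[str]) -> list[int]:
    to_delete: set[int] = set()
    for offset, chunk in chunk_transcript(lines):
        # Indices come back chunk-relative, so shift them to absolute positions.
        to_delete.update(offset + i for i in remove_retakes(chunk))
    return sorted(to_delete)
```

As Laura says, this works for a local job like retakes, but it falls apart for a job like rewrite that needs the whole arc in one pass.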
So we definitely had to understand the limitations of the technology at the time. You ask, how did we decide on the use case? Well, I think one of the great things about Descript is we already had a really deep understanding of customer problems, right? At this point, we had been around for a couple of years, we had watched people, person after person, go through the multi-hour process of editing a video, and we kind of understood, OK, you have your script readers and you have your improvers.
Script readers will have one of two problems. Either they’re reading off a script like this, in which case the problem that they have is that they’re not looking at the camera, or they’re reading off a script and then looking into the camera and delivering just that one sentence, and often screwing it up.
And so for that first kind of reader, we looked around and found a model called eye contact that we integrated into Descript, so you can just add eye contact and you can see here it is without eye contact, with eye contact, without, with, so that it looks like they’re looking at the camera, which is pretty important.
And then for people that are doing the read and then do a take, they tend to have a lot of retakes. So we sort of already had mapped out, here is the journey of being a YouTuber or a podcaster. Either you’re scripted or you’re not. If you’re scripted, one of these two things will have to help you. And if you’re not, then what you’re gonna want to do is edit for clarity.
Edit for clarity is the action that you want to use if, like me, the way that you record video is just kind of blathering into the camera for 18 minutes and then you want it to just take out all the parts where I didn’t know what I was talking about and make it sound really cogent and concise.
Product Thinking: Mapping Problems to Solutions (10:05)
Aakash: OK, so if I’m somebody trying to take the lesson from this, we have a clear mental model of what are some of the user problems that we aren’t solving today. In this case, some of them are eye correction and removing retakes, and then we’re looking at what becomes recently possible with the advent of LLMs, maybe being able to actually identify what those retakes are and this new model being released around eyes and incorporate that into our product. How did you roll this out to users and develop confidence that this should go out to everybody?
Laura: Yeah, so for AI tools, what we did was, you know, if you have a toolbar, you need more than one thing to be part of that toolbar. So when we announced the idea of Underlord, which is what we used to call this, but when we announced the idea of the AI toolbar, we thought, what are 6 killer apps that could be part of this toolbar?
Based on problems that customers have, problems we think we could solve well with the technology, or problems that we don’t think we can solve well with the technology today, but we think that over the next few months as the context window increases or whatever—we were at an age then where it felt like every two weeks things got step change better. I feel like now we’re sort of in the boring part where things are pretty good and each new model release is just incrementally better than the last. I hope I’m wrong and we get another banger in a month or two, but I feel like the last few, it’s been like, eh, OK.
Anyways, things were changing so quickly that we also did have a rolling backlog of: this is a really great idea, but it’s blocked because of technological limits right now, but we’ve already thought through a lot of how the product will work, so that as soon as the technology is ready, boom, we’re ready to go.
And so we kind of had that mapped out in our mind, and we had 6 that we wanted to go to market with. I believe those were edit for clarity, remove filler words, remove retakes, add chapters, and then a bunch of these publishing ones, because they’re really easy, right, from a technology perspective. You need to write a good prompt, but drafting a title, summarizing show notes, YouTube description, social post, blog posts—those are pretty easy to do. The brainstorming ones, somewhat easy to do too. So those were the ones we kind of went to market with first.
Human-Driven Evals Before the Evals Era (12:36)
Laura: You asked how we made sure they were good enough. This was before there were a gazillion thought pieces about how evals ought to work. And so a lot of it was human-driven evals by another name, where you’re sort of like, OK, I’m gonna test this out on a bunch of production data and see if I would use the result as a customer. If I would, ship it, and if I wouldn’t, don’t ship it.
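That “human-driven evals by another name” process can be as plain as a script and a spreadsheet. A sketch, with an invented threshold:

```python
# Run the tool over sampled production data; a human answers one question per
# result: would I ship this as a customer? Ship the feature only if the
# would-ship rate clears a bar you pick (80% here is arbitrary).

def human_eval(samples: list[str], run_tool, ship_bar: float = 0.8) -> bool:
    would_ship = 0
    for sample in samples:
        result = run_tool(sample)
        answer = input(f"Result:\n{result}\nWould you ship this? [y/n] ")
        would_ship += answer.strip().lower() == "y"
    rate = would_ship / len(samples)
    print(f"Would-ship rate: {rate:.0%}")
    return rate >= ship_bar
```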
[Sponsor Break: Maven – 13:07]
Aakash: Today’s episode is brought to you by Maven. The problem with most courses online, like Udemy, is there’s no live component, and the instructors aren’t experts in their fields, they’re professors. At Maven, you get direct live access to experts and operators from the world’s best tech companies. You can’t get that access anywhere else, in any university, and you usually can’t find them on YouTube either.
I’ve featured so many of Maven’s experts in the newsletter and podcast for that reason. To help you out, I’ve put together a collection of courses I recommend at maven.com/x/aakash. This includes courses like AI prototyping for PMs, product sense for PMs, and getting an AI PM certification. Visit it now at maven.com/x/aakash.
[Sponsor Break: Pendo – 13:55]
Aakash: Today’s podcast is brought to you by Pendo, the leading software experience management platform. McKinsey found that 78% of companies are using Gen AI, but just as many have reported no bottom line improvements. So how do you know if your AI agents are actually working? Are they giving you the wrong answers, creating more work instead of less, improving retention, or hurting it?
When your software data and AI data are disconnected, you can’t answer these questions. But when you bring all your usage data together in one place, you can see what users do before, during, and after they use AI, showing you when agents work, how they help you grow, and what to prioritize on your roadmap.
Pendo Agent Analytics is the only solution built to do this for product teams. Start measuring your AI’s performance with Agent Analytics at pendo.io/aakash. That’s P E N D O.io/aakash.
Staged Rollout and Production Data (14:44)
Aakash: So did you guys stage the rollout? Did you create a closed beta or something? Because sometimes that production data is the most important thing, and when you’re creating synthetic versions in your room, you don’t always replicate the funny things a user might do out in the field.
Laura: Yeah, now we have a lot of production data that we can work with, right? We use Descript a ton. We’re very intentful users, and I’d say for these ones, they were pretty straightforward. So yes, the answer is yes, we did launch them as a beta. We didn’t do private beta, we felt good enough to immediately go to public beta.
But honestly, these were pretty straightforward, and while we continued to tweak them and sometimes ran A/B tests when we tweaked them, that felt much safer than what came a year later. This past year, we launched Underlord, our agentic co-editor, who can work toward objectives, who’s kind of like an open-world agent.
And when you have a truly open world agent where people can type whatever they want into Underlord, the really great part of that is the emergence, right? My favorite part about building AI products is emergence and having the kind of product that allows for emergence. But if you’re allowing for emergence, you’re also allowing for a lot of whack stuff to happen in your product. And so that’s where production data is really important.
Measuring Success: Adoption and Retention (16:16)
Aakash: So before we get even deeper into Underlord, which we have to really diagnose for people how you guys built this open world agent, how did you measure success for the AI tools?
Laura: Yeah, so we looked at adoption and retention. Obviously it’s more complicated than that; we have a little thumbs up, thumbs down that we can also use for training. But mostly, here’s what success meant: remove filler words was the very first action that we launched. It actually used to live elsewhere, and it was one of our most popular features in the product, and just a big success commercially for us as well.
And so we use that as our baseline. How does edit for clarity compare with remove filler words when it comes to adoption, and then even more importantly when it comes to retention. Because if this is a useful AI tool, then people will use it over and over again.
And the other thing that you can look at is not just did they apply it, but did they export with it, with whatever it did, in the export. So was it up to their quality bar that they want to include it in their final product.
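Those three signals, adoption, repeat use, and whether the result survived into the export, are easy to state as code. A sketch against a hypothetical event log (the event and field names are invented):

```python
from collections import defaultdict

def tool_metrics(events: list[dict], tool: str) -> dict:
    uses = defaultdict(int)   # user -> number of times they applied the tool
    exported_with = set()     # users whose final export contained the result
    for e in events:
        if e["tool"] != tool:
            continue
        if e["type"] == "applied":
            uses[e["user"]] += 1
        elif e["type"] == "exported_with_result":
            exported_with.add(e["user"])
    adopters = set(uses)
    return {
        "adopters": len(adopters),
        "retained": sum(1 for n in uses.values() if n > 1),
        "export_rate": len(exported_with) / len(adopters) if adopters else 0.0,
    }
```

The baseline comparison Laura describes is then just running this twice, for example `tool_metrics(events, "edit_for_clarity")` against the same numbers for remove filler words.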
Aakash: Which makes a lot of sense.
The PM’s Role in AI Features (17:35)
Aakash: So what were the teams behind this? How did you guys staff this and how should people be staffing it? Because I keep hearing a lot of concerns around what is the role of the PM? Is it blurring with designer and engineer? What does the PM specifically own on an AI feature like this?
Laura: Yeah, it’s a really fun time right now in EPD because just theories abound about which of these jobs are gonna make it, which of these jobs are gonna become the same job? Are we gonna need engineers in the future? Are we gonna need PMs in the future? Are we gonna need designers in the future? Anyways, whole long, beautiful monologue we can go off on there.
But I do think our roles are changing, and the role of the PM in a product like this is the same and different. So as a PM you always need to understand your customer and what success looks like for your customer, what the job they’re doing is, what the needs they have while they’re doing that job are, how they solve that problem currently, right? All that good product thinking.
And because you have all that good product thinking of job, circumstance, needs, alternatives, you, and only you are qualified to write the eval criteria for what a good job looks like, what a “you did the job, but I’m not gonna say you did it well” job looks like, and what a “you broke my product, you broke my video” looks like. And you’re the one who is best able to really codify that.
A good example of what I mean is in edit for clarity, actually. Our initial run at edit for clarity really took into account the sentence structure that you would see over here, but what it didn’t take into account is the kind of thing that anyone who is actually editing a video, or really understands video editing, cares about, which is: how many jump cuts per 10 seconds are you putting in my video?
Because when the answer becomes more than one or so, that’s usually way too many jump cuts, right? You’re gonna sound insane when you publish this thing, or you’re gonna look like you’re moving all over the screen. And so that’s an example of nuance in success that I think a product manager is really well positioned to think about and catch.
And if you try to delegate that to an LLM, if you try to delegate that to someone in support, if you try to delegate that to a researcher, you often aren’t gonna get that same level of deep knowledge of the customer.
I will say that for creative stuff, sometimes the product manager isn’t even enough there though, and you need a real creative person with taste. So when it came to our audio model, Studio Sound, the best person we ever had running the evals for that was a professional cellist who was just an audiophile and knew what beautiful sound sounded like.
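The jump-cuts-per-10-seconds criterion from a moment ago is a good example of PM-written eval logic, because it’s mechanically checkable. A sketch, with an illustrative threshold:

```python
# Flag edits that produce more than about one jump cut per 10 seconds,
# the nuance Laura describes a PM codifying into the eval criteria.

def jump_cut_density_ok(cut_times: list[float], duration_s: float,
                        max_cuts_per_window: int = 1,
                        window_s: float = 10.0) -> bool:
    """cut_times: seconds at which the edited video has a visible cut."""
    start = 0.0
    while start < duration_s:
        cuts = sum(1 for t in cut_times if start <= t < start + window_s)
        if cuts > max_cuts_per_window:
            return False
        start += window_s
    return True
```

A sliding window would be stricter, since clusters of cuts can straddle fixed boundaries, but the point is the same: the person who knows the craft writes the bar.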
Aakash: Yeah, totally.
The PM’s Output in Evals (20:44)
Aakash: So we’ve been hearing from a lot of people that the PM is really important in evals, although they may not be the only person involved. Like many other products, they need to bring the right experts in to look at those things. Maybe it’s an audiophile, in your guys’ case, for audio features. What exactly is the output of a PM? You mentioned evaluation criteria, because there are these evaluation platforms nowadays and observability platforms, and there’s the system prompt for the LLM judge that you’re literally using. What evaluation criteria should PMs be writing out? What good examples are, few-shot examples? What should that tangibly look like?
Laura: I don’t think that it is doing anything in a tool, personally. At least for us, the engineering team and the research team handle literally setting up the tool to do the eval. The role that the PM plays in setting up evals is setting up the decision criteria: what high pass, pass, and fail look like for different queries.
We randomly select representative queries across real production data and don’t share those with engineers, but we do share them with the PM, who then says, OK, for these kind of randomly selected queries, here’s what success looks like. And then this eventually goes into training the LLM judge.
We first have a human judge it and make sure that they really like the criteria. Then we have a human and an LLM both judge it, and we see, does the LLM agree with the human or does the LLM disagree, in which case we may have a tiebreaker. The tie goes to the human, for now. Maybe I look forward to the day that I’m like, actually, Alex, I think the LLM had this one. And then eventually we delegate to the LLM.
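That human-then-LLM progression is a calibration loop you can write down directly. A sketch; the judge functions stand in for a human grader and an LLM-judge prompt, and the labels are the high pass / pass / fail grades she mentions:

```python
# Grade the same outputs with a human and an LLM judge, measure agreement,
# and resolve disagreements in the human's favor until the LLM earns trust.

def calibrate_judges(items: list, human_judge, llm_judge) -> tuple[float, list]:
    agree = 0
    labels = []
    for item in items:
        h, l = human_judge(item), llm_judge(item)
        agree += (h == l)
        labels.append(h)  # tie-breaker: the human wins, for now
    return agree / len(items), labels
```

Once agreement stays high across representative queries, you can start delegating the judging to the LLM.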
Lessons and Failures (22:41)
Aakash: OK. And then were there any lessons in the audio tools, feature sequencing, or any failures along the way that we could learn from?
Laura: I mean, OK, yes, I have a couple of good failures you can learn from here. One of them is, you know, I told you that we used to have this real audiophile who was just using judgment—vibes, as we call it on the internet. And then when he left, we tried to turn that into a list of criteria.
And one of the things that we realized after getting to what I think was a worse model, is that we were not clear about what the use case was. So, I believe, given the information that I have now, and I reserve the right in AI to update my perspective every day, certainly every quarter.
Right now, I believe that there will not be one model to rule them all, but that we’re going to find that different models handle different specific use cases better and some worse. And so with Studio Sound, we got to what I think was a worse model for our users, because our evaluators were using sample data that was really bad audio.
Like, wow, someone is vacuuming in the room while I’m doing this podcast, and what’s going to best remove that person vacuuming right next to Laura? And it turns out that the model that is the best at making terrible sound sound good, and the model that is the best at making OK sound sound good, are different models.
And what I mean by OK sound is, hey, I don’t have a Shure microphone. What I have is a laptop, and I’m not wearing headphones, and I’m kind of just talking into my computer like this. And Studio Sound is best for that. And that’s the most common use case, right? That’s why people use it: because I’m recording my podcast and I don’t want to buy a $900 microphone.
And so what we needed to do was make sure that, when we did the evals, we had sample data that was representative of the actual use case we wanted to nail, and to care a little bit less about, wow, there’s a jackhammer going right outside my window. Does that make sense?
Aakash: Yep, 100%.
Introducing Underlord: The Agentic Editor (25:12)
Aakash: So that’s AI tools. As you previewed, Underlord, your agent is really the next frontier. This is as everybody is now trying to ship an agent into their product. So talk to us a little bit about how you guys developed this feature and what the life cycle was of getting it out into the public.
Laura: Yeah, so this is a much harder thing to build. So Underlord, just quick correction, Underlord, not overlord. Because nobody wants an AI overlord, right? But couldn’t you use an AI underlord to do your bidding? And that’s how we both positioned Underlord in the market, and that’s how we find that a lot of people want to work with an AI assistant in creative spaces.
I am the creative, and Underlord, you can be my brain buddy, and you can execute on what I say, but I’m in control and I wanna maintain creative fulfillment from building this thing. Creative fulfillment’s really important for our customers.
So here’s the thing about Underlord: whereas the AI tools can do 28 things really well, as soon as you start to want something else, you’re just SOL, you have no options. And where this really started to come to a head was with this very popular feature we have called Create clips.
And what Create clips does is, if you have a long-form video, a webinar, a podcast, probably you want to create some social clips to complement that, to help you market it. This is a very common workflow. And so what you can do is say, oh, I want, you know, 3 clips. I want them to be about 10 seconds each, they’ll be square, and they’ll use this template or whatever, maybe one of your own.
And OK, we built this as a V2, you know: you make your clips, and then we heard people say, OK, but I want to be able to tell you what topic my clip should be about. And so we were like, OK, let’s build that into the prompt: optionally, as an input, the customer may give you a topic. Here’s what a topic is. Here’s how to think about that. You need to go and do topics.
And so then people were like, oh Laura, I love the feature, I love the feature, but just one more thing. Could you make it so that you can also say who you want speaking in the clip? Or, hey, could I also tell you that I want this clip to be optimized for Instagram, which is a little bit different than the way that you would optimize for YouTube Shorts?
And I’m like, oh my God, are we just gonna build 30 different knobs and dials onto this clips thing until it can handle every idea that a customer has, all of the input they want to give Underlord or an AI tool? At that point, maybe you don’t want this parameterized button thing. Maybe you want chat, right?
And I’m not someone who always thinks that chat is better. I think buttons are often better, but by the time you have 30 parameters, maybe you just wanna tell Underlord what it is you’re trying to accomplish.
Underlord’s Two Key Use Cases (28:21)
Laura: And so Underlord is good at helping you get to objectives, and it’s good at helping you do truly customized workflows. Those are the two reasons to use Underlord instead of an AI tool.
By objectives, I mean, you saw that the AI tool I had up there for edit for clarity was like, oh, OK, yeah, we’ll edit for clarity. Do you want a little bit of editing, a medium amount of editing, or a lot of editing? It’s kind of confusing, right?
What it can’t do, but what Underlord can, is something like, can you get this down to about 90 seconds? And that’s more of an objective-driven ask, where Underlord will need to do some thinking about, OK, this is a 3-minute video, I need to get this down to about 90 seconds. How do I do that in a way that maintains the integrity of the video?
And so it kind of knows how to think about that and start editing it and getting it down. Now it’s down to 2:40.7, so it’s sort of making its way through and asking, are there things I can remove at each stage of this, and I’ll keep removing until I get down to 90 seconds. Then I’m actually gonna step out and ask myself, did I do this well? So that’s an objective-driven task that you can give Underlord.
And then the other kind of thing you can tell Underlord is, I actually have 27 very specific things that I want you to do at the same time. I’m gonna write those out once. I’m gonna save them as a template, and now every time I go to edit my podcast, I’m just gonna be like, do the Aakash editing rigmarole, and get me my podcast, right? So those are the two things that I think Underlord can really be magical for.
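The “get this down to about 90 seconds” behavior Laura narrates is essentially a propose-apply-check loop. A sketch of the control flow only; `propose_cut`, `still_coherent`, and the `video` interface are hypothetical stand-ins for model calls, not Descript’s agent:

```python
def propose_cut(video):
    """Placeholder: ask the model, 'what can I safely remove next?' (or None)."""
    raise NotImplementedError

def still_coherent(video) -> bool:
    """Placeholder: ask the model to self-check, 'did I do this well?'"""
    raise NotImplementedError

def trim_to_target(video, target_s: float = 90.0, tolerance_s: float = 5.0):
    # Keep removing pieces until the duration is near the objective...
    while video.duration > target_s + tolerance_s:
        cut = propose_cut(video)
        if cut is None:            # nothing safe left to remove
            break
        video = video.apply_cut(cut)
    # ...then step out and evaluate the result as a whole.
    if not still_coherent(video):
        raise RuntimeError("self-check failed")  # a real agent would revise here
    return video
```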
Building Underlord: Mouse vs Mammoth (30:05)
Aakash: OK, and so how did you guys roll this out and iterate and improve on it over time? Were there failures along the way?
Laura: Yes. So yeah, this is a much harder thing to build, and I’d say we are still—I think we’re still in the middle of building Underlord, because it’s pretty good now, but I think right now it’s on a harness that is still too brittle for it to be as open world and to be able to take advantage of leaps in general intelligence enough for it to be exactly what we want.
And so we call this the mouse version, and we’re building the mammoth version. There’s a company right now that’s trying to incubate a woolly mammoth, and the best they’ve done so far is a woolly mouse. And so I’m like, this is a woolly mouse. It’s cute, it’s useful, I love it, but it’s not majestic yet. And the mammoth is what we’re trying to build right now.
So what was the process to doing this? Well, first we needed to decide on the scope, and is this going to be a depth kind of agent that can do one thing with great depth, or is this going to be a breadth agent that can really work across all of the tools in Descript?
And we decided on the harder one, as we often do. One of our core principles is that we’re not a point solution, that Descript should be a video editor that can really address every kind of video, whether it’s a short, whether it’s long form horizontal, whether it’s a podcast, all of those things should be supported by Descript, and so all of those things need to be supported by Underlord.
Well, OK, so now what you need to do is you need to give your agent the level of context that it needs to understand and work with the customer, and then you need to give it the tool coverage that it needs to be able to do all of the things that you want Underlord to be able to do.
And then you need to build a report card or eval system, so that we can know where it’s good and where it’s bad, and eventually it can teach itself to get better, right? That’s the dream. And I’d say we’re on that journey right now.
[Sponsor Break: Vanta – 32:35]
Aakash: Today’s episode is brought to you by Vanta. As a founder, you’re moving fast toward product-market fit, your next round, or your first big enterprise deal. But with AI accelerating how quickly startups build and ship, security expectations are higher earlier than ever.
Getting security and compliance right can unlock growth or stall it if you wait too long. With deep integrations and automated workflows built for fast moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infra, and customers evolve.
Fast-growing startups like LangChain, Writer, and Cursor trust Vanta to build a scalable foundation from the start. So go to vanta.com/aakash, that’s V A N T A.com/aakash, to save $1,000 and join over 10,000 ambitious companies already scaling with Vanta.
[Sponsor Break: NayaOne – 33:28]
Aakash: Today’s episode is brought to you by NayaOne. In tech buying, speed is survival. How fast you can get a product in front of customers decides if you will win. If it takes you 9 months to buy one piece of tech, you’re dead in the water.
Right now, financial services are under pressure to get AI live, but in a regulated industry, the roadblocks are real. NayaOne changes that. Their air-gapped, cloud-agnostic sandbox lets you find, test, and validate new AI tools much faster: from months to weeks, from stuck to shipped.
If you’re ready to accelerate AI adoption, check out NayaOne at nayaone.com/aakash. That’s N A Y A O N E.com/aakash.
[Sponsor Break: Kameleoon – 34:09]
Aakash: Today’s episode is brought to you by the experimentation platform Kameleoon. 9 out of 10 companies that see themselves as industry leaders and expect to grow this year say experimentation is critical to their business. But most companies still fail at it. Why? Because most experiments require too much developer involvement.
Kameleoon handles experimentation differently. It enables product and growth teams to create and test prototypes in minutes with prompt-based experimentation. You describe what you want, and Kameleoon builds a variation of your web page, lets you target a cohort of users, choose KPIs, and runs the experiment for you.
Prompt-based experimentation makes what used to take days of developer time turn into minutes. Try prompt-based experimentation on your own web apps. Visit kameleoon.com/prompt to join the waitlist. That’s K A M E L E O O N.com/prompt.
Quantifying Success and Beta Testing (35:01)
Aakash: So, these are the features, they really worked. I’m curious, what can you share, by the way, about what sort of metrics are you guys seeing? Uptake, adoption of these? How did you really quantify success?
Laura: Yeah, so, we didn’t start to try to quantify success until we got to a public beta, until we released it to everyone. I think that our stages were, first we had a series of regression tests that we ran across—when we decided we wanted Underlord to be a breadth agent, we came up with a whole bunch of stuff that we thought it ought to be able to do that covered the extent of what a person might want to do in Descript.
Obviously that can’t be 100% representative, and like I said, it never will be in a world of emergence, but we have a framework for how we think about the kinds of jobs that customers do in Descript. And so, first we started with just, you know, your classic PM: customers should be able to blah blah blah blah blah. Well, we were just building tool coverage.
Then we really rushed to get into a private alpha, so we were building with real customers, because it’s really hard to build this stuff well as long as you’re using toy data. You need to get real data from real people who will talk to your agent, not using the language that you, a person who works at Descript, does, but using the language that a person out in the world will.
And we purposefully tried to get an alpha where we had some people who were quite sophisticated at video editing, and some people who were not at all sophisticated at video editing, and some people who were quite sophisticated at AI, and some people who were not at all sophisticated at AI so that we could understand, oh, if a person is really sophisticated about both video editing and AI, this is the kind of experience they might have.
If someone is not very sophisticated about either, their prompt might just be, edit this video and make it better. And it’s like, OK, what do we do with that prompt, right?
So, once we got real customer data, then we were able to organize that customer data into a series of regression tests, to be like, OK, when people want to apply layouts, here are 30 representative queries that they might ask. And this was pre-evals. These are just regression tests. So we showed those to engineers, and we actually just ran bug bashes on those queries for a while, and that helped us understand, OK, where is this agent getting things wrong? Where do we need to tweak the prompt, or tweak the way that we set up the tool, in order to get these things to work?
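Those pre-eval regression tests can be as simple as a harness over the mined queries. A sketch; the queries, `agent.run`, and the `apply_layout` check are invented for illustration:

```python
# Representative real queries, grouped by job, each with a check that the
# agent did the right kind of thing.

LAYOUT_QUERIES = [
    "make this a side-by-side layout",
    "show just the speaker whenever one person is talking",
    # ... ~30 representative queries mined from alpha usage
]

def run_regression(agent, queries: list[str]) -> float:
    passed = 0
    for q in queries:
        actions = agent.run(q)  # the tool calls the agent chose to make
        passed += any(a.name == "apply_layout" for a in actions)
    rate = passed / len(queries)
    print(f"{passed}/{len(queries)} layout queries passed ({rate:.0%})")
    return rate
```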
Now, what anyone who builds these things will tell you is that’s great, but you’re probably overfitting to the prompts that you’re exposing to your engineers and indeed we did, but that also meant that we had something good enough that we were starting to see that our beta users really liked it, because it was working for their use cases.
So then we’re like, OK, let us try this out with new customers and see if it helps them activate. Because, as you know, video editing is hard, right? It’s hard. And the biggest problem that we have as a piece of software, as an app is that a lot of people really want to edit video when they get to Descript, and they try, and they just can’t get over the hump.
And so we’re like, OK, is the agent as it is, Underlord as it exists now, better at getting new users over the hump than the activation experience that we have right now? And it was. And so we were like, OK, then we should give this to new users.
And so for a while, we gave it to new users, and then we had it so that any user could opt in, right, if they wanted to. And so we had it in that state for a while. And then that gave us even more data and more time to start setting up our evals and start automating those, make the agent better, and eventually make it kind of the default experience for everyone.
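The staging she describes, new users first, then opt-in, then default-on, is a plain feature-flag progression. A sketch with invented stage names and user fields, not Descript’s rollout system:

```python
from enum import Enum

class Stage(Enum):
    NEW_USERS_ONLY = 1   # it activated new users better than the old experience
    OPT_IN = 2           # any user can turn it on
    DEFAULT_ON = 3       # the default experience for everyone

def underlord_enabled(user, stage: Stage) -> bool:
    if stage is Stage.NEW_USERS_ONLY:
        return user.is_new
    if stage is Stage.OPT_IN:
        return user.is_new or user.opted_in
    return True
```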
Three Types of Underlord Users (39:35)
Aakash: Wow, and now I believe if I go to the Descript homepage, I’m gonna be seeing Underlord, that chat box as the top thing, so it’s really elevated itself to the central part of Descript’s experience.
Laura: Yeah, I’d say that right now we have a bifurcated set of customers, so there are people who don’t use Underlord at all, right? And these people often already—they’re using Descript because the editor that we have, the human editor that we have is exactly what they want, and we love those customers.
Some of them then become hybrid users, where they could do everything themselves, but they find, wow, if I need to change 17 transitions, I would really rather just tell Underlord, can you make all of the transitions wipes instead of cross fades, and Underlord is great at bulk editing like that.
Or a podcaster might say, every time Laura is monologuing for more than 30 seconds, can you just show her, but then for the rest of the time, show both her and Aakash, or whatever. Whenever she’s referencing something on the screen, can you make sure that the screen shows up; otherwise never show the screen. That’s something that’s very useful that Underlord can do, even if you know how to do it, because you don’t want to do that for 45 minutes of tape.
Then you have this new group of customers that are just Underlord Native. That is how they think about Descript. And they’re willing to prompt and re-prompt 3 to 4 times, because they don’t want to learn video editing. And I know who this customer is, because this is me and how I use Cursor, right? I’m never gonna suddenly become a software engineer. I’m just gonna keep talking to this thing. Keep talking until I get the result that I want, you know.
Aakash: Love it. So taking mental models from other successful agentic products, like those in the coding space, and seeing how to apply them to your own product.
The Path from IC PM to CEO (41:35)
Aakash: So that about completes the deep dive into the features. That’s one element of making it from IC PM to CEO, but it’s not all just about your track record. A lot of it is about you. And so I wanna dive into you and I actually wanna start pre-Descript. What are the relevant things about your career history? You were a management consultant, you worked at Twitter. What were the relevant points and things that really set you up for the success you saw once you got to Descript?
Laura: Yeah, so I think, you know, we’re all PMs and we know that the pathway to product is windy and strange, and we all come—some of us come from engineering, some of us come from design. I love to shock people by telling them that I majored in German literature, which just feels, you know, to many people in tech, like the most useless thing you could possibly do with 4 years of your life. I loved it.
And when I found out I did not want to be a German literature professor though, I did what a lot of kind of smart generalists did at the time, and that was I went into management consulting. And I think that management consulting was a really, really great place to start my career, even though, oh my gosh, I did not much like the job after I kind of learned it.
I think either you’ve got consultant’s blood in you or you don’t, and I’m someone who just has too much of a desire to own and operate to be OK leaving it at the slide deck. I think some people much prefer to leave it at the slide deck. They’re like, great, I came up with a strategy, bye.
But in any case, I was a management consultant, and I think I really learned how to be strategic there. Basically your job, if you’re a junior management consultant, is to sit in the room and take notes while smart, very senior people talk a lot about how they’re gonna make difficult strategic decisions. And that kind of just teaches you how they think, and that’s actually their talent model. So it’s a really great place to start your career.
But then I’m so thankful I was put on the weirdest project that you could be put on in a management consulting firm, which was to help spin up an online learning platform for this firm, so that they could reach people that weren’t ready or able yet to get 5 people on site co-working with them, but still wanted some of that strategic information.
And as I went to build this learning platform, I was like, first of all, I just moved to the Bay Area. I was like, here’s how I was thinking we could build this, cause I’m reading a book right now that’s telling me about this. What if I built a prototype in 2 weeks, and we just kind of showed it to people and found out what they liked and didn’t, and then adjusted it.
And I had an awesome boss who was like, yeah, you should do that. That sounds great. And in so doing, I think we built a really great product for the firm I was working at. And I realized, oh, I would like this job so much more than I like that other thing I was doing. So what is this? Can I just do this for the rest of my life? And here I am.
So I think that was really meaningful. Both the experience of getting that strategic exposure to the kinds of questions that leaders struggle with, but then also getting that experience of, here’s what it feels like to ship and have customers, and have them tell you that they love you and they hate you, and have that drive you to be better.
Yeah, so that was a really, really important first part of my career. Then I found startups. The second chapter is probably finding out that startup to me was where my heart went. I think one of the decisions you need to make in your career is, am I a big company PM or am I a small company PM? And you’re just gonna feel that in your gut as soon as you try both, you know, so you’ll end up at the job you end up in.
But if you start small and it doesn’t feel right, you should go to a big company next. If you start big and it doesn’t feel right, go to a small company next. I worked at Amazon, and was like, no, this is too big. I could get hit by a bus and no one would even know for 11 months because that’s how small my scope is here, even though I love the people there.
So then I worked at a Series C startup as the first on the ground PM and I was like, this, I’m addicted to this feeling of having my fingerprints on everything and building really quickly and being experimental and seeing what works and not having bureaucracy. And so I took that train as far as I could take it.
But then I kinda hit this barrier where I wasn’t learning as much as I needed to, to be as good as I needed to be at the job that I had at the time, which was an org leader. By the way, org leader is a classic crisis moment in a PM’s career, right? Because probably you’re a pretty good PM, largely self-taught. I was a pretty good people manager, largely self-taught; that was just, I thought about it and had empathy, or I thought about it and did good work.
Once you’re an org leader, that job is really hard. There’s not a clear one right way to do it, and there’s often not a lot of people to learn from, especially at startups. So, I had to go to a big company to figure out how to be an org leader, and I’m grateful to the org leaders I learned from there at Twitter, where I was for 3 years. But after 3 years, I knew I needed to get back in the startup game.
Landing the Job at Descript (47:07)
Aakash: And that’s when you already referenced, you knocked on the door of Descript. I think there’s an interesting story in there. How did you land the job to begin with and what was the role?
Laura: Yeah, I think my advice to product managers, at least if you’re like me, is don’t be strategic about your career at all, you know, just go where your heart takes you. But that’s actually the second time in my career that I just cold called an org and said, I love your product. Give me a job, please.
And what had happened is I left Twitter. And I didn’t really know if I even wanted to be in tech anymore because this is right before ChatGPT came out. And so, sort of presciently, I was like, is anything interesting happening in tech anymore? Like, what’s going on?
I didn’t want to be in social. I didn’t want to be optimizing for clicks and likes. I was tired of that game. It pays so well. I was not interested in it. And so I did what you do when you don’t know what you’re doing with your life in the Bay Area, and I started a podcast. And that’s how I found Descript.
And it was that feeling for me, you know, of just like, wow, this is why I got into product management. And so I emailed them and I just asked if they were hiring PMs, right? I’d been a director at Twitter, but I really wasn’t optimizing for title, and I wasn’t optimizing for money. I was optimizing for learning, right?
I hadn’t been in AI before. I hadn’t been in this deep workflow software before, even though I loved being in it as a customer. And I thought that I had some skill though, in product management, in leadership, in strategy, in measurement, that I could lend to a startup that’s building out its PM bench. And so they hired me and pretty quickly I became the VP of product.
The Progression to CEO (48:58)
Aakash: Tell us a little bit more about that progression from IC to CEO. Obviously you’re the one who’s making the progression, not the one judging the results, so you’re gonna have your own point of view. But from your point of view, what were the things you were doing well that gave them increasing confidence that we should keep giving her more and more responsibility?
Laura: Yeah, I mean, I think that at a founder-led startup, a lot of it really just comes down to, are you someone who can understand the founder’s vision and help make it better and make it happen, right?
And this is something a lot of founders run into: when they’re hiring their first head of product, they don’t totally know what they’re getting into and what it’s going to be like to share power with that person, because a lot of founders are the head of product.
And I think what really worked about Descript and with me and Andrew is he didn’t hire me as a head of product and then run into all of these weird power struggles. He hired me as a PM and I think found that he liked having me in the room when he was making decisions about the product strategy.
And when you’re in the room and someone likes having you in the room, after a few months, you permanently belong in the room, right? And so I think that that’s kind of what happened is I got hired as an IC PM and then found myself invited to more and more rooms until I was invited to the room, right?
Aakash: A lot of IC PMs, they’re trying to do that, right? They’re trying to get invited to the room, but reality sets in. You know, they ship a feature where the founder gave them some latitude and it wasn’t close enough to the founder’s vision. Then the founder starts to micromanage their roadmap a little bit more and it usually spirals out of control from there. What are the tips you have for PMs in managing that founder relationship?
Managing the Founder Relationship (50:58)
Laura: Yeah, that’s a really good question. I mean, I think that, you have to ask yourself, if I’m the founder of something, why in the hell should I trust you more than I trust myself? Like, you didn’t build this, right? You didn’t sacrifice the things that I sacrificed to build the first 7 versions of this.
And so, you need to understand that what you’re coming into is someone who has gotten very far by trusting their own instincts, and you need to have the humility to understand that they probably do know their customer a lot better than you do, and they know the product better than you do.
And so you have to ask yourself, then: how do I gain the credibility so that this person starts to trust my instincts as well? And I think that’s by using the product. Use the product, know the product, be able to speak credibly to what it feels like to use the product. Talk to customers, be able to speak credibly to what existing customers want, and what the customers that you wish were using the product want, right?
And so a lot of it is just get smart. Earn your way into the room by learning the hell out of the product, and learning the hell out of the customers, and learning the hell out of the business. That’s probably not satisfying, but I think a lot of people try to skip that step and just be really impressive, and they work really hard, but it’s like, OK, first get command.
Get enough command over the product and the customer that you earn the trust, and when you say something, someone just mentally puts in their head, smart. And then they’ll invite you into more rooms.
And then I feel like another mistake people make, though, is they’re so eager to get into the strategic rooms that they start doing a lot of strategic work, and they let this part of their job kind of fall apart, where it’s like, oh yeah, I’m doing a lot of strategic planning, but I actually don’t know what the engineers on my team are working on, I think we’re gonna have that done in Q3. And that’s just unacceptable, right?
You need to be deep in the details of the IC PM work that you’re doing and excelling at that before you’re really gonna come in and be invited to plan out what next year looks like. And so that’s the other way that you earn credibility is you ship, you ship, you gotta ship, which is why we spent the first 40 minutes of this episode on what Laura shipped.
Conclusion (53:11)
Aakash: Laura has given you guys all the master class, OK?