Here is the audio, video, and transcript. Here is the episode summary:
At 22, Brendan Foody is both the youngest Conversations with Tyler guest ever and the youngest unicorn founder on record. His company Mercor hires the experts who train frontier AI models—from poets grading verse to economists building evaluation frameworks—and has become one of the fastest-growing startups in history.
Tyler and Brendan discuss why Mercor pays poets $150 an hour, why AI labs need rubrics more than raw text, whether we should enshrine the aesthetic standards of past eras rather than current ones, how quickly models are improving at economically valuable tasks, how long until AI can stump Cass Sunstein, the coming shift toward knowledge workers building RL environments instead of doing repetitive analysis, how to interview without falling for vibes, why nepotism might make a comeback as AI optimizes everyone’s cover letters, scaling the Thiel Fellowship 100,000X, what his 8th-grade donut empire taught him about driving out competition, the link between dyslexia and entrepreneurship, dining out and dating in San Francisco, Mercor’s next steps, and more.
And an excerpt:
**COWEN:** Now, I saw an ad online not too long ago from Mercor, and it said $150 an hour for a poet. Why would you pay a poet $150 an hour?
**FOODY:** That’s a phenomenal place to start. For background on what the company does — we hire all of the experts who teach the leading AI models. When one of the AI labs wants to teach their models how to be better at poetry, we’ll find some of the best poets in the world who can help measure success by creating evals and examples of how the model should behave.
One of the reasons that we’re able to pay so well to attract the best talent is that when we have these phenomenal poets that teach the models how to do things once, they’re then able to apply those skills and that knowledge across billions of users, hence allowing us to pay $150 an hour for some of the best poets in the world.
**COWEN:** The poets grade the poetry of the models or they grade the writing? What is it they’re grading?
**FOODY:** It could be some combination depending on the project. An example might be similar to how a professor in an English class would create a rubric to grade an essay or a poem assigned to the students. We could have a poet create a rubric to grade how well the model is creating whatever poetry you would like, and a response that would be desirable to a given user.
**COWEN:** How do you know when you have a good poet, or a great poet?
**FOODY:** That’s so much of the challenge of it, especially with these very subjective domains in the liberal arts. So much of it is this question of taste, where you want some degree of consensus, with different exceptional people each believing that they’re doing a good job, but you probably don’t want too much consensus, because you also want to capture all of these edge-case scenarios where the models deviate a little bit from the norm.
**COWEN:** So, you want your poet graders to disagree with each other some amount.
**FOODY:** Some amount, exactly, but still a response that is consistent with what most users would want to see in their model responses.
**COWEN:** Are you ever tempted to ask the AI models, “How good are the poet graders?”
[laughter]
**FOODY:** We often are. We do a lot of this. It’s where we’ll have the humans create a rubric or some eval to measure success, and then have the models give their perspective. You actually can get a little bit of signal from that, especially if you have an expert — we have tens of thousands of people working on our platform at any given time. Oftentimes, there’ll be someone who is tired or not putting a lot of effort into their work, and the models are able to help us catch that.
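As a rough sketch of what that quality check could look like (purely illustrative, not Mercor’s actual pipeline), assume a hypothetical `judge_score` callable standing in for whatever LLM-as-judge call a real system would make:

```python
from collections import defaultdict
from typing import Callable

def flag_low_effort(
    annotations: list[dict],              # each: {"annotator": str, "item": str, "score": float}
    judge_score: Callable[[str], float],  # hypothetical model judge scoring the same item
    max_mean_gap: float = 2.0,            # tolerated average disagreement, in rubric points
) -> set[str]:
    """Flag annotators whose rubric scores diverge sharply from a model judge's."""
    gaps: dict[str, list[float]] = defaultdict(list)
    for a in annotations:
        gaps[a["annotator"]].append(abs(a["score"] - judge_score(a["item"])))
    # Flagged annotators get reviewed, not auto-rejected: disagreement can
    # also mean the human caught an edge case the judge missed.
    return {who for who, g in gaps.items() if sum(g) / len(g) > max_mean_gap}
```

A threshold on the average gap screens for sustained inattention rather than any single divergent judgment, which squares with Foody’s point above that some grader disagreement is desirable.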
And:
**COWEN:** Let’s say it’s poetry. Let’s say you can get it for free, grab what you want from the known universe. What’s the data that’s going to make the models, working through your company, better at poetry?
**FOODY:** I think it’s people who have phenomenal taste for what users of the end products, users of these frontier models, want to see. Someone who understands, when a prompt is given to the model, what type of response people are going to be amazed by. How we define the characteristics of those responses is imperative.
Probably more than just poets who have spent a lot of time in school, we would want people who know how to write work that gets a lot of traction from readers, that gains broad popularity and interest, that drives the impact, so to speak, in whatever dimension we define it within poetry.
**COWEN:** But what’s the data you want concretely? Is it a tape of them sitting around a table, students come, bring their poems, the person says, “I like this one, here’s why, here’s why not”? Is it that tape or is it written reports? What’s the thing that would come in the mail when you get your wish?
**FOODY:** The best analog is a rubric. If you have some —
**COWEN:** A rubric for how to grade?
**FOODY:** A rubric for how to grade. If the poem evokes this idea that is inevitably going to come up in this prompt, or is a characteristic of a really good response, we’ll reward the model a certain amount. If it says this thing, we’ll penalize the model. If it styles the response in this way, we’ll reward it. Those are the types of things, in many ways very similar to the way that a professor might create a rubric to grade an essay or a poem.
Poetry is definitely a more difficult one because I feel like it’s very unbounded. With a lot of essays that you might grade from your students, it’s a relatively well-scoped prompt where you can probably create a rubric that’s easy to apply to all of them, versus in poetry classes I can only imagine how difficult it is both to create an accurate rubric and to apply it. The people who are able to do that best are certainly extremely valuable and exciting.
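A rubric of that kind reduces to a scored checklist. Here is a minimal sketch with invented criteria and weights; a real pipeline would presumably ask a model judge whether each criterion is met rather than rely on toy string predicates:

```python
from typing import Callable

# (description, predicate over the response, weight); a negative weight is a penalty
Criterion = tuple[str, Callable[[str], bool], float]

RUBRIC: list[Criterion] = [
    ("evokes the image the prompt calls for", lambda r: "sea" in r.lower(), +2.0),
    ("stays within the requested form",       lambda r: len(r.splitlines()) <= 14, +1.0),
    ("lapses into prose paraphrase",          lambda r: r.lower().startswith("this poem"), -3.0),
]

def rubric_reward(response: str) -> float:
    """Sum the weight of every criterion whose predicate fires."""
    return sum(weight for _, met, weight in RUBRIC if met(response))
```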
**COWEN:** To get all nerdy here, Immanuel Kant in his third critique, the Critique of Judgment, said, in essence, that taste is that which cannot be captured in a rubric. If the data you want is a rubric and taste is really important, maybe Kant was wrong, but how do I square that whole picture? Is it that, by invoking taste, you’re being circular and wishing for a free lunch that comes from outside the model, in a sense?
**FOODY:** There are other kinds of data we could use if it can’t be captured in a rubric. Another kind is RLHF, where you could have the model generate two responses, similar to what you might see in ChatGPT, and then have these people with a lot of taste choose which response they prefer, and do that many times until the model is able to understand their preferences. That could be one way of going about it as well.
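The standard recipe there (Christiano et al. 2017 is the usual citation) fits a reward model to those pairwise choices under a Bradley-Terry assumption. A sketch of the loss, with `r` standing in for a hypothetical learned reward function:

```python
import math

def preference_nll(pairs: list[tuple[str, str]], r) -> float:
    """Negative log-likelihood that the chosen response outscores the rejected
    one in each (chosen, rejected) pair, under a Bradley-Terry model."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return -sum(math.log(sigmoid(r(chosen) - r(rejected))) for chosen, rejected in pairs)
```

Minimizing this over many comparisons is what lets the model “understand their preferences,” in Foody’s phrase; the fitted reward then drives the reinforcement learning step.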
Interesting throughout, and definitely recommended. Note the conversation was recorded in October (we have had a long queue), so a few parts of it sound slightly out of date. And here is Hollis Robbins on LLMs and poetry.