Josh Gans has written what I think is the first textbook of AI. Instead of the “big issues,” such as whether AI will bring about the singularity or the end of the human race, Gans treats AI as a tool for improving predictions. What will better predictions do in legal, economic, and political markets? He generally avoids conclusions and instead explores models of thinking.
I especially enjoyed the chapter on intellectual property rights, which maps out a model for thinking about copyright in training and in production, how the two interact, and the net costs and benefits.
Gans’s chapter usefully pairs with Cory Doctorow’s [screed on AI](https://pluralistic.net/2025/12/05/pop-that-b…). It’s a great screed despite being mostly wrong. I did like this bit, however:
> Creative workers who cheer on lawsuits by the big studios and labels need to remember the first rule of class warfare: things that are good for your boss are rarely what’s good for you.
>
> …When Getty Images sues AI companies, it’s not representing the interests of photographers. Getty hates paying photographers! Getty just wants to get paid for the training run, and they want the resulting AI model to have guardrails, so it will refuse to create images that compete with Getty’s images for anyone except Getty. But Getty will absolutely use its models to bankrupt as many photographers as it possibly can.
>
> …Demanding a new copyright just makes you a useful idiot for your boss, a human shield they can brandish in policy fights, a tissue-thin pretense of “won’t someone think of the hungry artists?”…
>
> We need to protect artists from AI predation, not just create a new way for artists to be mad about their impoverishment.
>
> And incredibly enough, there’s a really simple way to do that. After 20+ years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That’s why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.
>
> And not only has the Copyright Office taken this position, they’ve defended it vigorously in court, repeatedly winning judgments to uphold this principle.
>
> The fact that every AI created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them, or give them away for free. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.
>
> The US Copyright Office’s position means that the only way these companies can get a copyright is to pay humans to do creative work. This is a recipe for centaurhood. If you’re a visual artist or writer who uses prompts to come up with ideas or variations, that’s no problem, because the ultimate work comes from you. And if you’re a video editor who uses deepfakes to change the eyelines of 200 extras in a crowd-scene, then sure, those eyeballs are in the public domain, but the movie stays copyrighted.
AI should not have to pay to read books any more than a human does. At the same time, making AI-created works non-copyrightable is, I think, the right strategy at the present moment. Moreover, it’s the most practical suggestion I have heard for channeling AI in a more socially beneficial direction, something Acemoglu has discussed without much specificity.