Vercel recently released a new open source tool called json-render that signals a step toward generative user interfaces (UI), a term Vercel coined for AI-generated web interfaces.
“What if, instead of just generating text, the LLM [large language model] could give us UI on the fly?” Guillermo Rauch, the founder and CEO of Vercel, asks The New Stack. “We’re basically plugging the AI directly into the rendering layer.”
AI and infrastructure will soon be able to support generative UI, he says. Json-render is one piece of the puzzle.
Developers can use json-render to define guardrails for the AI, such as which components, actions, and data bindings it is allowed to use. End users then describe what they want in natural language through an AI prompt. The AI generates JSON, which is rendered progressively as the model responds.
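The pattern Rauch describes can be sketched in a few lines of TypeScript. To be clear, this is an illustrative sketch of the idea rather than json-render's actual API: the `UINode` type, the `catalog` allowlist, and the `render` function are all invented for this example. The developer-defined catalog is the guardrail; the model's JSON output is walked and turned into UI, and anything outside the catalog is rejected.

```typescript
// A JSON UI spec as a model might emit it: a tree of typed nodes.
type UINode = {
  type: string;
  props?: Record<string, string>;
  children?: UINode[];
};

// Guardrails: only these component types may appear in generated JSON.
// (Hypothetical catalog; real components would map to React, Vue, etc.)
const catalog: Record<
  string,
  (props: Record<string, string>, children: string) => string
> = {
  card: (_p, kids) => `<div class="card">${kids}</div>`,
  heading: (p) => `<h2>${p.text ?? ""}</h2>`,
  button: (p) => `<button data-action="${p.action ?? ""}">${p.label ?? ""}</button>`,
};

// Walk the spec tree, refusing any component not in the catalog.
function render(node: UINode): string {
  const component = catalog[node.type];
  if (!component) {
    throw new Error(`Component "${node.type}" is not in the catalog`);
  }
  const children = (node.children ?? []).map(render).join("");
  return component(node.props ?? {}, children);
}

// A spec the model might generate from a prompt like "show my recent orders":
const spec: UINode = {
  type: "card",
  children: [
    { type: "heading", props: { text: "Your recent orders" } },
    { type: "button", props: { label: "Reorder", action: "reorder" } },
  ],
};

console.log(render(spec));
// → <div class="card"><h2>Your recent orders</h2><button data-action="reorder">Reorder</button></div>
```

In a streaming setup, `render` would be re-run as each complete node arrives from the model, which is what makes the UI appear progressively as the response comes in.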
Rauch calls it a “very disruptive technology” because it bypasses the manual step of producing the software. It’s been released as part of Vercel Labs, the home for Vercel’s experimental projects. It’s considered an early research prototype, but the technology is already “very sound,” he says.
One developer was able to run json-render on Qwen, a low-parameter, open source model deployed locally, Rauch says.
“If you extrapolate from here, you could imagine a world where you open a website and UI just generates itself spontaneously for you using json-render,” he says. Think about it as a piece of infrastructure.
“It enables any company to take the AI and convert it into UI, and it can be plugged into systems,” he says. “Again, it’s experimental for now, but if you wanted to embed this generative UI capability into a system, you would use json-render.”
Json-render under the hood
Json-render is a tool a decade in the making, according to Rauch. He credits Vercel Software Engineer Chris Tate for his work on the tool, adding that json-render reflects 10 years of thinking about the generative UI challenge, even before LLMs.
Json-render has an opinionated set of predefined components that the AI is free to compose.
“We don’t want the AI to get so creative that it changes your brand guidelines, it changes your color system if it is something that doesn’t look good,” Rauch says. “The engineer will have the job of curating the brand identity, look, and feel of the system that’s being rendered.”
It can even be used to build game UI on demand, he adds. And it’s model- and framework-agnostic, so it works with your JavaScript framework of choice.
Rauch envisions a web where users go to their favorite e-commerce site and it automatically reminds the user of past orders, updates the user on shipping, or offers customized product recommendations.