I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance
Why I Built This
My team needed to choose a framework for an upcoming app. The requirements were clear: it had to work well on mobile. Not “acceptable on mobile,” but actually good. We’re building tools for real estate agents working in the field: open houses, parking lots, spotty cellular signals. When someone’s standing in front of a potential buyer trying to look professional, a slow-loading app isn’t just an annoyance. It’s a liability.
I started with what seemed like a reasonable comparison: Next.js (our current default when a framework is required) versus SolidStart and SvelteKit (alternatives I’d heard good things about). Three frameworks, should be straightforward. But when I built the first implementations and started measuring, something became clear: the issues I was seeing with Next.js weren’t specific to Next.js. They were fundamental to React’s architecture. I wondered whether the other dominant frameworks (Angular, Vue) might have similar mobile web performance limitations.
That question changed the scope. If I was going to make a real recommendation for the team, I needed to test all the major meta-frameworks and understand the full landscape of alternatives. Three frameworks became ten. What started as a practical evaluation for work turned into something bigger: a semi-comprehensive look at what’s actually possible for mobile web performance in 2025.
This post shares what I discovered. The measurements are real, the kanban apps are identical (same features, same database, same styling), and the differences are dramatic. Marko delivers 12.6 kB raw (6.8 kB compressed). Next.js ships 497.8 kB raw (154.5 kB compressed). That’s a 39x difference in raw size that translates to real seconds on cellular networks.
If you’re interested in the theoretical implications of why framework diversity matters, I wrote about that in React Won by Default. This post focuses on the data: what I built, what I measured, and what it means for teams making similar decisions.
Key Takeaways (TL;DR)
Next-Gen Frameworks Deliver Instant Performance: Marko (39ms), SolidStart (35ms), SvelteKit (38ms), and Nuxt (38ms) all achieve essentially instant First Contentful Paint in the 35-39ms range. This is 12 to 13 times faster than Next.js at 467ms. The 4ms spread between fastest and slowest is statistically measurable but perceptually meaningless to users. All next-gen frameworks feel instant. The real performance story isn’t splitting hairs over a few milliseconds; it’s the massive gap between next-gen and React/Angular.
Bundle Size Champion: Marko delivers 88.8 kB raw (28.8 kB compressed) for the board page, 6.36 times smaller than Next.js’s 564.9 kB raw (176.3 kB compressed). The next closest competitor (SolidStart at 41.5 kB compressed) ships roughly 44% more compressed JavaScript, making Marko the clear choice when bundle size is the absolute top priority.
Resumability Pattern: Qwik City at 114.8 kB raw (58.4 kB compressed) eliminates traditional hydration via resumability, yielding instant interactivity for larger client-side apps. Different architectural approach that solves different problems.
Nuxt Proves Established Frameworks Can Compete: At 224.9 kB raw (72.3 kB compressed) with 38ms FCP, Nuxt demonstrates that established “big three” frameworks can achieve next-gen performance when properly configured. Vue’s architecture allows competitive mobile web performance while maintaining a mature ecosystem. React and Angular show no path to similar results.
Full Bundle Size Range: Raw bundle sizes span from 88.8 kB (Marko) to 666.5 kB (Analog) for the board page, with Next.js at 564.9 kB serving as the React baseline. Next-gen frameworks range from 28.8 kB to 72.3 kB compressed, all dramatically smaller than React (176.3 kB) or Angular (203.4 kB). See “Bundle Size Reality Check” section for full comparison.
Critical scaling difference: MPA frameworks (Marko, HTMX) ship minimal JavaScript per page, staying lean as you add features. SPA frameworks ship routing and framework runtime upfront, with higher baselines even using code splitting. Marko delivers around 12.6 to 88.8 kB raw regardless of total routes. SPAs maintain 85.9 to 666.5 kB raw baselines plus route chunks.
The key finding? The dominant frameworks show dramatically different results. React has an unavoidable performance ceiling. TanStack Start achieves 373.6 kB raw (118.2 kB compressed) bundles using React 19, only 1.51 times better than Next.js’s 564.9 kB raw (176.3 kB compressed). Angular ships similarly heavy bundles via Analog at 666.5 kB raw (203.4 kB compressed). But Vue (via Nuxt) proved different, achieving competitive 224.9 kB raw (72.3 kB compressed) bundles with instant 38ms FCP that matches next-gen frameworks. Meanwhile, next-gen frameworks like SolidStart deliver 128.6 kB raw (41.5 kB compressed) bundles with equally instant 35ms FCP, 4.39 times smaller than Next.js and 2.91 times smaller than TanStack Start with React. The perfect controlled comparison: TanStack Start with React (373.6 kB raw) versus TanStack Start with Solid (182.6 kB raw). Same meta-framework, same patterns, but React bundles are 2x the size of Solid, isolating React’s runtime cost.
Mobile is the web. These measurements matter because mobile web is the primary internet for billions of people. If your app is accessible via URL, people will use it on phones with cellular connections. Optimizing for desktop and hoping mobile is good enough is backwards. The web is mobile. Build for that reality.
Each build uses the same database, features, and UI so the comparison stays fair. The differences in raw bundle size for the board page range from 4.39 to 6.36 times smaller compared to Next.js for modern alternatives. Important: These measurements represent disciplined baseline implementations with minimal dependencies. Real production apps typically ship 5 to 10 times more JavaScript from analytics, authentication, feature flags, and third party libraries, meaning the framework differences compound significantly in practice. On mobile devices with cellular connections, this matters enormously.
Before diving in, a reminder from my Progressive Complexity Manifesto: The frameworks compared here represent Level 5 complexity. They are powerful tools for when you need unified client-side state, a lot of reactivity, and/or client-side navigation. But most applications thrive at lower levels. For instance, Level 3 (server-rendered HTML enhanced with HTMX and vanilla JavaScript, as demonstrated in the kanban-htmx app in this repo) can handle complex interactive applications with minimal JavaScript. Level 4 adds occasional Web Components using Lit for reusable elements. These simpler approaches often deliver even smaller bundles and much simpler codebases. This post focuses on Level 5 options for cases that demand them, while remembering simpler paths often suffice.
Why Mobile Web Performance Matters
For this evaluation, mobile performance wasn’t just a nice to have. It was the primary constraint. Our users are real estate agents working in the field: open houses with 30 people hammering the same cell tower, parking lots between showings, anywhere but a desk with WiFi. They need tools that work instantly, not “eventually load.”
We’re not building a native app. We’re building for the web, which means if it has a URL, people will access it on their phones. And for our users, the app is likely to be used on a phone just as frequently as on a desktop.
This reality shaped the evaluation. I couldn’t just pick a framework that “works on mobile.” I needed something that genuinely performs well on cellular connections with mid-tier devices. The difference between a framework shipping 30 kB versus 170 kB isn’t academic. It’s the difference between an app that feels professional and one that makes our users look bad in front of clients.
The business cost of slow performance: Research from Tammy Everts at SpeedCurve reveals something surprising. While site downtime causes 9% permanent user abandonment, slow performance causes 28% permanent abandonment. That’s over 3x worse. Even more revealing: slowdowns occur 10x more frequently than outages, resulting in roughly 2x total revenue impact despite lower per-hour costs. Beyond the abandonment numbers, slow performance creates a psychological effect where users start perceiving your entire brand negatively. Content seems “boring,” design looks “tacky,” even when those elements haven’t changed. Slowness poisons everything. These aren’t abstract metrics. They’re measurable business costs that compound with every framework kilobyte you ship to mobile users.
The real-world cost: A 113 kB difference at 3G speeds (750 kbps) means 1.2 seconds for download plus 500ms to 1s for parse/execution on mobile CPUs. Total: 1.5 to 2 seconds slower between frameworks. On 4G the gap shrinks but remains noticeable. On spotty connections (like an open house with 30 people hammering the same cell tower) it becomes painful.
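For readers who want the arithmetic spelled out, here’s a back-of-the-envelope sketch of that download-time estimate. The 750 kbps throughput figure comes from the paragraph above; the parse/execute cost is kept as a separate rough allowance rather than computed.

```ts
// Back-of-the-envelope download time for a bundle-size gap on a slow connection.
// Assumes the 750 kbps effective 3G throughput cited above; parse/execute time on a
// mid-tier mobile CPU is treated as a separate 0.5 to 1 s rough allowance.
const gapKB = 113; // extra compressed JavaScript shipped by the heavier framework
const throughputKbps = 750; // effective 3G bandwidth
const downloadSeconds = (gapKB * 8) / throughputKbps;
console.log(downloadSeconds.toFixed(1)); // "1.2" — plus 0.5 to 1 s of parse/execute
```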
“But it’s cached!” This objection misses reality. Cache busting is standard. Every deployment means users download again. First impressions matter. So do second, third, and tenth impressions. Your users remember waiting.
This is why I expanded the evaluation beyond the initial three frameworks. I needed to understand what’s actually possible. When someone pulls up the app in a parking lot between showings, every second counts. Building for mobile performance first means desktop on WiFi is excellent by default. The reverse isn’t true. Optimize for desktop and mobile users suffer.
I discovered the difference between frameworks reflects fundamentally different engineering priorities. Some frameworks prioritize runtime flexibility, shipping extensive abstractions to support wide use cases. Others prioritize runtime size and mobile performance from the ground up. The bundle sizes I measured for the board page varied by up to 7x (from 28.8 kB compressed to 203.4 kB compressed), differences that matter enormously on cellular networks.
The Experiment Setup
I built a Kanban board application ten times, once in each of these frameworks:

- Next.js 16 (React 19 with built-in compiler): React’s Virtual DOM approach with automatic optimization
- TanStack Start (also React 19): a leaner React meta-framework without App Router overhead
- TanStack Start + Solid (SolidJS 1.9): the same meta-framework with fine-grained reactivity
- Nuxt 4 (Vue 3): Vue’s reactive refs with SSR-first developer experience
- Analog (Angular 20): Angular’s modern signals API with meta-framework tooling
- Marko (@marko/run): streaming SSR with fine-grained reactivity
- SolidStart (SolidJS 1.9): native Solid integration with fine-grained reactivity through signals
- SvelteKit (Svelte 5): fine-grained reactivity with runes
- Qwik City: resumability instead of hydration
- Astro + HTMX: a traditional MPA approach
Each implementation includes the exact same features: board creation and listing pages, four fixed lists per board (Todo, In Progress, QA, Done), full CRUD operations for cards, drag-and-drop card reordering within lists and movement between lists, assignee assignment from a static user list, tag management, comments on cards with authorship tracking, completion status toggles, optimistic UI updates for drag-and-drop and card changes (the HTMX implementation omits these), and server-side form validation using Valibot.
All ten apps share the same foundation. The database is SQLite with Drizzle ORM using an identical schema across all implementations. Styling comes from Tailwind CSS plus DaisyUI to keep the UI consistent. Each framework implementation contains roughly 17 components. Most importantly, every app performs real database queries against relational data (boards → lists → cards → tags/comments/users) rather than working with hardcoded arrays.
You can check out the code here.
A critical choice about dependencies: These apps intentionally minimize dependencies compared to what many developers typically reach for. For mobile web applications, every dependency represents a choice to ship additional kilobytes to users. I used necessary UI libraries like drag-and-drop packages (which vary by ecosystem), but deliberately avoided data fetching libraries, state management helpers, and other utilities that frameworks already handle natively. Each ecosystem has popular packages that add convenience but increase bundle size (React developers often reach for tanstack-query for data fetching, state management libraries, or form helpers). To illustrate the trade-off: tanstack-query alone weighs approximately 13 kB gzipped. That single dependency is already larger than Marko’s entire homepage bundle at 6.8 kB. By avoiding these “nice to have” dependencies and using each framework’s built-in capabilities instead, the bundle differences you’ll see reflect framework architectural choices, not different amounts of functionality or third-party helpers.
Measurement Methodology: All bundle sizes in this comparison represent median values from 10 measurement runs with browser cache cleared between each run to ensure cold-load performance measurements. Server warmup requests and IQR outlier removal ensure robust statistics. I report both raw (uncompressed) JavaScript sizes and compressed transfer sizes. The raw size reflects actual code volume generated by each framework and is more consistent for comparison since it doesn’t vary by server compression settings. The compressed size shows what users actually download over the network. See the complete measurement methodology for details on statistical approach, test conditions, and limitations.
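To make that statistical reduction concrete, here’s a minimal sketch of the approach described above (median of repeated cold-load runs after 1.5×IQR outlier removal). It’s an illustration of the method, not the repository’s actual measurement script.

```ts
// Sketch of the reduction described above: take N measurement runs, drop outliers
// outside 1.5×IQR, and report the median of what remains. Illustrative only.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const base = Math.floor(pos);
  const rest = pos - base;
  const next = sorted[base + 1];
  return next !== undefined ? sorted[base] + rest * (next - sorted[base]) : sorted[base];
}

function robustMedian(runsKB: number[]): number {
  const sorted = [...runsKB].sort((a, b) => a - b);
  const q1 = quantile(sorted, 0.25);
  const q3 = quantile(sorted, 0.75);
  const iqr = q3 - q1;
  const kept = sorted.filter((v) => v >= q1 - 1.5 * iqr && v <= q3 + 1.5 * iqr);
  return median(kept);
}

// e.g. ten cold-load transfer sizes for one page, in kB (one noisy run gets dropped)
console.log(robustMedian([176.1, 176.3, 176.3, 176.4, 176.2, 176.3, 189.9, 176.3, 176.2, 176.4]));
```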
Here’s how the tech stacks compare:
| Category | Next.js | TanStack Start | TanStack Start + Solid | Nuxt | Analog | Marko | SolidStart | SvelteKit | Qwik | Astro + HTMX |
|---|---|---|---|---|---|---|---|---|---|---|
| Framework | Next.js 16 (App Router) | TanStack Start 1.133.8 (w/React) | TanStack Start 1.133.8 | Nuxt 4 | Analog (Angular) | @marko/run 0.8 | SolidStart 1.1.0 | SvelteKit + Svelte 5 | Qwik City | Astro 5 + HTMX |
| UI Library | React 19 + Compiler | React 19 + Compiler | SolidJS 1.9 | Vue 3 | Angular 20 | Marko 6 | SolidJS 1.9 | Svelte 5 | Qwik | HTMX (server-driven) |
| Reactivity Model | Virtual DOM + Compiler | Virtual DOM + Compiler | Signals (fine-grained) | Reactive refs | Signals (zoneless) | Signals (fine-grained) | Signals (fine-grained) | Runes (fine-grained) | Signals + Resumability | Server-driven (HTMX) |
| Data Fetching | Server Components | TanStack Router loaders | TanStack Router loaders | useAsyncData / useFetch | injectLoad + DI | Route data handlers | createAsync with cache | Remote functions (query) | routeLoader$ | Route handlers |
| Mutations | Server Actions | Server functions (RPC) | Server functions (RPC) | API routes (server/api/*) | ApiService + RxJS | POST handlers | Server functions | Remote functions (form/command) | Server actions | API routes + HTMX |
| Database | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 | Drizzle ORM + better-sqlite3 |
| Styling | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI | DaisyUI |
| Drag & Drop | @dnd-kit/core, @dnd-kit/sortable | @dnd-kit/core, @dnd-kit/sortable | @thisbeyond/solid-dnd | @formkit/drag-and-drop | @angular/cdk | @formkit/drag-and-drop | @thisbeyond/solid-dnd | Native HTML5 | Native HTML5 | @formkit/drag-and-drop |
| Build Tool | Turbopack | Vite | Vite | Vite | Vite + Angular | Vite | Vinxi | Vite | Vite + Qwik optimizer | Vite |
This isn’t a todo list with hardcoded arrays. It’s a real app with database persistence, complex state management, and the kind of interactions you’d actually build for a real product.
Framework Architectures at a Glance
To avoid repetition throughout this post, here are the key architectural approaches for each framework tested:
React-based (Next.js, TanStack Start + React): Use Virtual DOM reconciliation where components re-render and React diffs changes before updating the DOM. React’s compiler automatically optimizes components through memoization using a Control Flow Graph-based High-Level Intermediate Representation, reducing manual optimization needs but not bundle size. Next.js employs React Server Components (RSC) which serialize component trees into a special RSC Payload format, adding meta-framework overhead. TanStack Start uses traditional SSR without RSC complexity. Both ship React’s runtime including Virtual DOM reconciler, synthetic event system, and platform abstractions, creating unavoidable baseline costs for mobile users.
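To make the Server Component boundary concrete, here’s a hedged sketch of the pattern. The file paths, the getBoard helper, and the component shapes are illustrative, not taken from the repo; the point is that the async page stays on the server and queries the database directly, while only the "use client" component (plus React’s runtime) ships to the browser.

```tsx
// app/boards/[id]/page.tsx — a Server Component: runs only on the server,
// can query the database directly, and is never shipped in the client bundle.
import { CardItem } from "./CardItem";
import { getBoard } from "@/lib/queries"; // illustrative helper, not from the repo

export default async function BoardPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params;
  const board = await getBoard(id);
  return (
    <main>
      <h1>{board.title}</h1>
      {board.cards.map((card) => (
        <CardItem key={card.id} card={card} />
      ))}
    </main>
  );
}
```

```tsx
// app/boards/[id]/CardItem.tsx — a Client Component: the "use client" directive
// marks the hydration boundary, so this code (plus React's runtime) is sent to the browser.
"use client";
import { useState } from "react";

export function CardItem({ card }: { card: { id: string; title: string; completed: boolean } }) {
  const [completed, setCompleted] = useState(card.completed);
  return (
    <label>
      <input type="checkbox" checked={completed} onChange={() => setCompleted(!completed)} />
      {card.title}
    </label>
  );
}
```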
Solid-based (SolidStart, TanStack Start + Solid): Fine-grained reactivity via signals with read/write segregation where getters are separate from setters. JSX syntax similar to React, but signals automatically track dependencies, eliminating manual dependency arrays and rules of hooks. Components run once during initial render; subsequent updates happen directly at the reactive primitive level without re-executing component functions, minimizing CPU overhead on mobile devices. TanStack Start provides more feature-rich routing which causes slightly larger bundles compared to SolidStart’s leaner integration.
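A minimal sketch of the signals model (an illustrative counter, not code from the kanban apps):

```tsx
// SolidJS: createSignal returns a separate getter and setter (read/write segregation).
// The component function runs once; only the text node that reads count() updates on change.
import { createSignal, createEffect } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);

  // Dependencies are tracked automatically — no dependency array, no manual memoization.
  createEffect(() => console.log("count is now", count()));

  return <button onClick={() => setCount(count() + 1)}>Clicks: {count()}</button>;
}
```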
SvelteKit: Compile-time optimization that transforms components into imperative DOM updates, with minimal runtime overhead since the compiler does most work at build time. Runes ($state, $derived, $effect) powered by signals enable fine-grained reactivity, with universal reactive primitives that work in .js/.ts files beyond just .svelte components. The compiler converts developer-written code into lean, optimized production code. This approach generates JavaScript with smaller bundles through aggressive tree-shaking, helping mobile performance on both network transfer and parse time.
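A minimal sketch of runes in a shared .svelte.ts module, which is what the universal reactive primitives mentioned above enable (illustrative, not from the repo):

```ts
// counter.svelte.ts — Svelte 5 runes work in plain .svelte.ts modules, not just components.
// $state and $derived are compiler-provided runes; the compiler turns them into signal
// reads/writes, so importing components re-render only what actually changed.
export function createCounter() {
  let count = $state(0);
  let doubled = $derived(count * 2);

  return {
    get count() { return count; },
    get doubled() { return doubled; },
    increment() { count += 1; },
  };
}
```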
Nuxt (Vue): Reactive refs with .value access for reactivity tracking. Uses aggressive optimization including compile cache for faster cold starts and reactive keys for intelligent data fetching. In Vue 3 the reactivity system has been refactored for improved performance and memory efficiency, critical for mobile devices. Vapor Mode (experimental, not used here) offers a compile-first approach that bypasses Virtual DOM entirely, compiling templates directly to native DOM operations with significantly smaller runtime overhead. Despite being a “big three” framework, Nuxt achieves competitive bundle sizes and exceptional runtime performance, with support for mixed component trees combining different rendering strategies.
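A minimal sketch of Vue’s reactive refs (illustrative, not the Nuxt app’s code):

```ts
// Vue 3: ref() wraps a value; reads and writes go through .value, which is how the
// reactivity system tracks dependencies. computed() derives state lazily from refs.
import { ref, computed, watchEffect } from "vue";

const count = ref(0);
const doubled = computed(() => count.value * 2);

watchEffect(() => console.log(`count=${count.value}, doubled=${doubled.value}`));

count.value++; // triggers the effect and recomputes doubled
```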
Analog (Angular): Modern signals API provides fine-grained reactivity through primitives like signal, effect, linkedSignal, queries, and inputs. Zoneless mode enables removing zone.js from bundles entirely, eliminating its synchronization overhead which improves mobile CPU efficiency. Uses dependency injection patterns and ships with RxJS for enterprise reactive patterns, creating heavier bundles despite signals-based reactivity. Angular remains a “batteries-included” framework where common functionality is built-in rather than requiring third-party libraries. Incremental hydration reduces time-to-interactive by hydrating components progressively rather than all at once.
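A minimal sketch of the signals API in a standalone component (illustrative, not from the Analog app):

```ts
// Angular signals: signal() creates writable state, computed() derives from it, and
// effect() re-runs when any signal it reads changes — no zone.js change detection needed.
import { Component, signal, computed, effect } from "@angular/core";

@Component({
  selector: "app-counter",
  standalone: true,
  template: `<button (click)="increment()">Clicks: {{ count() }} (doubled: {{ doubled() }})</button>`,
})
export class CounterComponent {
  count = signal(0);
  doubled = computed(() => this.count() * 2);

  constructor() {
    // Created in an injection context; re-runs whenever count changes.
    effect(() => console.log("count is now", this.count()));
  }

  increment() {
    this.count.update((n) => n + 1);
  }
}
```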
Marko: Streaming SSR with fine-grained reactivity powered by compiler analysis. The compiler analyzes the reactivity graph at build time and compiles away components themselves, shipping only the minimal code needed for events and effects, achieving zero component overhead at runtime. Statically analyzes which components are stateful versus server-only, breaking pages into top-level stateful components and selectively serializing only needed data. HTML-first syntax with automatic dependency tracking eliminates boilerplate. Supports streaming asynchronous SSR with selective hydration where only interactive parts ship JavaScript to the client, critical for minimizing mobile bundle size.
Qwik City: Resumability architecture that serializes application state and component boundaries directly into HTML during server rendering, allowing the client to “resume” execution without traditional hydration that requires re-executing components to attach event listeners. Employs fine-grained lazy loading down to the component level, deferring JavaScript downloads until actual user interaction occurs. Event handlers, component logic, and complex interactions are delivered lazily on-demand, eliminating bulk initial JavaScript execution that burdens mobile CPUs. Optimized for edge platforms with distributed deployment, delivering sub-second load times on mobile networks. Best suited for complex client-heavy applications requiring instant interactivity.
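A minimal sketch of the resumability model from the developer’s side (illustrative, not from the repo): the $ suffixes mark lazy-loadable boundaries that Qwik’s optimizer splits into separately fetched chunks.

```tsx
// Qwik: component$ and onClick$ mark lazy boundaries. The server-rendered HTML carries
// serialized state, so the click handler is fetched and run on first interaction —
// there is no hydration pass that re-executes the component tree up front.
import { component$, useSignal } from "@builder.io/qwik";

export const Counter = component$(() => {
  const count = useSignal(0);
  return <button onClick$={() => count.value++}>Clicks: {count.value}</button>;
});
```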
Astro + HTMX: Multi-page architecture (MPA) where Astro serves as a simple HTML renderer with no client-side JavaScript framework. HTMX handles all interactivity through declarative HTML attributes that trigger server requests and swap HTML fragments into the page. Instead of client-side state management, interactions send requests to the server which returns HTML snippets that HTMX injects into the DOM. This approach ships minimal JavaScript (just the HTMX library) and keeps pages lean as routes increase. Best suited for applications where server round trips are acceptable and client-side reactivity isn’t critical. Trades rich client-side state management for extreme simplicity and tiny bundles, optimal for form-driven or content-heavy applications.
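A hedged sketch of the server-driven model: an illustrative Astro endpoint returns an HTML fragment, and HTMX attributes in the page markup (shown in the trailing comment) declare when to request it and where to swap it in. The route path and selectors are hypothetical, not the repo’s.

```ts
// src/pages/api/cards.ts — illustrative Astro endpoint (not the repo's actual route).
// HTMX sends a normal form POST; the server responds with an HTML fragment that HTMX
// swaps into the page, so no client-side state management is needed.
import type { APIRoute } from "astro";

export const POST: APIRoute = async ({ request }) => {
  const form = await request.formData();
  const title = String(form.get("title") ?? "");
  // ...persist the card with Drizzle here; escape `title` before interpolating in real code...

  // The returned markup *is* the state update: HTMX inserts it where hx-target points.
  const fragment = `<li class="card">${title}</li>`;
  return new Response(fragment, { headers: { "Content-Type": "text/html" } });
};

// In the page template, the interaction is declared with attributes like:
// <form hx-post="/api/cards" hx-target="#todo-list" hx-swap="beforeend"> ... </form>
```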
TanStack Start: Meta-framework with a client-first architecture philosophy (versus server-first approaches like Next.js RSC), maintaining powerful server capabilities while prioritizing client-side routing and state management. Router-centric design where the majority of framework functionality comes from TanStack Router, which is framework-agnostic and supports React and Solid. Provides isomorphic loaders that work on both server and client, streaming SSR for progressive rendering, and server functions for type-safe RPC. React version ships traditional hydration with React’s baseline costs, while Solid version achieves roughly half the bundle size using identical routing infrastructure, demonstrating how UI library choice impacts mobile performance.
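A rough sketch of the loader-plus-server-function pattern described above. Package names and exact signatures have shifted across TanStack Start versions, so treat this as an assumption-laden illustration rather than the repo’s code:

```tsx
// Illustrative TanStack Start route (React flavor). The Solid flavor uses the same
// router and loader shape; only the UI library underneath changes.
import { createFileRoute } from "@tanstack/react-router";
import { createServerFn } from "@tanstack/react-start"; // package name varies by version
import { getBoardFromDb } from "../db"; // hypothetical helper

// A type-safe RPC: the handler runs on the server, callable from server or client.
const getBoard = createServerFn({ method: "GET" }).handler(async () => getBoardFromDb());

export const Route = createFileRoute("/boards/$boardId")({
  // The isomorphic loader runs on the server for the first render and on the client
  // during SPA navigation — the same code path either way.
  loader: () => getBoard(),
  component: BoardPage,
});

function BoardPage() {
  const board = Route.useLoaderData();
  return <h1>{board.title}</h1>;
}
```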
Fairness Check: Pinned versions, identical data volume on Board page, normalized CSS/icon handling (treeshake/purge). All measurements use Chrome Lighthouse with mobile emulation (Pixel 5, 4G throttling, 1x CPU). The measurement script uses 1x CPU to isolate bundle size impact from CPU performance variance. Cache is cleared between each measurement run to simulate first-visit experience.
Why I Expanded from Three to Ten Frameworks
When I started this evaluation, I expected to compare Next.js, SolidStart, and SvelteKit, then make a recommendation. But building those first three implementations revealed something I hadn’t anticipated: the performance issues I saw in Next.js weren’t just a React problem. They were likely systemic across the “big three” dominant framework ecosystems (React, Angular, Vue).
React (via Next.js) ships 154.5 to 176.3 kB compressed (497.8 to 564.9 kB raw) with poor runtime performance at 467ms FCP. Angular (via Analog) ships 125.4 to 203.4 kB compressed (430.3 to 666.5 kB raw). Both suffer from heavy baseline bundles that create performance costs for mobile users. Vue (via Nuxt) tells a dramatically different story. Nuxt ships competitive bundle sizes at 72.3 kB compressed (224.9 kB raw) AND achieves exceptional 38ms FCP, making it faster than all React and Angular options and competitive with next-gen frameworks like SvelteKit (38ms, tied) and Marko (39ms). This puts Nuxt in a unique position: it’s the only “big three” meta-framework that competes on mobile web performance. React requires architectural changes to achieve similar results. Angular has no clear path forward. Nuxt proved that with proper optimization, even established frameworks can deliver next-gen performance.
React’s explicit strategy: React Native for mobile. In practice, React’s web runtime trades bundle size for other goals. Many teams pursuing top-tier mobile performance choose React Native. The architectural choices that make React heavy on the web are deliberate. They solve real problems for desktop and app development. But for mobile web, React’s position is essentially: use React Native instead.
This is a strategic business decision, not a technical oversight. Facebook (Meta) doesn’t build heavy React web apps on mobile. They invest heavily in React Native and native apps. When you use their mobile app, you’re not running a web browser rendering a React SPA. You’re running native code. React Native is their solution for mobile performance. The React web framework can be expensive because the assumption is that if you care about mobile, you should use a different tool.
The problem with this strategy is that it abandons the open web. React Native requires building two separate applications. Your company needs React engineers for the web, different engineers or different skill sets for native mobile development, and App Store difficulties on top.
This isn’t just an inconvenience. It’s technofeudalism. React Native solves the mobile performance problem, but it does so by pushing developers out of the open web and into app store platforms where Apple and Google extract up to 30% of transactions, control distribution, and can revoke access at will. React’s mobile strategy inherently drives teams toward platform capture. The web offers an alternative: no gatekeepers, no platform fees, direct distribution. (I explore this dynamic in depth in “The Web is the Last Commons” section below, building on economist Yanis Varoufakis’s analysis of how app stores operate as digital fiefdoms rather than competitive markets.)
Other frameworks make a different bet: the web should work well on mobile without requiring a parallel native technology. The teams behind Marko, Solid, Svelte, Qwik, and Vue have done phenomenal engineering work rethinking these fundamentals from first principles. They’ve built innovative solutions that optimize for the web as a first-class platform for mobile. They’re all saying: you shouldn’t need a completely different technology stack just to reach people with phones. The web should be competitive on its own.
React’s choice is coherent within their ecosystem strategy. It makes sense given their investment in React Native. But it’s not neutral. It’s a choice that deprioritizes mobile web performance in favor of extensive runtime abstractions. For teams building mobile first web applications, it’s a choice that works against you.
That’s why I expanded the evaluation to ten frameworks. If I was going to make an honest recommendation for the team, I needed to understand what’s actually possible. React’s heavy bundle sizes aren’t bugs or poor engineering. They’re the predictable cost of React’s runtime architectural overhead. Angular has similar bundle size issues. Vue showed that the “big three” can compete on mobile web performance when properly configured. For teams building mobile first web applications without the resources for React Native, React and Angular create unavoidable performance limitations, but Nuxt offers a viable path forward.
The measurements that follow show exactly what that tradeoff looks like in practice. They also show what happens when frameworks prioritize mobile web performance from the start. Marko at 6.8 kB compressed. Solid at 30.6 kB compressed. Svelte at 47.8 kB compressed. These aren’t just smaller numbers. They’re fundamentally different architectural approaches that treat the web as a first class platform for mobile.
Bundle Size Reality Check
The Numbers (Versions used)
Production builds measured showing raw JavaScript size (with compressed/gzipped transfer size in parentheses). Raw size reflects actual code volume and is more consistent for comparison. Compressed size shows what users download over the network.
Framework versions tested: Next.js 16.0.0-beta.0 (React 19.2.0), TanStack Start 1.133.8 (React 19.2.0), Nuxt 4.1.2 (Vue 3.5.22), Analog (Angular core 20.3.3), Marko 6.0.85 with @marko/run 0.8.1, SolidStart (@solidjs/start 1.2.0, solid-js 1.9.9), SvelteKit 2.43.6 (Svelte 5), Qwik City 1.16.1 (Qwik 1.16.1), Astro 5.14.5 + HTMX.
These are minimal baseline implementations. Typical production apps include authentication, analytics, feature flags, form libraries, and other dependencies that multiply these numbers significantly. The framework overhead shown here compounds with every additional dependency.
Table ordered by board page size (smallest first):
| Framework | Board Page Raw (Compressed) | Homepage Raw (Compressed) | Difference from Next.js (Board Page) |
|---|---|---|---|
| Marko | 88.8 kB (28.8 kB) | 12.6 kB (6.8 kB) | 6.36x smaller |
| Qwik City | 114.8 kB (58.4 kB) | 88.5 kB (43.6 kB) | 4.92x smaller |
| SvelteKit | 125.2 kB (54.1 kB) | 103.4 kB (47.8 kB) | 4.51x smaller |
| Astro + HTMX | 127.3 kB (34.3 kB) | 88.9 kB (22.0 kB) | 4.44x smaller |
| SolidStart | 128.6 kB (41.5 kB) | 85.9 kB (30.6 kB) | 4.39x smaller |
| TanStack Start + Solid | 182.6 kB (60.4 kB) | 153.0 kB (52.0 kB) | 3.09x smaller |
| Nuxt | 224.9 kB (72.3 kB) | 224.9 kB (72.3 kB) | 2.51x smaller |
| TanStack Start | 373.6 kB (118.2 kB) | 316.8 kB (100.7 kB) | 1.51x smaller |
| Next.js 16 | 564.9 kB (176.3 kB) | 497.8 kB (154.5 kB) | Baseline |
| Analog | 666.5 kB (203.4 kB) | 430.3 kB (125.4 kB) | 1.18x larger |
Field data validation: The Chrome User Experience Report (CrUX) provides real-world Core Web Vitals data from millions of actual websites using these frameworks on mobile devices. This field data complements the controlled measurements in this post. Important caveat: CrUX data reflects how these frameworks are used in production by average developers, not optimal implementations. If a framework shows poorly in CrUX but well in these tests, it demonstrates what’s possible with proper configuration, performance tuning, and dependency discipline. The gap between field data and optimized implementations reveals opportunity for improvement in real-world usage patterns.
Key Insights:
- Marko delivers the smallest bundle at just 12.6 kB raw (6.8 kB compressed) for the homepage, which is 39.5 times smaller than Next.js by raw size.
- SolidStart at 85.9 kB raw (30.6 kB compressed) is 5.79 times smaller than Next.js. Astro with HTMX follows at 88.9 kB raw (22.0 kB compressed), 5.60 times smaller. Qwik at 88.5 kB raw (43.6 kB compressed) is 5.62 times smaller. SvelteKit at 103.4 kB raw (47.8 kB compressed) is 4.81 times smaller.
- TanStack Start with Solid lands at 153.0 to 182.6 kB raw (52.0 to 60.4 kB compressed), sitting between SvelteKit and Nuxt.
- Nuxt at 224.9 kB raw (72.3 kB compressed) achieves 2.51 times smaller bundles than Next.js. Importantly, Nuxt pairs that with elite runtime performance at 38ms FCP, tied with SvelteKit and close behind SolidStart’s fastest 35ms FCP.
- The controlled comparison: using identical TanStack Start infrastructure, React bundles are 2x the size of Solid bundles, isolating React’s runtime overhead. TanStack Start with React at 316.8 to 373.6 kB raw (100.7 to 118.2 kB compressed) is only 1.51 times smaller than Next.js despite dropping App Router, demonstrating React’s architectural overhead (detailed in “React’s Ceiling in Practice” below). Next.js 16’s React Compiler handles re-render optimization but cannot eliminate core runtime costs.
- The architectural alternatives (Marko’s streaming SSR, Svelte’s compiler, Solid’s signals, Qwik’s resumability) yield 4.39 to 39.5 times improvements in raw bundle size.
Remember that real estate agent from the introduction, pulling up your app at an open house? The difference between Marko’s 88.8 kB raw (28.8 kB compressed) and Next.js’s 564.9 kB raw (176.3 kB compressed) translates to roughly 1.5 seconds of them staring at a loading screen on cellular while potential buyers wait. These seconds are the baseline. Time waiting to load increases with every feature and every dependency added. Those aren’t just abstract kilobytes. That’s their time, their patience, and ultimately their impression of your product.
Critical scaling consideration: These bundle sizes represent a mid-complexity app with multiple routes. MPA frameworks like Marko ship minimal JavaScript per page (6.8 to 28.8 kB compressed per route), staying lean as you add features. SPA frameworks ship routing and framework runtime upfront. Even with code splitting, SPAs maintain higher baselines: Solid/Svelte start at 30.6 to 54.1 kB compressed then add route chunks, while React/Vue/Angular start at 72.3 to 203.4 kB compressed. The architectural model creates different scaling characteristics.
Important context on HTMX: The Astro + HTMX implementation achieves excellent bundle sizes with the simplest codebase, but sacrifices client-side reactivity for server-driven interactions. HTMX excels for simpler, form-driven applications where most interactions trigger server requests. However, as your app’s need for rich client-side state management grows, HTMX becomes less practical. For reactive applications, Marko (6.8 to 28.8 kB compressed), Solid (30.6 to 41.5 kB compressed), and Svelte (47.8 to 54.1 kB compressed) maintain small bundles while delivering rich reactivity.
React’s Ceiling in Practice (TanStack vs Next)
TanStack Start achieves 100.7 to 118.2 kB compressed bundles (316.8 to 373.6 kB raw) while Next.js ships 154.5 to 176.3 kB compressed (497.8 to 564.9 kB raw) in this measurement. Both use React 19. That’s only a 33 to 35% improvement, primarily reflecting App Router + RSC and related runtime.
That modest gap reveals that React’s runtime architecture is the primary cost, not just Next.js’s meta-framework choices.
What’s the difference? Next.js ships the full React Server Components runtime plus serialization layers, component boundary management, caching infrastructure, App Router with all its routing features, progressive enhancement for Server Actions, image optimization, and middleware. TanStack Start strips most of that out: traditional SSR without RSC, leaner routing, and simple RPC-style server functions.
Both use server-side rendering, but Next.js’s RSC model adds substantial overhead. Server Components render on the server only, Client Components get marked with "use client", the server serializes everything to a special format, and the client needs runtime code to deserialize and coordinate those boundaries. TanStack Start uses the simpler traditional SSR approach: render on server, ship HTML, hydrate everything on the client. No serialization, no boundary coordination.
In this measurement, Next.js’s App Router + RSC adds roughly 53 to 58 kB compressed. The remaining 100.7 to 118.2 kB compressed (316.8 to 373.6 kB raw) is React’s core runtime cost: reconciliation, event system, and hydration baseline.
Compare that to alternatives. SolidStart delivers 30.6 to 41.5 kB compressed (85.9 to 128.6 kB raw) using JSX, 2.91x smaller than TanStack Start with React. SvelteKit achieves 47.8 to 54.1 kB compressed (103.4 to 125.2 kB raw), which is 1.97x to 2.47x smaller than TanStack Start. Qwik delivers 43.6 to 58.4 kB compressed (88.5 to 114.8 kB raw), which is 1.72x to 2.31x smaller.
For React teams, the path forward isn’t straightforward. TanStack Start proves you can remove Next.js’s overhead, but you’re still carrying React’s 100.7 to 118.2 kB compressed (316.8 to 373.6 kB raw) baseline. SolidStart offers similar JSX syntax with 2.91x smaller bundles. And if you like TanStack Start’s approach, you can use it with Solid for the same routing patterns with dramatically smaller bundles.
Here’s the bottom line: React’s architecture (not just the Virtual DOM, but also synthetic events, platform patching, and sheer feature complexity) creates unavoidable performance costs that no meta-framework optimization can eliminate. To be fair, Virtual DOM implementations can be small (see Preact at 4 kB). React’s size reflects deliberate choices to circumvent platform constraints and provide extensive features. TanStack Start proves this: removing App Router overhead yields only a 33 to 35% improvement. To escape this ceiling and achieve 3 to 4 times smaller bundles, you need a fundamentally different architectural approach. Frameworks that lean into the platform instead of circumventing it can deliver dramatic size reductions. The React team chose to accept these costs to solve other problems (Server Components, unified patterns). That’s a legitimate choice. But it’s not negotiable within React.
TanStack Start: React vs Solid
Here’s where it gets interesting. TanStack Start is a new meta-framework that currently supports both React and Solid. Using the same meta-framework with two different UI libraries gives us the perfect controlled comparison.
TanStack Start with React: Ships 373.6 kB raw (118.2 kB compressed) compared to Next.js’s 564.9 kB raw (176.3 kB compressed). That’s 34% smaller by raw size. If you’re stuck maintaining an existing Next.js codebase, TanStack Start offers a legitimate escape path from App Router complexity while staying in React. But that’s still 373.6 kB raw (118.2 kB compressed) of React’s core runtime.
TanStack Start with Solid: Delivers 182.6 kB raw (60.4 kB compressed). SolidStart comes in about 30% smaller at 128.6 kB raw (41.5 kB compressed), but TanStack Start with Solid is still dramatically better than any React option. The size difference is largely due to TanStack Router having more features than SolidStart’s Router. This buys you additional routing capabilities and framework flexibility.
The controlled comparison that matters: React at 373.6 kB raw (118.2 kB compressed) versus Solid at 182.6 kB raw (60.4 kB compressed) using identical TanStack Start infrastructure. Same routing, same SSR approach, same patterns. React bundles are 2x the size of Solid. This isolates React’s runtime cost versus Solid’s architecture. No meta-framework differences, no excuses.
All four implementations achieve perfect 100 Lighthouse scores. Bundle size differences are real, but modern devices handle them without impacting perceived performance in this test.
For greenfield projects? Don’t choose React. TanStack Start with Solid gives you 182.6 kB raw (60.4 kB compressed) bundles, but native SolidStart delivers 128.6 kB raw (41.5 kB compressed) with tighter integration. If you want the absolute smallest with this architecture, go SolidStart. If you like TanStack Start’s patterns and might want framework flexibility later, TanStack Start with Solid is reasonable. But starting a new project with React (whether Next.js or TanStack Start) means voluntarily accepting 2x to 3x larger bundles for no performance gain.
The Verdict: What I’m Recommending
After building ten implementations (with help, of course; see the acknowledgements below) and measuring everything, the data gives clear direction. For our mobile first requirements, here’s what I found:
The next-gen frameworks all achieve essentially instant performance. The 35-39ms FCP range feels perceptually identical to users, and it’s 12 to 13 times faster than Next.js at 467ms. Since all next-gen frameworks feel equally fast, choose based on bundle size priorities and developer experience rather than microscopic FCP differences.
That said, context matters. Not every project can or should switch frameworks.
When Next.js still makes sense: For large existing React codebases, migration costs may outweigh performance benefits. If you’re stuck with React and can’t migrate, consider TanStack Start over Next.js for a roughly 33 to 35% bundle reduction without App Router complexity. That’s a practical business decision. But for greenfield projects? There’s no legacy to maintain, no migration costs to weigh. You’re choosing to build on a foundation that costs your users 2x to 3x more JavaScript on every visit. You’re voluntarily accepting worse performance when better options cost nothing extra. That’s not a neutral technical choice. “We only know React” isn’t a technical constraint, it’s a learning investment decision. And “organizational politics” is real, but it’s not a technical justification. It’s an admission that better options exist but can’t be chosen.
Reality check on common objections:
“But hiring!” Competent developers learn frameworks. That’s the job. These alternatives are actually easier to learn than React: no rules of hooks, no dependency arrays, no manual memoization dance. The real difficulty isn’t the learning curve, it’s creating an engineering culture that acknowledges constraints and makes intentional decisions with those constraints in mind.
“But ecosystem!” React’s ecosystem is both advantage and liability. Large libraries ship code for scenarios you’ll never encounter. That date picker with every locale? You need 3 features; you’re shipping 300. For mobile-first projects where every kilobyte matters, this becomes a problem. Modern AI tools make building exactly what you need feasible: generate the function instead of importing 50 kB for 3 features (a sketch of this approach follows these objections). Smaller bundles, code you understand.
“But it’s risky!” Shipping 3x larger bundles to mobile users on cellular is the actual risk. Slow loads damage your brand and cost conversions. The “safe choice” has measurable costs.
“But my users are desktop-only!” Let’s be honest: “desktop-only” is usually an excuse to skip performance discipline entirely. And it’s rarely true for long. Six months later someone asks “can I check this on my phone?” and suddenly you’re stuck. Better to build it right from the start. Desktop users still benefit from faster parsing and execution. Even on WiFi, 30.6 kB compressed loads noticeably faster than 176.3 kB compressed. More importantly, why would you voluntarily accept 3x worse performance when the better option costs nothing extra? Performance is a feature regardless of screen size. Building with constraints makes you a better engineer. “Desktop-only” shouldn’t mean “no discipline.”
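Here’s the kind of “build the three features you need” code the ecosystem objection refers to: a hypothetical relative-time helper built on the platform’s Intl API instead of a date library. It isn’t from the kanban apps; it just illustrates the trade.

```ts
// Hypothetical helper: a "3 days ago" label in ~20 lines of platform API,
// instead of importing a date library for one formatting feature.
export function timeAgo(date: Date, locale = "en"): string {
  const rtf = new Intl.RelativeTimeFormat(locale, { numeric: "auto" });
  const diffSeconds = (date.getTime() - Date.now()) / 1000;

  const units: [Intl.RelativeTimeFormatUnit, number][] = [
    ["year", 60 * 60 * 24 * 365],
    ["month", 60 * 60 * 24 * 30],
    ["day", 60 * 60 * 24],
    ["hour", 60 * 60],
    ["minute", 60],
    ["second", 1],
  ];

  for (const [unit, secondsInUnit] of units) {
    if (Math.abs(diffSeconds) >= secondsInUnit || unit === "second") {
      return rtf.format(Math.round(diffSeconds / secondsInUnit), unit);
    }
  }
  return rtf.format(0, "second");
}

timeAgo(new Date(Date.now() - 3 * 86_400_000)); // "3 days ago"
```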
Why you should seriously consider the alternatives: The mental models are often simpler (see Framework Architectures section). Alternatives like Solid, Svelte, and Marko streamline patterns with automatic reactivity. Performance comes by default with 2x to 6x smaller bundles requiring no optimization work. Mobile web matters with real users on phones, cellular connections, and mid-tier devices. You’ll write less code, ship less JavaScript, and debug fewer framework quirks. Most importantly, greenfield projects deserve choices made on merit rather than defaults.
These alternatives are especially compelling for mobile-first applications where bundle size directly impacts user experience. They matter for the growing demographic of people who prefer phones over computers. Mobile professionals like real estate agents, field service workers, healthcare staff, delivery drivers, and sales reps benefit most. Teams building internal tools or MVPs without enterprise politics constraining decisions can move faster. Developers who value technical excellence over popularity contests will appreciate the engineering quality. Importantly, teams save significant money by maintaining a single high-performance web codebase instead of splitting resources between separate web and native applications. This often means smaller teams, lower overhead, and faster iteration cycles compared to organizations maintaining web apps and native mobile apps.
Choosing among the alternatives (organized by primary use case):
Smallest Bundles: Choose Marko for the absolute best bundle sizes (6.8 to 28.8 kB compressed). The next closest competitor ships roughly 44% more compressed JavaScript, making Marko the clear winner when bundle size is your top priority. The MPA architecture ships minimal JavaScript per page, staying lean as you add routes. The developer experience is excellent once you embrace its streaming model. Note: Marko 6 is currently in beta (tagged as next on npm) and expected to leave beta by end of year, with no expected API changes but ongoing bug fixes and optimizations.
JSX Familiarity: Choose SolidStart if you want the easiest migration path from React. At 128.6 kB raw (41.5 kB compressed), SolidStart uses JSX syntax with automatic dependency tracking that eliminates manual memoization. This delivers 4.39x smaller bundles than Next.js while feeling immediately familiar to React developers. The mental model is actually simpler than React because signals are more straightforward than hooks.
Best All-Around Developer Experience: Choose SvelteKit for approachable syntax and excellent defaults. At 125.2 kB raw (54.1 kB compressed), SvelteKit delivers 4.51x smaller bundles than Next.js with progressive enhancement by default and minimal framework overhead. The compiler-based approach means less runtime code and cleaner component logic. Best for developers from any background seeking readable code with few framework quirks.
Resumability Pattern: Choose Qwik City if you have a larger application that demands immediate interactivity on load with significant client-side functionality. At 88.5 to 114.8 kB raw (43.6 to 58.4 kB compressed), Qwik uses resumability instead of hydration, yielding instant time-to-interactive. Different architectural approach that solves different scaling problems.
Established Ecosystem: Choose Nuxt if you want Vue’s mature plugin ecosystem with competitive mobile web performance. At 224.9 kB raw (72.3 kB compressed), Nuxt proves that established “big three” frameworks can achieve next-gen performance when properly configured. Best for teams already familiar with Vue, projects that benefit from extensive community plugins, or teams that value the safety of a well-established framework. Nuxt bridges the gap between the familiar and the performant.
Important scaling consideration: Marko’s MPA architecture ships minimal JavaScript per page (stays lean as you add routes), while SPAs like SvelteKit and SolidStart ship routing and framework runtime upfront then add route chunks. Both use code splitting, but the scaling characteristics differ: the MPA stays lean on every route, while the SPA carries its routing and runtime baseline from the first load and grows from there.