Last month, a Cloudflare engineer sat down with an AI coding assistant and rebuilt Next.js from scratch. Not a wrapper. Not an adapter. A complete reimplementation of the API surface.
The result? 57% smaller JavaScript bundles.
Let that sink in. The same application, the same features, the same developer experience - but with half the JavaScript shipped to users.
This isn't just a new deployment option. It's an indictment of how much performance we've been leaving on the table.
The uncomfortable truth about Next.js performance
I've spent years telling developers that Next.js is fast. And it is - compared to a poorly configured React SPA. But we've been grading on a curve.
Catch Metrics recently crawled 300,000 production Next.js sites. The median site shipped over 1MB of JavaScript. The 90th percentile exceeded 3MB. The worst offender shipped 56MB.
These aren't broken sites. They're normal Next.js applications built by normal developers following normal tutorials. The framework's defaults, combined with typical dependency choices, naturally produce bloated bundles.
We've normalized shipping megabytes of JavaScript because "that's just how modern web development works." Cloudflare just proved it doesn't have to be.
Why this matters more than you think
Here's what 1MB of JavaScript actually costs your users:
On a decent laptop with a fast connection, maybe 200-300ms of parsing time. Annoying but survivable.
On a three-year-old Android phone on a 4G connection - the global median device - you're looking at 1-2 seconds of main thread blocking. The page looks loaded. The buttons are visible. But nothing works. Taps do nothing. Scrolling stutters. The user waits, confused, wondering if they clicked wrong.
This is the INP problem that's been plaguing Next.js sites since Google made it a Core Web Vital. All that JavaScript needs to be parsed and executed before the page becomes truly interactive. No amount of server-side rendering helps when the client-side hydration is choking on a megabyte of code.
The 57% reduction Cloudflare achieved isn't an incremental improvement. It's the difference between a janky experience and a responsive one for a huge portion of your users.
What Cloudflare actually built
Vinext isn't a port or a compatibility layer. It's a from-scratch reimplementation of the Next.js API surface built on Vite instead of Turbopack.
The benchmark numbers:
- Build time: 1.67 seconds vs 7.38 seconds (4.4x faster)
- Bundle size: 72.9 KB vs 168.9 KB gzipped (57% smaller)
They achieved 94% coverage of the Next.js 16 API - App Router, Pages Router, Server Components, Server Actions, middleware, streaming, ISR. The test suite includes tests ported directly from Next.js's own repository.
The part that got headlines: it took about a week, with AI writing most of the code, for roughly $1,100 in API costs.
But here's what's actually interesting: the performance gap isn't because Cloudflare engineers are smarter than Vercel engineers. It's because they made fundamentally different architectural choices.
The architecture problem nobody talks about
Next.js is built on a specific set of assumptions:
- Turbopack as the bundler (or webpack before that)
- A runtime that handles routing, data fetching, and rendering
- Tight integration with Vercel's infrastructure
These choices optimize for developer experience and deployment simplicity on Vercel. They don't optimize for minimal client-side JavaScript.
Vite takes a different approach. It's lighter. It does less magic. The output is closer to what you'd write by hand.
When Cloudflare rebuilt the same features on Vite, the bundle size dropped by more than half - not because they removed features, but because the underlying architecture doesn't carry the same weight.
This is the tradeoff nobody mentions in Next.js tutorials: you're paying a performance tax for framework conveniences. That tax might be worth it. But you should at least know you're paying it.
The Vercel elephant in the room
Let's talk about what's really going on here.
Vercel makes money when you deploy Next.js to Vercel. The framework is open source, but it's optimized for their platform. Features like ISR, middleware at the edge, and image optimization work seamlessly on Vercel and require workarounds everywhere else.
This is a legitimate business model. But it creates misaligned incentives around performance.
A smaller bundle means less compute time, which means lower hosting costs - for everyone except the hosting provider. Vercel has no financial incentive to minimize the JavaScript their framework produces. If anything, heavier applications that need more edge compute and bandwidth serve their business model.
Cloudflare, on the other hand, wants you to deploy to Workers. They benefit when applications are lean and fast. A 57% smaller bundle means lower costs for them and faster sites for their customers.
I'm not saying Vercel deliberately bloats bundles. But I am saying the incentives don't align with minimal JavaScript. And Vinext proves what's possible when they do.
Should you actually switch?
No. Not yet. Maybe not ever.
Vinext is a week old. It's running in production on exactly one notable site (CIO.gov). The test coverage is impressive, but it hasn't faced the chaos of real-world applications with weird edge cases and legacy code.
Next.js, whatever its flaws, has years of battle-testing. Thousands of production applications. An ecosystem of components, tutorials, and Stack Overflow answers. When something breaks, someone has probably already solved it.
If you're building a new project and deploying to Cloudflare Workers, Vinext is worth evaluating. Run the same app through both and compare. But don't migrate your production Next.js app based on benchmark numbers from a blog post.
What you should do instead
The real lesson from Vinext isn't "switch frameworks." It's "your bundles are probably bigger than they need to be."
Here's how to find out:
Actually measure your bundle
ANALYZE=true npm run build
If you haven't set up the bundle analyzer, do it now:
// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({ /* your config */ });
Open the generated report. Look at the biggest chunks. You'll probably find surprises - full libraries imported for single functions, dependencies you forgot you added, duplicate copies of the same package.
Understand where your JavaScript comes from
Most bundle bloat falls into three categories:
1. Framework overhead
This is the part you can't easily change. Next.js itself ships a runtime for routing, hydration, and various features. Vinext proves this can be smaller, but with standard Next.js, you're stuck with it.
2. Your dependencies
This is where most bloat lives. One careless `import _ from 'lodash'` pulls in roughly 70KB of minified code. A date library, a UI component library, a charting library - they add up fast.
Audit every dependency and ask three questions: do I actually need this? Can I use a lighter alternative? Can I import only what I use (`import debounce from 'lodash/debounce'` instead of the whole library)?
3. Your code
Usually the smallest portion, but still worth examining. Are you shipping code for features most users never touch? Can you dynamic import the admin dashboard instead of bundling it with the public site?
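In Next.js the tool for this is `next/dynamic` (or React's `lazy`), but the mechanism underneath is the standard dynamic `import()`: the module's code isn't loaded until the code path actually runs. A self-contained Node sketch of the pattern - a builtin module stands in for the "dashboard" so the snippet runs anywhere:

```javascript
// Nothing heavy is loaded at startup; import() defers the cost to first use.
let dashboardPromise = null;

async function openAdminDashboard() {
  // In a Next.js app this would be something like:
  //   const AdminDashboard = dynamic(() => import('./AdminDashboard'));
  // Here a Node builtin stands in for the heavy module so the sketch runs as-is.
  dashboardPromise ??= import('node:crypto');
  const { createHash } = await dashboardPromise;
  return createHash('sha256').update('admin').digest('hex');
}
```

Visitors who never open the admin dashboard never pay for its JavaScript - the promise is also cached, so repeat calls don't re-trigger the load.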
Set a budget and enforce it
This example uses size-limit, configured in your package.json:
{
  "size-limit": [
    {
      "path": ".next/static/chunks/pages/**/*.js",
      "limit": "150 KB"
    }
  ]
}
Run `npx size-limit` in CI. When someone adds a dependency that blows the budget, the check fails the build. It's the only reliable way to prevent gradual bloat.
150KB gzipped is aggressive but achievable for most pages. Adjust based on your actual needs, but have a number. "As small as possible" isn't a budget.
Embrace Server Components properly
Server Components are Next.js's answer to bundle bloat, and they work - but only if you use them correctly.
The pattern I see constantly:
"use client"; // Slapped on because useState is somewhere in here
import { useState } from "react";

export default function ProductPage({ product }) {
  const [quantity, setQuantity] = useState(1);
  return (
    <div>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      {/* 500 lines of product details */}
      <QuantitySelector value={quantity} onChange={setQuantity} />
      <AddToCartButton productId={product.id} quantity={quantity} />
    </div>
  );
}
That "use client" directive means the entire component - all 500 lines of product details - gets shipped to the browser as JavaScript. For two interactive elements.
The fix:
// app/product-page.jsx - Server Component, zero JavaScript shipped
import ProductActions from "./product-actions";

export default function ProductPage({ product }) {
  return (
    <div>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      {/* 500 lines of product details - rendered on server */}
      <ProductActions productId={product.id} />
    </div>
  );
}

// app/product-actions.jsx - Client Component, only ships the interactive part
"use client";
import { useState } from "react";

export default function ProductActions({ productId }) {
  const [quantity, setQuantity] = useState(1);
  return (
    <>
      <QuantitySelector value={quantity} onChange={setQuantity} />
      <AddToCartButton productId={productId} quantity={quantity} />
    </>
  );
}
Push "use client" as far down the component tree as possible. The less code inside client components, the less JavaScript you ship.
The bigger picture
Vinext matters less as a product and more as a proof point.
For years, we've accepted that modern web frameworks produce large bundles. That's just the cost of good developer experience. That's just how React works. That's just the tradeoff.
Cloudflare just demonstrated that it's not. The same features, the same API, 57% less JavaScript. The bloat wasn't inevitable - it was a choice.
This puts pressure on Vercel to respond. Competition is good. Maybe Next.js 17 ships with a "minimal runtime" mode. Maybe Turbopack gets optimized for output size. Maybe the defaults change.
It also validates alternative approaches. Astro has been preaching the "ship less JavaScript" gospel for years. Remix took a different architectural path. Now Vinext shows you can have Next.js's API without Next.js's bundle size.
The era of accepting framework bloat as unavoidable is ending. Users on slow connections deserve better. Your Core Web Vitals scores demand better. And now we have proof that better is possible.
What happens next
Vinext will mature or it won't. Cloudflare will push it as a Vercel alternative, Vercel will likely respond with optimizations, and developers will have more choices.
In the meantime, the action item is clear: measure your bundles, set budgets, and stop accepting megabytes of JavaScript as normal.
The 57% reduction Cloudflare achieved with a different architecture? You can probably get 30-40% just by auditing your dependencies and using Server Components correctly. That's the difference between passing and failing Core Web Vitals for millions of users on real-world devices.
Don't wait for framework wars to sort themselves out. The performance improvements are available today, with the tools you already have.
Related reading:
- Next.js Performance Optimization: Fix Core Web Vitals Issues - The specific patterns that hurt your scores
- How to Improve Your LCP Score - Deep dive into Largest Contentful Paint
- Third-Party Scripts Are Killing Your Core Web Vitals - The other half of the bundle problem