Progressive Enhancement for Interactive Sites: When JavaScript Fails

Published on May 8th, 2026 · 12 min read
development · UX · accessibility

Bundle timeout, CDN outage, corporate browser blocking scripts, runtime error breaking the entire page: JavaScript fails more often than people think. How to build interactive sites that stay usable when the JS layer drops.

[Image: white zigzag upward arrow on a dark blue background, symbolizing progress despite obstacles]

A visitor lands on your site from a train. The 380 KB JavaScript bundle starts loading. The tunnel cuts the connection at 60%. The HTML arrived, the CSS too, but the JS will never arrive to hydrate the page. On most modern sites, this visitor sees a blocked page: the burger menu doesn't open, the contact form doesn't submit, internal links point to routes that expect React Router.

This scenario isn't an edge case. It's what a non-trivial slice of visitors experience without any metric flagging it. JavaScript doesn't fail with a visible error. It fails silently, and with it, the entire experience collapses.

Progressive enhancement is the approach of building a site in layers: HTML that works on its own, CSS that improves presentation, JavaScript that adds interactivity. When the upper layer fails, the site stays usable. It's neither a return to static sites nor a compromise on creative ambition. It's the difference between a site that survives the reality of the internet and one that collapses as soon as conditions degrade.

Today we're dissecting this approach in the context of modern sites: how often JavaScript actually fails, how to build common interactive patterns so they survive that failure, and how today's ecosystem (Server Components, App Router, native forms) makes the approach simpler than it has ever been.

JavaScript fails more often than we measure

A historical study by the UK Government Digital Service (GDS) on GOV.UK established that roughly 1 in 100 visitors loads a page without JavaScript executing. 1% of traffic isn't a marginal case: for a site with 50,000 monthly visits, that's 500 people per month seeing a potentially broken page. If those visitors were on the path to becoming customers, those are conversions lost without a single error in the console.

The documented causes of this failure are many, and most have nothing to do with a user voluntarily disabling JavaScript:

  • The JS bundle that times out on a slow or unstable connection. Mobile in rural areas, trains, hotels, conference venues. The HTML loads in 2 seconds, the 400 KB JS takes 30 seconds or never arrives.
  • The CDN that goes down or gets blocked. A Cloudflare outage, a corporate firewall filtering third-party domains, an aggressive ad-blocker blocking a tracking script bundled with yours.
  • An uncaught runtime error that crashes the application after hydration. A single unhandled undefined in a dependency, and the whole page goes inert.
  • An older or alternative browser: in-app browsers in native applications, console browsers, screen readers with partial JS execution, secondary search engine parsers.
  • An AI agent parsing your page: ChatGPT Search, Perplexity, Claude, AI Overview crawlers. Many don't execute JavaScript at all or execute it partially.
  • Corporate policies blocking external JavaScript: public services, hospitals, banks, sensitive environments. These users represent high-value B2B prospects.

On median 4G mobile, parsing and executing 400 KB of JavaScript takes between 1.8 and 3 seconds on a mid-range device, according to web.dev benchmarks. Combined with degraded network conditions, the window where the site is technically loaded but functionally blocked can stretch to 10 seconds.

The three-layer logic

Progressive enhancement rests on a simple hierarchy. Three layers stack, each enhancing the previous one without making it required.

Layer | Role | If it fails
HTML | Structure and content. The meaning of the page. | The page doesn't exist. No way out.
CSS | Presentation, layout, visual identity. | The page stays readable but raw. All content remains accessible.
JavaScript | Interactivity, animations, rich experiences. | Enhancements disappear. Critical functions keep working through native HTML behaviors.

The three layers of progressive enhancement and their respective resilience

The rule is the inverse of what you see in many modern codebases. A typical React site starts from JavaScript and goes down: everything is in components, the served HTML is an empty shell, the CSS loads through JS. When JS fails, nothing is left. Progressive enhancement starts from HTML and goes up: HTML carries meaning, CSS dresses it, JavaScript enriches it.

This logic deeply changes how you think about a component. A dropdown isn't a <div> made clickable by React. It's a native <details>, or a <button> opening a <ul>, that transforms into a richer interaction when JS is available. The default behavior already exists in the browser. JavaScript only enhances it.
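
As a sketch of that idea (the menu items are illustrative), a dropdown built on <details> opens and closes natively without any JavaScript; the client-side effect only adds close-on-outside-click on top.

tsx
'use client';
import { useEffect, useRef } from 'react';

// <details> toggles with zero JavaScript; the effect below is pure enhancement.
export function ServicesDropdown() {
  const detailsRef = useRef<HTMLDetailsElement>(null);

  useEffect(() => {
    // Enhancement only: close the menu when clicking outside of it.
    const onClick = (event: MouseEvent) => {
      const details = detailsRef.current;
      if (details?.open && !details.contains(event.target as Node)) {
        details.open = false;
      }
    };
    document.addEventListener('click', onClick);
    return () => document.removeEventListener('click', onClick);
  }, []);

  return (
    <details ref={detailsRef}>
      <summary>Services</summary>
      <ul>
        {/* Real links: functional without JS, keyboard accessible */}
        <li><a href="/services/design">Design</a></li>
        <li><a href="/services/development">Development</a></li>
      </ul>
    </details>
  );
}

A details-based dropdown: native behavior first, JS on top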

Pattern 1: forms that work without JavaScript

The form is the most sensitive pivot point on a site. It's the moment a visitor becomes a prospect. If JavaScript fails at that exact moment, you lose the conversion even though the visitor was ready to act.

The base rule is that a standard HTML form works without a single line of JavaScript. With an action attribute pointing to an endpoint, a method="post" attribute, fields with name attributes and a submit button, the browser knows how to send the request, follow the redirect and display the confirmation page. That's been the default behavior of the HTML form since 1995.

tsx
// Component that works without JavaScript and enhances with it
export default function ContactForm() {
  return (
    <form action="/api/contact" method="post">
      <label htmlFor="email">Email</label>
      <input
        id="email"
        name="email"
        type="email"
        required
        autoComplete="email"
      />

      <label htmlFor="message">Message</label>
      <textarea id="message" name="message" required minLength={10} />

      <button type="submit">Send request</button>
    </form>
  );
}

// On the server, the endpoint handles both cases:
// - without JS: redirects to /thank-you (303)
// - with JS (client-side fetch): returns JSON
export async function POST(request: Request) {
  const formData = await request.formData();
  await saveContact(formData);

  const accept = request.headers.get('accept') || '';
  if (accept.includes('application/json')) {
    return Response.json({ ok: true });
  }
  return Response.redirect(new URL('/thank-you', request.url), 303);
}

The base form: works without JS, enriched with JS

With this pattern, the form does its job in every case. If JavaScript loads, the client can intercept the submission to provide instant feedback, validate in real time and show a toast. If JavaScript fails, the browser takes over: classic POST, redirect to the confirmation page, less fluid experience but fully functional.

Next.js Server Actions and Remix form actions fit directly into this logic. A server function attached to a form works without client-side JavaScript (classic POST) then transforms into an optimized client-side call once hydration is done. Same code, behavior adapts to what's available.

tsx
// app/contact/page.tsx
import { redirect } from 'next/navigation';

async function submitContact(formData: FormData) {
  'use server';

  const email = formData.get('email')?.toString();
  const message = formData.get('message')?.toString();

  if (!email || !message) {
    redirect('/contact?error=invalid');
  }

  await saveContact({ email, message });
  redirect('/thank-you');
}

export default function ContactPage() {
  return (
    <form action={submitContact}>
      <input name="email" type="email" required />
      <textarea name="message" required />
      <button type="submit">Send</button>
    </form>
  );
}

Next.js Server Action: progressive enhancement by default

This code works without a single line of client-side JavaScript. Next.js serializes the action into an endpoint, the form sends a native POST, the redirect happens server-side. When JS is available, the framework optimizes without changing the code. It's the exact opposite of the classic React form that depends entirely on client-side onSubmit.

Pattern 2: navigation that doesn't depend on JavaScript

A common trap in SPA sites: replacing <a href="..."> tags with <div onClick={navigate}>. Without JavaScript, these "links" are dead. No URL on hover, no right-click to open in a new tab, no navigation, no proper indexing by crawlers, and zero keyboard accessibility.

The rule is non-negotiable: anything that takes the visitor to another page is an <a href>. The router can intercept the click for client-side navigation when JS is available, but the element stays a standard HTML link.

tsx
// The Next.js Link component renders a real <a href> tag.
// If JS loads, it intercepts to do client-side routing.
// If JS fails, the browser follows the href in classic navigation.
import Link from 'next/link';
// Needed by the anti-pattern below (which would also require 'use client'):
import { useRouter } from 'next/navigation';

export function Navigation() {
  return (
    <nav>
      <Link href="/services">Services</Link>
      <Link href="/work">Work</Link>
      <Link href="/contact">Contact</Link>
    </nav>
  );
}

// Anti-pattern to avoid: the "link" as a div
export function BrokenNavigation() {
  const router = useRouter();
  return (
    <nav>
      <div onClick={() => router.push('/services')}>Services</div>
      {/* Without JS: zero function. No URL. Zero accessibility. */}
    </nav>
  );
}

Next.js link: intercepted with JS, functional without

The same principle applies to buttons. A button that triggers a user action is a <button>. A button that takes you elsewhere is an <a href> styled as a button. This semantic distinction has direct consequences: an <a href> works on right-click, on Cmd/Ctrl+click, on keyboard focus, and is understood by every agent (screen readers, crawlers, AI).
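
A two-line illustration (the button class name is a styling assumption, not a framework feature):

tsx
// Action on the current page: a real <button>
export const SubmitButton = () => <button type="submit">Send request</button>;

// Navigation: a real <a href>, merely styled as a button via CSS
export const ContactCta = () => (
  <a href="/contact" className="button">Start a project</a>
);

Semantics first: <button> for actions, <a href> for navigation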

Pattern 3: interactive components with native fallback

Most common interactive components today have a native HTML equivalent that can be used as a foundation. JavaScript only serves to enrich the experience on top.

Component | Native HTML element | JS enhancement
Accordion / FAQ | <details> + <summary> | Open animation, shared state management
Modal / Dialog | <dialog> + showModal() | Advanced focus trap, transitions
Dropdown | <select> or styled <details> | Search, multi-select, autocomplete
Date picker | <input type="date"> | Custom calendar with available slots
Tooltip | title attribute or popovertarget | Smart positioning, rich content
Tabs | Anchor links + sections | Switch without reload, animations

Common components and their native HTML equivalent available in 2026

The <dialog> element is a great example of this evolution. Natively available in every major browser since 2022, it handles modal opening, focus, ARIA accessibility and Escape-to-close without a single line of additional JavaScript. JS is only used to call showModal().

tsx
'use client';
import { useRef } from 'react';

export function ContactDialog() {
  const dialogRef = useRef<HTMLDialogElement>(null);

  return (
    <>
      {/* Without JS: the href opens /contact full-page (functional fallback) */}
      <a
        href="/contact"
        onClick={(e) => {
          // With JS: we intercept to open the modal
          if (dialogRef.current) {
            e.preventDefault();
            dialogRef.current.showModal();
          }
        }}
      >
        Contact us
      </a>

      <dialog ref={dialogRef}>
        <form method="dialog">
          <h2>Let's talk about your project</h2>
          {/* form content */}
          <button type="submit">Close</button>
        </form>
      </dialog>
    </>
  );
}

Native modal with dialog: minimal enhancement

The key pattern is the href="/contact" attribute on the trigger. Without JavaScript, the click follows the link and takes the user to the dedicated contact page. With JavaScript, the event is intercepted and the modal opens. The functional outcome is equivalent: the visitor can reach out in every case.

Pattern 4: animations that degrade gracefully

JavaScript-driven animations are the most frequent source of poorly handled graceful degradation. A GSAP animation that doesn't load often leaves elements in their initial state: permanent opacity: 0, frozen transform: translateY(50px). The content exists in the DOM but isn't visible.

The principle to apply is simple: the default state of an animated element should be the final state, not the initial state. Animation is an enhancement that goes from "not visible" to "visible". If JS fails, the element stays in its final state, hence visible. It's the opposite of what many animation libraries encourage by default.

css
/* Anti-pattern: if the animation JS doesn't load, the element stays invisible */
.fade-in-section {
  opacity: 0;
  transform: translateY(40px);
  /* JS adds the .is-visible class on scroll. If JS fails: invisible element. */
}

.fade-in-section.is-visible {
  opacity: 1;
  transform: translateY(0);
  transition: opacity 0.8s, transform 0.8s;
}

Fragile approach: invisible by default, JS must execute

css
/* Recommended pattern: visible by default, animation gated by a "js-loaded" class */
.fade-in-section {
  opacity: 1;
  transform: none;
}

/* JS adds a "js-loaded" class on <html> as soon as it executes */
.js-loaded .fade-in-section:not(.is-visible) {
  opacity: 0;
  transform: translateY(40px);
}

.js-loaded .fade-in-section.is-visible {
  opacity: 1;
  transform: translateY(0);
  transition: opacity 0.8s, transform 0.8s;
}

/* Bonus: respect prefers-reduced-motion. The selector must match the
   specificity of the rules above, or the transition still applies. */
@media (prefers-reduced-motion: reduce) {
  .js-loaded .fade-in-section {
    opacity: 1;
    transform: none;
    transition: none;
  }
}

Resilient approach: visible by default, conditional animation

Setting a js-loaded class on <html> at the very start of loading is one of the most robust patterns. If JS doesn't execute, the class is never added, the selectors don't apply, and the content stays visible. If JS executes, the class arrives before the first paint and the animations take over. Combine it with CSS Scroll-Driven Animations for cases where the animation can be delegated to native CSS.
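
One way to set it, sketched here as an inline script at the top of a Next.js root layout (the placement is one option among several; a classic variant swaps a no-js class for js):

tsx
// app/layout.tsx
import type { ReactNode } from 'react';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* Blocking inline script: runs before the content below paints.
            If JS never executes, the class is never added and everything
            stays visible by default. */}
        <script
          dangerouslySetInnerHTML={{
            __html: "document.documentElement.classList.add('js-loaded');",
          }}
        />
        {children}
      </body>
    </html>
  );
}

Setting js-loaded before first paint: no JS, no class, content stays visible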

Server Components: progressive enhancement industrialized

React Server Components and the Next.js 15 App Router architecture change the game on this topic. Before, the argument against progressive enhancement in a typical SPA was valid: all rendering happens client-side, so reproducing a static HTML fallback required dedicated effort.

With Server Components, HTML is rendered server-side by default. The page sent to the browser already contains all the structural content. Interactive elements are tagged 'use client' and represent an enhancement layer on top of the server render. The progressive enhancement logic is baked into the framework architecture.

  • Server rendering is the default. Any component without 'use client' produces pure HTML server-side. No dependency on hydration to display content.
  • Server Actions replace client-side fetches. A form with action={serverAction} works as a native POST without JS, then transforms into an optimized call once hydrated.
  • Streaming and Suspense allow progressive content display without waiting for the entire page to be ready. If hydration fails afterwards, the already-rendered content stays.
  • Client boundaries are islands of interactivity in an ocean of server HTML. A crash in one island doesn't break the rest of the page.

The trap, conversely, is wrapping the entire application in a top-level 'use client' to gain flexibility. This practice cancels the entire benefit of Server Components and brings the application back to the JS-dependent SPA model. The rule is to push client boundaries as low as possible in the tree.
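
A sketch of that boundary placement (file names and the LikeButton island are hypothetical):

tsx
// app/services/LikeButton.tsx — the client island, pushed as low as possible
'use client';
import { useState } from 'react';

export function LikeButton() {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes((n) => n + 1)}>Likes: {likes}</button>;
}

// app/services/page.tsx — server component by default: pure HTML, no hydration
import { LikeButton } from './LikeButton';

export default function ServicesPage() {
  return (
    <main>
      {/* Server-rendered: present even if JS never arrives */}
      <h1>Our services</h1>
      <p>Design, development, strategy.</p>
      {/* The only subtree that needs JS; a crash here doesn't break the rest */}
      <LikeButton />
    </main>
  );
}

Client islands in server HTML: the boundary sits on the leaf, not the page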

Testing in degraded conditions: the method

Progressive enhancement isn't verified by opinion. It's verified by simulating conditions where JavaScript fails. Here's the protocol to test an existing site.

Test 1: Chrome DevTools, full JS disable

Open DevTools, Cmd/Ctrl+Shift+P, type "Disable JavaScript" and enable the option. Reload the page. Browse the site in this state: do links work? Can the contact form be submitted? Does the main navigation open? Is the content of each page readable?

This test immediately reveals components that depend entirely on JS to exist. If a whole region of the page disappears or a critical button no longer responds, that's a fragility point.

Test 2: extreme network throttling + timeout simulation

Still in DevTools, Network tab, select "Slow 3G" then manually block the URL of the main JS bundle (right-click on the request, "Block request URL"). Reload the page. Does the site stay usable? Is content accessible? Do internal links lead somewhere?

This test most faithfully simulates the real scenario: HTML and CSS arrived, JS never came. Many sites pass test 1 but fail here because of inline scripts that depend on the blocked bundle.

Test 3: view-source: to verify content without rendering

Prefix the URL with view-source: in Chrome. Verify that the raw HTML actually contains the page's main textual content. If the <body> tag is empty or only contains a <div id="root">, the content only exists in JS. That's the ultimate red flag: neither crawlers, nor AI, nor users without JS will ever see that content.
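
The same check is easy to automate; a rough sketch in TypeScript (URL and marker string are placeholders to adapt):

ts
// check-raw-html.ts — fails if the content only exists after JS runs
const res = await fetch('https://example.com/services');
const html = await res.text();

// A string that must appear in the server-rendered HTML
const marker = 'Our services';

if (!html.includes(marker)) {
  console.error('Red flag: marker absent from the raw HTML.');
  process.exit(1);
}
console.log('OK: content present before any JavaScript runs.');

Automated view-source check: the content must exist in the raw HTML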

Test 4: a screen reader on critical flows

VoiceOver on macOS, NVDA on Windows. Test the main navigation, opening a modal, submitting a form. A site well-built with progressive enhancement naturally passes the accessibility criteria for interactions, because native elements already have everything they need. Conversely, 100% JS components require dozens of ARIA attributes to catch up with what native HTML does for free.

The 6 most frequent mistakes

  • The clickable <div> instead of a <button> or <a>. Without JS: zero function. With JS: no default keyboard focus, no implicit role, degraded accessibility. HTML semantics exist for a reason.
  • Default opacity 0 in CSS for animated elements. If the JS animation doesn't trigger, the content stays invisible. Always start from the final visible state and gate the animation behind a js-loaded class.
  • event.preventDefault() before any check. Many handlers begin with e.preventDefault() without considering that the default behavior would be a functional fallback. If the code fails after preventDefault, the native behavior is lost with nothing in its place. A guarded version is sketched after this list.
  • Forms without an action attribute. A form with a client-side onSubmit but no action is dead if JS doesn't execute. Always define a server fallback endpoint, even if the nominal experience is fully client-side.
  • Over-reliance on client routing. A client-side router can break every internal link if JS fails. Verifying that each URL works in direct navigation (pasting the URL in a new tab without context) is a simple and revealing test.
  • Treating progressive enhancement as a cost. The classic argument is "we don't have time to handle the 1% without JS". In reality, following the pattern produces simpler, more accessible, more indexable and faster code. The cost is in the learning, not in the execution.
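
For the preventDefault point, a guarded version keeps the native submit as a safety net (endpoint and redirect target are illustrative):

tsx
'use client';

// The native POST remains the fallback: the default is only suppressed
// on a path that restores it on failure.
export function EnhancedContactForm() {
  return (
    <form
      action="/api/contact"
      method="post"
      onSubmit={(event) => {
        const form = event.currentTarget;
        event.preventDefault();
        fetch(form.action, { method: 'POST', body: new FormData(form) })
          .then((res) => {
            if (!res.ok) throw new Error('Request failed');
            window.location.assign('/thank-you');
          })
          // Enhanced path failed: replay the native submit
          // (form.submit() bypasses onSubmit, so no loop).
          .catch(() => form.submit());
      }}
    >
      <input name="email" type="email" required />
      <button type="submit">Send</button>
    </form>
  );
}

Guarded preventDefault: the native behavior comes back if the enhancement fails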

Progressive enhancement isn't a step backward

The classic objection to progressive enhancement is that it constrains interactive creativity. It doesn't. Visually ambitious sites (immersive portfolios, WebGL experiences, sophisticated micro-interactions) can follow this principle without sacrificing anything in their nominal experience. What changes is what happens when conditions aren't ideal.

A site that works in 100% of conditions delivers a better return: more converted visitors, more visibility in search engines and AI agents, real inclusivity beyond statements of intent, fewer production bugs, less customer support around "pages that don't display". Investing in a custom-built site protects these gains. Building in layers protects that investment against the failures that happen anyway.