Rezha Julio

Stop Using innerHTML: The New Firefox Feature That Kills XSS


We’ve all done it. You get some HTML string from an API (or worse, user input), and you need to render it on the screen. The quickest, dirtiest, and most common way? Good old innerHTML.

// The classic footgun
const userBio = getUserInput();
document.getElementById('profile').innerHTML = userBio;

It works instantly, but it’s also the primary reason Cross-Site Scripting (XSS) vulnerabilities have plagued the web for two decades. If userBio contains <script>stealCookies()</script> or <img src="x" onerror="alert('hacked')">, your app just executed malicious code.

For years, the solution was to bring in heavy third-party libraries like DOMPurify. But as of Firefox 148, the browser finally gives us a native, built-in solution: The Sanitizer API and setHTML().

Enter setHTML()

The new setHTML() method is essentially a drop-in replacement for innerHTML, but with one important difference: it runs the HTML string through a native sanitizer before it gets attached to the DOM.

Instead of this:

el.innerHTML = dirtyString;

You now do this:

el.setHTML(dirtyString);

That’s it. No matter what sanitizer config you pass, setHTML() will always strip out XSS-unsafe elements (<script>, <iframe>, <frame>, <embed>, <object>, and SVG <use>) and inline event handlers (like onclick or onerror). Even if you explicitly allow <script> in your config, it still gets removed. Safe by default, no exceptions.
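A quick sketch of that guarantee (browser-only, so it's wrapped in a feature guard that lets the snippet load harmlessly anywhere else): even an explicit allow entry for <script> is ignored.

```javascript
// Even with <script> explicitly in the allow list, setHTML() strips it.
// The guard skips the demo in environments without the Sanitizer API.
let sanitizedMarkup = null;
if (typeof Element !== 'undefined' && 'setHTML' in Element.prototype) {
  const el = document.createElement('div');
  el.setHTML('<p>hi</p><script>evil()</script>', {
    sanitizer: { elements: ['p', 'script'] } // <script> allowed on purpose
  });
  sanitizedMarkup = el.innerHTML; // still just "<p>hi</p>"
}
```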

But the default sanitizer goes further than just blocking scripts. It also strips out elements like <img>, <style>, <form>, <input>, <button>, <video>, <template>, custom elements, data- attributes, and more. Basically, if it’s not a simple content element (headings, paragraphs, lists, etc.), it’s gone.

How does it handle malicious input?

Let’s look at what happens when an attacker tries to inject something nasty:

const attackerPayload = `
<h1>Hello!</h1>
<script>alert('Stealing tokens...');</script>
<img src="cute-cat.jpg" onload="sendData()">
<a href="javascript:evil()">Click me</a>
`;
const container = document.getElementById('content');
container.setHTML(attackerPayload);
console.log(container.innerHTML);
// Output:
// <h1>Hello!</h1>
// <a>Click me</a>

Notice what happened:

  1. The <script> tag was stripped out entirely.
  2. The <img> was removed completely — the default sanitizer doesn’t allow it.
  3. The href="javascript:..." was stripped from the <a> tag.

If you want the most permissive mode — only strip XSS-unsafe elements, keep everything else — pass an empty config:

container.setHTML(attackerPayload, { sanitizer: {} });
// Output:
// <h1>Hello!</h1>
// <img src="cute-cat.jpg">
// <a>Click me</a>

Now the <img> stays, but the onload handler and javascript: URL are still gone. You can’t sneak XSS through setHTML(), period.

All of this happens natively inside the browser engine, without needing to ship a JavaScript sanitizer library to your users.

Can we customize the Sanitizer?

Yes! You can build your config two ways: as an allow list (only these elements get through) or a remove list (everything except these gets through):

// Allow list: only allow these elements
const mySanitizer = new Sanitizer({
  elements: ['p', 'em', 'strong', 'a'],
  attributes: ['href']
});
el.setHTML(dirtyString, { sanitizer: mySanitizer });

// Remove list: allow everything except these
const strictSanitizer = new Sanitizer({
  removeElements: ['img', 'table', 'style']
});
el.setHTML(dirtyString, { sanitizer: strictSanitizer });

You can also use the Sanitizer object’s methods to build configs programmatically:

const sanitizer = new Sanitizer({});
sanitizer.allowElement('p');
sanitizer.allowElement('em');
sanitizer.allowAttribute('href');

Why not just keep using DOMPurify?

DOMPurify works great and has been the go-to for years. But moving this responsibility to the browser has real advantages:

  1. Zero Bundle Size: You ship less JavaScript.
  2. Performance: Native browser code is almost always faster than parsing and mutating the DOM via JavaScript.
  3. Always Up-to-Date: Browsers update automatically to patch new, obscure XSS vectors. You don’t have to worry about bumping your npm dependencies every time a new bypass is discovered.
  4. Context Awareness: The browser inherently understands its own DOM parsing quirks better than a third-party library can.

Browser Support & The Path Forward

Right now, Firefox 148 is the first browser to ship this enabled by default (released February 24, 2026). Chrome has an implementation available in Canary behind a flag. Safari hasn’t started implementation yet, though the WebKit team has expressed a positive position on the spec.

One thing worth noting: setHTMLUnsafe() — the version that doesn’t enforce XSS-safety — has had cross-browser support since 2024. So the unsafe counterpart is available now, and the safe version is catching up.

So, what should you do today?

If you’re building modern web apps, start keeping an eye on your innerHTML usage. You can’t safely switch 100% of your codebase to setHTML() in production just yet, but the days of blindly dumping strings into the DOM are numbered.
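Until then, one pragmatic pattern is a small wrapper that uses setHTML() where it exists and falls back to something safe everywhere else. A minimal sketch — the helper name safeSetHTML and the plain-text fallback are my choices, not part of any API:

```javascript
// Prefer the native sanitizer when present; otherwise render the string
// as inert text rather than risking innerHTML. In a real app the fallback
// could be a library like DOMPurify instead of textContent.
function safeSetHTML(el, html) {
  if (typeof el.setHTML === 'function') {
    el.setHTML(html);      // native path: sanitized by the browser
  } else {
    el.textContent = html; // conservative fallback: no HTML is interpreted
  }
}
```

The trade-off in the fallback is deliberate: rendering markup as visible text is ugly, but it can never execute anything.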

Bonus: Trusted Types and Document.parseHTML()

Firefox 148 also ships two related features worth knowing about:

Document.parseHTML() is a companion method that parses an HTML string into a full Document object (instead of injecting into an existing element). Same sanitization rules apply — XSS-unsafe content is always stripped.
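In use it looks something like this (browser-only, so again guarded to be inert elsewhere):

```javascript
// Parse untrusted markup into a detached, already-sanitized Document
// instead of touching the live DOM. Requires the Sanitizer API.
let parsedBody = null;
if (typeof Document !== 'undefined' && typeof Document.parseHTML === 'function') {
  const doc = Document.parseHTML('<h1>Title</h1><script>evil()</script>');
  parsedBody = doc.body.innerHTML; // the <script> never makes it in
}
```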

Trusted Types is a separate API that lets you lock down all dangerous sinks (innerHTML, outerHTML, document.write, etc.) at the CSP level. You set a Content-Security-Policy header:

Content-Security-Policy: require-trusted-types-for 'script'

After that, passing a raw string to innerHTML throws a TypeError. You have to go through a policy function first. Combined with setHTML(), you get defense in depth: Trusted Types forces developers through audited code paths, and setHTML() makes sure the output is safe regardless.
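As a sketch of what that policy function can look like — the policy name, the makeHtmlPolicy helper, and the toy regex sanitizer below are all illustrative placeholders, not part of the Trusted Types API:

```javascript
// Route every HTML sink through one audited policy. When Trusted Types
// isn't enforced, fall back to a plain object with the same shape so app
// code can call policy.createHTML() either way. Never ship the toy regex
// below as a real sanitizer; use setHTML() or DOMPurify in practice.
function makeHtmlPolicy(tt, sanitize) {
  if (tt && typeof tt.createPolicy === 'function') {
    return tt.createPolicy('app-html', { createHTML: sanitize });
  }
  return { createHTML: sanitize };
}

const policy = makeHtmlPolicy(
  typeof trustedTypes !== 'undefined' ? trustedTypes : null,
  (s) => s.replace(/<script[\s\S]*?<\/script>/gi, '') // toy example only
);
// el.innerHTML = policy.createHTML(untrusted); // allowed under the CSP
```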

If you want to play with the API right now, Mozilla has a Sanitizer API playground where you can test different configs and see what gets stripped.

Once the other browsers catch up, there won’t be a good reason to reach for innerHTML again.

