Published on March 28, 2024

A clear look at blurry image placeholders on the web

By Wesley Luyten · 7 min read · Engineering & Video education

Nobody likes waiting for webpages to load, but there’s one group that’s annoyingly vocal about it: developers. I should know, I’m one of them.

Besides making the UI actually load faster, some devs also try to trick users into thinking the app is faster than it really is, with strategies like loading animations or skeleton placeholders.

There’s another way to improve the user experience and keep users sticking around slightly longer: blurry image placeholders, also known as Low-Quality Image Placeholders (LQIPs) or the “blur up image” technique.

Over the years, several implementations of this technique have come onto the scene. Let's look at the most widely used ones side by side on our comparison page and weigh the benefits and drawbacks of each. We’ll also share what helped us choose the right technique for our project, so you can do the same.

What’s a Low-Quality Image Placeholder?

So how did this blurry revolution begin? Facebook’s engineering team first introduced the technique in 2015, when they faced a problem in their native apps: the cover image was too large and too slow to load on some devices. Being presented with a solid color covering much of the page, which then suddenly switched to the image once it loaded, made for a jarring user experience.

The high-level solution is really simple: resize the original image down to a tiny one (around 40px wide) while maintaining the aspect ratio, compress it with JPEG encoding on the server, then upscale it on the client and apply a Gaussian blur filter.
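
If you’re curious what the server-side half can look like, here’s a minimal sketch using the Sharp image library (which comes up again later in this post). The helper name and the exact numbers are just illustrative.

js
// Minimal sketch: resize to ~40px wide (aspect ratio preserved), encode as a
// low-quality JPEG, and inline the result as a Base64 data URL.
import sharp from "sharp";

async function tinyJpegDataUrl(inputPath) {
  const buffer = await sharp(inputPath)
    .resize({ width: 40 }) // height is derived from the aspect ratio
    .jpeg({ quality: 70 })
    .toBuffer();
  return `data:image/jpeg;base64,${buffer.toString("base64")}`;
}

// The client then stretches this tiny image to the size of the real image
// and applies a Gaussian blur, as shown in the markup later in this post.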

Facebook implemented some extra tricks with the file header, allowing them to leave the majority of its contents out of the network payload and reconstruct it on the client — saving another chunk of data. This produced a payload of 200 bytes or smaller, small enough to send in the initial network request.
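
The post doesn’t spell out Facebook’s exact byte layout, but the idea can be sketched roughly like this, assuming a hypothetical shared-header.bin file that holds the fixed JPEG header bytes bundled with the client.

js
// Rough illustration only: for a fixed quality and fixed tiny dimensions the
// JPEG header bytes are identical across images, so they can ship with the
// app once and be stripped from every network payload.
import { readFileSync } from "node:fs";

// Hypothetical file containing the shared header bytes.
const SHARED_HEADER = readFileSync("shared-header.bin");

// Server: send only the bytes that follow the shared header.
function toPayload(tinyJpeg) {
  return tinyJpeg.subarray(SHARED_HEADER.length);
}

// Client: prepend the bundled header to get a decodable JPEG back.
function fromPayload(payload) {
  return Buffer.concat([SHARED_HEADER, payload]);
}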

Next.js uses this exact same technique in the next/image component to show a blurry placeholder while the actual image loads.
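
With a statically imported image, that usage looks something like the snippet below (file and component names are made up). Next.js generates the blurDataURL at build time; for remote images you’d supply your own blurDataURL.

jsx
import Image from "next/image";
import hero from "./hero.jpg"; // static import: Next.js creates the blur placeholder

export function Hero() {
  return (
    <Image
      src={hero}
      alt="Hero image"
      placeholder="blur" // show the tiny blurred version until the image loads
    />
  );
}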

Cross-browser support for the WebP file format has finally become standard. WebP provides even better compression than JPEG, resulting in a file size approaching just 100 bytes when the tiny image is at most 16x16 with a quality around 70%. For our use case this size provided sufficient detail. Going below 16px is not recommended, as WebP encodes everything in 16x16 blocks.
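
Swapping WebP into the earlier sketch is a small change; something like this, again using Sharp:

js
import sharp from "sharp";

// Same idea as the JPEG sketch above, but targeting a ~100 byte placeholder:
// at most 16px wide (WebP works in 16x16 blocks) at roughly 70% quality.
async function tinyWebpDataUrl(inputPath) {
  const buffer = await sharp(inputPath)
    .resize({ width: 16 })
    .webp({ quality: 70 })
    .toBuffer();
  return `data:image/webp;base64,${buffer.toString("base64")}`;
}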

Here’s an example with and without applying the Gaussian blur:

The LQIP approach works great on the web, where the tiny image is Base64 encoded and used as the source for an SVG <image> element. A Gaussian blur filter is added to the SVG, and the whole markup in turn is used as a data URL for an HTML <img> or a CSS background-image.

html
<img
  src="https://image.mux.com/m00b01mJ2BQP4GMYXKoOmgRdnHELCPpYFtIO52h01l9ozY/thumbnail.webp"
  style="
    color: transparent;
    background-size: cover;
    background-position: center;
    background-repeat: no-repeat;
    background-image: url('data:image/svg+xml;charset=utf-8,\
      <svg xmlns=&quot;http://www.w3.org/2000/svg&quot;>\
        <filter id=&quot;b&quot; color-interpolation-filters=&quot;sRGB&quot;>\
          <feGaussianBlur stdDeviation=&quot;20&quot;/>\
          <feComponentTransfer>\
            <feFuncA type=&quot;discrete&quot; tableValues=&quot;1 1&quot;/>\
          </feComponentTransfer>\
        </filter>\
        <g filter=&quot;url(%23b)&quot;>\
          <image width=&quot;100%&quot; height=&quot;100%&quot; href=&quot;data:image/webp;base64,UklGRmwAAABXRUJQVlA4IGAAAAAQAgCdASoQAAgAAQAcJbACdLoAAwi2bUSAAP74Qu6oOFirJlY8OZVMZBXX2e9f/SRsDS2UEX0Lxo/JvCWyFRzjaBYzny/6POMaoi3hj6+5/8zllIyezfJeEHJ/ROthhAA=&quot;/>\
        </g>\
      </svg>\
    ');
  "
  decoding="async"
/>

Let’s break the HTML snippet down. The first part is an <img> element with a source URL pointing to the full-resolution image, which is loaded asynchronously and can take some time depending on network conditions.

html
<img src="https://image.mux.com/m00b01mJ2BQP4GMYXKoOmgRdnHELCPpYFtIO52h01l9ozY/thumbnail.webp" decoding="async" />

The next part is the image’s style attribute, which shows a centered, non-repeating background image covering the element: in our case, the blurry image placeholder. This background is rendered immediately on page load.

html
style="color:transparent;background-size:cover;background-position:center;background-repeat:no-repeat;background-image:url('')"

Finally, the last part is the data URL that holds an SVG containing the 16px-wide WebP image (Base64 encoded) and an SVG Gaussian blur filter that smooths out any compression artifacts. Check out this CSS-Tricks article for the details on the SVG filter.

html
data:image/svg+xml;charset=utf-8,\
  <svg xmlns=&quot;http://www.w3.org/2000/svg&quot;>\
    <filter id=&quot;b&quot; color-interpolation-filters=&quot;sRGB&quot;>\
      <feGaussianBlur stdDeviation=&quot;20&quot;/>\
      <feComponentTransfer>\
        <feFuncA type=&quot;discrete&quot; tableValues=&quot;1 1&quot;/>\
      </feComponentTransfer>\
    </filter>\
    <g filter=&quot;url(%23b)&quot;>\
      <image width=&quot;100%&quot; height=&quot;100%&quot; href=&quot;data:image/webp;base64,UklGRmwAAABXRUJQVlA4IGAAAAAQAgCdASoQAAgAAQAcJbACdLoAAwi2bUSAAP74Qu6oOFirJlY8OZVMZBXX2e9f/SRsDS2UEX0Lxo/JvCWyFRzjaBYzny/6POMaoi3hj6+5/8zllIyezfJeEHJ/ROthhAA=&quot;/>\
    </g>\
  </svg>

BlurHash: making blurry images even smaller

A few years later, the engineering team over at Wolt came up with an even more compact representation of a blurry image: a short string of only 20 to 30 characters (roughly that many bytes).

Hey, we’ve been using the BlurHash library for showing snazzy video poster placeholders!

So how did they condense it into such a neat little package? BlurHash applies a simple Discrete Cosine Transform to the image data, keeping only the key components, and then encodes those components using a Base83 encoding with a character set that is JSON-, HTML-, and shell-safe.
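
To give a feel for the API, here’s roughly what encoding looks like with the blurhash npm package, using Sharp to get raw RGBA pixels first. The 4x3 component count is a common choice that lands in the 20-30 character range, not a requirement.

js
import sharp from "sharp";
import { encode } from "blurhash";

// Downscale, grab raw RGBA pixels, and encode them as a short Base83 string.
async function toBlurhash(inputPath) {
  const { data, info } = await sharp(inputPath)
    .resize(32, 32, { fit: "inside" })
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });

  return encode(new Uint8ClampedArray(data), info.width, info.height, 4, 3);
}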

ThumbHash: more detail in the same dainty parcel

More recently, a similar library called ThumbHash was released that encodes even more detail in the same space and adds other benefits, like encoding the image’s aspect ratio and supporting images with an alpha channel (transparency).
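
The ThumbHash JavaScript package has a similar shape of API; roughly like this, keeping in mind that ThumbHash expects a small RGBA bitmap (100x100 or less) as input:

js
import sharp from "sharp";
import { rgbaToThumbHash, thumbHashToDataURL } from "thumbhash";

// Downscale to at most 100x100, grab raw RGBA pixels, and hash them.
async function toThumbhash(inputPath) {
  const { data, info } = await sharp(inputPath)
    .resize(100, 100, { fit: "inside" })
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });

  return rgbaToThumbHash(info.width, info.height, data);
}

// Later, the binary hash can be turned back into a blurry image, with the
// aspect ratio and any transparency preserved:
// const dataUrl = thumbHashToDataURL(hash);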

This all sounds pretty great to us developers, and seems so to the whole industry — more efficient compression must be better, right? It depends ...

More compression, more problems

One big drawback is that these formats need to be decoded on the client side — and, unlike WebP, JPEG, and blur filters, no client supports them natively. For the web, this means you’ll always need JavaScript to decode the placeholder, which partly defeats the purpose of showing a placeholder while things load. Generally, that JavaScript runs only after the HTML has been parsed and rendered, so the placeholder images wouldn't be displayed immediately.
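
For example, rendering a BlurHash in the browser means running something along these lines before the placeholder can show up at all:

js
import { decode } from "blurhash";

// Decode the hash into raw pixels and paint them onto a canvas. None of this
// can happen until the JavaScript has been downloaded, parsed, and executed.
function renderBlurhash(hash, width = 32, height = 32) {
  const pixels = decode(hash, width, height);
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d");
  const imageData = ctx.createImageData(width, height);
  imageData.data.set(pixels);
  ctx.putImageData(imageData, 0, 0);
  return canvas; // stretch this over the real image's box with CSS
}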

What if instead you transform the blurhash to a Base64 image before sending it to the client? This approach does work, and is exactly what we were doing with next-video (along with many devs before us).
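
Sketched out (this is the idea, not next-video’s actual code), that conversion looks something like this: decode the hash to raw pixels on the server, then re-encode those pixels as a tiny image.

js
import sharp from "sharp";
import { decode } from "blurhash";

// Decode the blurhash into RGBA pixels, re-encode them as a tiny WebP, and
// inline the result as a Base64 data URL that ships with the HTML.
async function blurhashToDataUrl(hash, width = 32, height = 32) {
  const pixels = decode(hash, width, height);
  const buffer = await sharp(Buffer.from(pixels), {
    raw: { width, height, channels: 4 },
  })
    .webp({ quality: 70 })
    .toBuffer();
  return `data:image/webp;base64,${buffer.toString("base64")}`;
}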

This approach seemed sensible enough — that is, until we started poking our video ingest engineer extraordinaire Justin Greer. We thought it’d be neat to consider adding this Base64 blurhash response to the Mux video API so we could relieve API consumers of the need to install an image-processing library like Sharp and the BlurHash library.

Justin quickly put things in perspective:

If you transform the image to a blurhash and back to an image, then send the image to the customer, what you've done is treat BlurHash as a preprocessing filter that loses the majority of its benefits.
Justin Greer

That hit home! For these hash libraries to be effective, the short encoded string should be stored server side — and stay in that form until it arrives at the client. After all, the resulting Base64 image generated from a blurhash can be about 10 times larger than the blurhash itself.

You might expect a blurhash to produce a more accurate representation of the original image than a tiny, upscaled, blurred-out image, since the encoding was created specifically for this task. Unfortunately, the opposite is true.

I created a comparison page where you can see all the different approaches in one place. The blurhash encodings perform quite well for photos without any hard edges. However, as you can see above, they can produce some unintended artifacts.

Which technique is right for you?

The blurhash encodings are optimized for the smallest possible encoding, but come with some serious drawbacks depending on the platform. Since we mostly build our apps for the browser, choosing the LQIP/blur-up technique over a blurhash encoding is easy. The encoded size might be about 150 bytes larger, but the increased quality, plus not needing JavaScript to render the placeholder, is well worth it.

Let us know what you think, you can find us over at @MuxHQ.

Written By

Wesley Luyten

Coder, mnmalist, perf junkie and big into UI and video tech. Austinite originally from Belgium who loves exploring and biking. Jumping sheep hills once in a while at 9th Street BMX Park.
