Generating thumbnail placeholder images for web pages
Introduction
While working on a web app with a landing page containing a bunch of images, I did the usual things to make it load faster and feel more responsive:
- Set each <img> element's width and height attributes to the image's intrinsic size so the browser can allocate the right amount of space without needing to shift the surrounding content later.
- Make the images responsive by using the srcset attribute to specify versions at various resolutions, so that a device with a low device pixel ratio doesn't need to download big image files just so it can scale them down later.
- Create WebP versions of the images and use the <picture> element to automatically fall back to less-efficient formats on browsers that don't support WebP.
- Set <img> elements' loading attribute to lazy so the browser won't bother downloading them while they're offscreen.
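Put together, each image's markup ended up looking roughly like the following sketch (not the app's actual markup; the filenames, dimensions, and sizes value are made up):

<picture>
  <source type="image/webp" sizes="400px"
      srcset="photo-400.webp 400w, photo-800.webp 800w">
  <img src="photo-400.jpg" sizes="400px"
      srcset="photo-400.jpg 400w, photo-800.jpg 800w"
      width="400" height="300" loading="lazy" alt="...">
</picture>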
Even with these tricks, there was often still a jarring moment when the images would pop onscreen, especially when using a slow network connection.
Looking at the bigger picture, I was supplying the image URLs via a JSON file that the browser would fetch separately from the main page, requiring multiple round trips before images could be shown:
- Download, parse, and lay out the main index.html file.
- Download, parse, and execute the JavaScript that implements the site.
- Download and parse the JSON file.
- Finally, download and display each image.
I added <link rel="preload"> elements to the index.html file to let the browser start downloading the JavaScript and JSON files early in the first step, which helped.
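The hints were along these lines (the filenames here are placeholders, and the as and crossorigin values need to match how each file is actually fetched):

<link rel="preload" href="/app.js" as="script">
<link rel="preload" href="/images.json" as="fetch" crossorigin="anonymous">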
But thinking about it some more, I realized that I could do even better by including an approximation of each image's data in the JSON file and displaying it until the actual image was loaded.
Looking at BlurHash
I'd previously come across BlurHash, an algorithm that shrinks an image down to a string of 20-30 characters that can later be decoded into a blurry placeholder image. I had some concerns about using it here, though.
The BlurHash TypeScript module decodes the hash to a Uint8ClampedArray containing RGBA pixel data. The usual way to display those pixels on the web would be by creating a <canvas> element and then passing the data to it, but my app was using the Vuetify framework, and the <v-img> component exposes a lazy-src property that expects a URL. To create a data URL that I could pass to Vuetify, I could call the canvas's toDataURL method, but decoding the original string using the BlurHash algorithm and then loading it into a canvas just to re-encode it in a different image format seemed like it might be too much work for slower devices.
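For reference, that pipeline would look roughly like the following sketch (using the decode function from the blurhash npm package; the 4x4 decode size is arbitrary):

import { decode } from "blurhash";

// Decode a BlurHash string into a data: URL suitable for something like
// Vuetify's lazy-src property.
function blurHashToDataURL(hash: string, width = 4, height = 4): string {
  // decode() returns RGBA pixel data in a Uint8ClampedArray.
  const pixels = decode(hash, width, height);

  // Paint the pixels onto a canvas so toDataURL() can re-encode them.
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.putImageData(new ImageData(pixels, width, height), 0, 0);
  return canvas.toDataURL(); // image/png by default
}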
Worse, it seemed like the JavaScript code to do all this would block the main thread while processing all of the page's images. When I tried this approach on a fast computer, it took about 2 milliseconds per image, which was slower than I was comfortable with.
Generating GIF thumbnails
Next, I tried creating my own GIF thumbnails and realized that a 3x3 or 4x4 Base64-encoded GIF image is often just on the order of 115-140 bytes. The first and last 30-40 bytes of my images (consisting of data:image/gif;base64, and common header/footer information) are usually the same for identically-sized thumbnails, which means that these portions are effectively amortized across all of the images when my pages are transferred with gzip encoding.
That left me with 60 to 70 unique bytes per image, which doesn't seem much worse than BlurHash for my purposes. And since my JavaScript code is just assigning previously-generated data: URLs to images instead of re-encoding anything, the browser is (hopefully) able to decode the images off of the main thread.
I settled on 4x4 for my images since it felt like the minimum size that conveyed the different colors in most of my full-resolution images. 16x16 thumbnails obviously contain more detail, but at the cost of ten times the space.
You can drag the slider below to see how thumbnails at different sizes look for an example image, although I'd recommend experimenting with your own images.

There's an implementation written in the Go programming language in my static site generator, but the basic approach is just:
- Scale the image down to a tiny thumbnail size (4x4, in the case of this site). I stuck with square thumbnails for simplicity's sake instead of using something closer to the original image's aspect ratio.
- Encode the thumbnail as a GIF. Quantization is a complicated subject, but I just went with the obvious approach of allocating a color for each pixel, resulting in a 16-color palette for each thumbnail.1
- Base64-encode the GIF data and prepend data:image/gif;base64, to it.
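Here's a rough Go sketch of those steps (not the actual code from my site generator; it uses the golang.org/x/image/draw package for scaling):

package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"image"
	"image/color"
	"image/gif"
	_ "image/jpeg"
	_ "image/png"
	"os"

	"golang.org/x/image/draw"
)

// thumbDataURL shrinks src to a size x size thumbnail, encodes it as a GIF
// whose palette contains one entry per pixel, and returns it as a data: URL.
func thumbDataURL(src image.Image, size int) (string, error) {
	// Scale the image down to the thumbnail size.
	small := image.NewRGBA(image.Rect(0, 0, size, size))
	draw.CatmullRom.Scale(small, small.Bounds(), src, src.Bounds(), draw.Src, nil)

	// Allocate a palette entry for each pixel (16 colors for a 4x4
	// thumbnail) and copy the pixels into a paletted image.
	var pal color.Palette
	for y := 0; y < size; y++ {
		for x := 0; x < size; x++ {
			pal = append(pal, small.At(x, y))
		}
	}
	paletted := image.NewPaletted(small.Bounds(), pal)
	draw.Draw(paletted, paletted.Bounds(), small, image.Point{}, draw.Src)

	// Encode the thumbnail as a GIF.
	var buf bytes.Buffer
	if err := gif.Encode(&buf, paletted, nil); err != nil {
		return "", err
	}

	// Base64-encode the GIF data and prepend the data: URL prefix.
	return "data:image/gif;base64," + base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	img, _, err := image.Decode(f)
	if err != nil {
		panic(err)
	}
	url, err := thumbDataURL(img, 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}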
I also experimented with using the PNG format instead of GIF, but the data was typically larger, especially at smaller sizes: without Base64 encoding, an 86-byte 4x4 GIF became a 327-byte PNG (after using pngcrush), and a 1086-byte 16x16 GIF became a 1092-byte PNG.
Displaying thumbnails as placeholders
In my original Vuetify app, I just assigned each thumbnail data URL to the corresponding <v-img> element's lazy-src property and was done. Vuetify automatically displays the thumbnail first and then performs a nice fade transition to the real image after it's been loaded.
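In template terms, that's just something like this (the property names on the image object are made up):

<v-img :src="image.url" :lazy-src="image.thumbnail"></v-img>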
After that success, I decided to try to use the same approach for the images on my personal website (i.e. the place where you're reading this).
AMP: <amp-img> and ugly upscaling
For the AMP versions of my pages, adding placeholder images was also simple: just add another <amp-img> element with the placeholder attribute as a child of the real <amp-img> element. If you already have a nested <amp-img> with the fallback attribute, add the placeholder as a sibling of it.
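For a 400x300 image, that looks something like this sketch (the layout values are what I'd expect to use; check the AMP documentation for your own setup):

<amp-img src="photo.jpg" width="400" height="300" layout="responsive">
  <amp-img placeholder src="data:image/gif;base64,..."
      width="400" height="300" layout="fill"></amp-img>
</amp-img>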
Chrome and Firefox (and probably all browsers) appear to only bilinearly interpolate the thumbnails when scaling them up, which looks ugly — the horizontal and vertical gradients are very apparent. Assuming that your web browser doesn't use a fancy scaling algorithm, you can see what I'm talking about in this 4x4 thumbnail that's been scaled to 200x200:
I applied a filter: blur(12px) CSS property, which helped immensely. You can see the additional blurring in action in the earlier example image.
Regular HTML: CSP and blurry edges
The non-AMP pages were far trickier. I'm trying to keep my website's JavaScript to a minimum, so I stayed away from approaches that involved changing src attributes dynamically or showing or hiding elements.
At first, I thought I could just set each <img> element's background-image property to the thumbnail data URL. However, I'm using Content Security Policy to restrict the places from which my site will load scripts, stylesheets, and images2, and I didn't want to have to add the 'unsafe-inline' source to my style-src directive just so I could set style attributes on elements.
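As an illustration (this isn't my actual policy), a CSP along these lines blocks inline style attributes unless 'unsafe-inline' is added to style-src, while the data: source in img-src is what permits data URL images to load:

Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' data: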
I settled on the approach of creating a wrapper element that contains both the thumbnail image and the full-resolution image, with the former stacked underneath the latter. I initially tried putting the thumbnail in an <img> element, but I ran into a lot of trouble due to the thumbnail having a (square) aspect ratio that typically didn't match the full-resolution image's aspect ratio: even when I set the thumbnail's width and height attributes, those would be overridden by the square aspect ratio, and the thumbnail would often extend beyond the full-res image or not fully cover it as a result.
I eventually got things mostly working by setting the min-height, max-height, min-width, and max-width CSS properties to 100% (to match the wrapper's size, which was derived from the full-resolution image's), but due to the blur filter that I'd added earlier, the scaled-up thumbnails had blurry edges:
It seems like the usual way to deal with this is by scaling up the blurred image with something like transform: scale(1.1) and then clipping its edges by setting overflow: hidden on the container.
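In CSS terms, that fix looks roughly like this (the class names are just for illustration):

.thumb-wrapper {
  overflow: hidden;
}

.thumb-wrapper > img {
  filter: blur(12px);
  transform: scale(1.1);
}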
The backdrop-filter property is supposed to help as well, but support for it is limited as of early 2022 and I had trouble with the top and left edges of my images still looking blurry when using Chrome 96.
Switching to SVG
All of this felt like it was getting too hacky, even for me. This Stack Overflow answer got me thinking about SVG, and before long, I realized that I could get more control over both the thumbnail's aspect ratio and its blurring by wrapping it in an <svg> element.
Here's what my final HTML for a 400x300 image ended up looking like:
<span class="img-wrapper">
  <svg width="100%" height="100%" viewBox="0 0 400 300">
    <filter id="thumb-filter">
      <feGaussianBlur stdDeviation="12"></feGaussianBlur>
      <feComponentTransfer>
        <feFuncA type="discrete" tableValues="1 1"></feFuncA>
      </feComponentTransfer>
    </filter>
    <image href="data:image/gif;base64,..." width="400" height="300"
        filter="url(#thumb-filter)" preserveAspectRatio="none"></image>
  </svg>
  <picture>
    <source type="image/webp" sizes="400px" srcset="...">
    <img src="..." sizes="400px" srcset="..." width="400" height="300">
  </picture>
</span>
And here's the corresponding CSS:
.img-wrapper {
  display: inline-block;
  position: relative;
}

.img-wrapper > svg {
  position: absolute;
}

.img-wrapper > picture {
  position: relative;
}
The basic idea is that the <span> derives its size from the full-resolution <img>, and then the thumbnail <svg> is scaled to fill the <span> via its width and height attributes (which accept percentages, unlike <img>'s) and absolutely positioned underneath the real image. The viewBox attribute ensures that all of the enclosed <image> element is visible, and the preserveAspectRatio attribute is used to let the image be stretched to fit.
The blurred image still had partially-transparent edges after all of this, but I was able to use <feComponentTransfer> as described in this Stack Overflow answer to fix that. I also had to be careful to only define the <filter> in the first thumbnailed image to avoid errors about duplicate IDs.
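Subsequent images then just reference the filter that the first one defined, along these lines (again with a made-up size):

<svg width="100%" height="100%" viewBox="0 0 600 400">
  <image href="data:image/gif;base64,..." width="600" height="400"
      filter="url(#thumb-filter)" preserveAspectRatio="none"></image>
</svg>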
While testing under various browsers, I noticed that Firefox 95.2.0 for Android leaves the thumbnail's original pixels quite visible in the upscaled SVG image:

It looks like the SVG filter isn't being applied, and the upscaling looks even worse than what's used for regular <img> elements. This seems limited to SVG filters: the CSS filter property still works as expected. Even more strangely, Firefox 95.2.0 for Linux doesn't exhibit the problem — there, I see high-quality blurring, similar to what Chrome uses on both Linux and Android. I searched a bit but didn't find any bug reports about this.
Conclusion
So, everything seems to work as intended using the final approach (modulo a possible Firefox bug). My HTML files are only a bit larger, and the placeholders make a big (subjective) improvement when I simulate a slow connection in Chrome DevTools (click the "Network" tab and then use the "Slow 3G" throttling preset).
I came across it after writing this page, but the "Progressively loading images" HTTP 203 video contains a thorough comparison of the progressive-loading capabilities built into various image formats. Their "Image compression deep-dive" video is also interesting.
If you've come across any other interesting tricks for hiding image load times, don't hesitate to let me know.
- The GIF format is limited to a maximum palette size of 256 colors, so if you want to create thumbnails that are larger than 16x16, you'll need to be more clever when quantizing the image, i.e. use a quantization library that someone else already wrote. [return]
- This is a static site, so CSP probably doesn't actually do that much for me, but I wanted to learn more about it. [return]