Approach #2: Service Workers
The second solution to the problem is to reproduce the functionality of import maps using a service worker. For example, a service worker can listen for fetch events whose URLs match the keys of the import map, and respond to those requests by fetching the files whose names include hashes of their contents:
const importMap = {
  '/main.mjs': '/main-1a2b.mjs',
  '/dep1.mjs': '/dep1-b2c3.mjs',
  '/dep2.mjs': '/dep2-3c4d.mjs',
  '/dep3.mjs': '/dep3-d4e5.mjs',
  '/vendor.mjs': '/vendor-5e6f.mjs',
};

addEventListener('fetch', (event) => {
  const oldPath = new URL(event.request.url, location).pathname;
  if (importMap.hasOwnProperty(oldPath)) {
    const newPath = importMap[oldPath];
    event.respondWith(fetch(new Request(newPath, event.request)));
  }
});
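For this listener to take effect, the page must first register the service worker. A minimal registration sketch, assuming the code above is served from /sw.js (a hypothetical path):

// Register the service worker on page load; '/sw.js' is an assumed location.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}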
However, since the above code runs in a service worker, it only takes effect after the service worker has been installed and activated. This means that on the first visit the page requests the files without hashes in their names, and only on subsequent visits does it request the hashed versions. In other words, every file ends up being downloaded twice.
Given that, a service worker may not seem like a suitable solution to the problem of cascading cache invalidation.
However, allow me to briefly question some long-standing approaches to caching. Let's think about what happens if you stop putting content hashes in file names and instead put that hash information in the service worker code.
This is exactly how precaching tools like Workbox work. They generate a hash of the contents of each file in the build and store the mapping between file names and revisions in the service worker (which amounts to a kind of external import map). They also cache the resources when the service worker is first installed and add fetch event listeners that answer requests for matching URLs straight from the cache.
Serving files whose names carry no version information may sound frightening (and contrary to everything you have been taught), but the network request for each resource happens only when the service worker is installed. Subsequent requests for such a resource are served through the Cache Storage API (which does not use caching headers), and new requests go to the server only when a new version of the service worker is deployed (at which point you want fresh versions of those files anyway).
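Here is a minimal sketch of what such install-time precaching looks like under the hood (Workbox automates all of this; the cache name and URL list are assumptions for illustration):

// sw.js: cache unhashed URLs at install time, serve them from the cache later.
const CACHE_NAME = 'precache-v1'; // bump this with each deployment
const PRECACHE_URLS = ['/main.mjs', '/dep1.mjs', '/dep2.mjs'];

addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS)),
  );
});

addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request)),
  );
});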
As a result, as long as you do not deploy new versions of modules without also updating the service worker (which is definitely not recommended), you will never run into version conflicts or mismatches.
To precache files with the workbox-precaching library, you pass the file URLs, along with revision strings for each of them, to its precacheAndRoute() method:
import {precacheAndRoute} from 'workbox-precaching';

precacheAndRoute([
  {url: '/main.mjs', revision: '1a2b'},
  {url: '/dep1.mjs', revision: 'b2c3'},
  {url: '/dep2.mjs', revision: '3c4d'},
  {url: '/dep3.mjs', revision: 'd4e5'},
  {url: '/vendor.mjs', revision: '5e6f'},
]);
How exactly to generate the revision strings is up to you. But if you would rather not generate them by hand, the workbox-build, workbox-cli, and workbox-webpack-plugin packages simplify the task of generating a precache manifest (they can even generate the entire service worker for you).
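As a rough sketch of what that can look like with workbox-build (the directory layout, glob pattern, and output path below are assumptions about your build):

// build-sw.js: generate a complete service worker with a precache manifest.
const {generateSW} = require('workbox-build');

generateSW({
  globDirectory: 'dist/',                  // assumed build output directory
  globPatterns: ['**/*.{mjs,css,html}'],   // assumed set of files to precache
  swDest: 'dist/sw.js',                    // where to write the generated worker
}).then(({count, size}) => {
  console.log(`Precached ${count} files, ${size} bytes in total.`);
});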
My demo project includes an example of precaching with a service worker in a Rollup application (using workbox-cli) and in a webpack application (using workbox-webpack-plugin).
Approach #3: Custom scripts for loading resources
If your site can use neither import maps nor service workers, a third approach is to implement the functionality of import maps with a custom resource-loading script.
If you are familiar with AMD-style module loaders (such as SystemJS or RequireJS), you may know that they usually support module aliases. In fact, SystemJS supports aliasing using the import map syntax itself. As a result, our problem can be solved in a way that is future-oriented (and that also works in all existing browsers).
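For example, here is a sketch of loading an application through SystemJS with an import-map-style mapping (the loader path and file names are assumptions):

<script type="systemjs-importmap">
{
  "imports": {
    "/main.mjs": "/main-1a2b.mjs",
    "/dep1.mjs": "/dep1-b2c3.mjs"
  }
}
</script>
<!-- s.min.js is the minimal SystemJS loader build; the path is assumed. -->
<script src="/system/s.min.js"></script>
<script>
  System.import('/main.mjs');
</script>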
If you use Rollup, you can set the output.format option to system. Creating the import map for the application then works exactly as described in the first approach to solving the problem.
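A minimal rollup.config.js sketch for such a build (the entry point and file-naming patterns are assumptions):

// rollup.config.js
export default {
  input: 'src/main.mjs',
  output: {
    dir: 'dist',
    format: 'system',                  // emit modules in SystemJS format
    entryFileNames: '[name]-[hash].mjs',
    chunkFileNames: '[name]-[hash].mjs',
  },
};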
My demo application includes an example site where Rollup builds the output in a format suitable for SystemJS and generates an import map used to load the hashed versions of the files.
Webpack and loading resources using scripts
Webpack can also help you load resources with its own script, but the loader that webpack generates, unlike classic AMD loaders, is unique to each particular bundle.
The advantage of this approach is that the webpack runtime can (and in fact does) include its own mapping between chunk names/IDs and their URLs (similar to what I am recommending here). This means that webpack bundles that use code splitting are less prone to cascading cache invalidation.
So the good news for webpack users: if your build is configured correctly (splitting the code into chunks as described in the webpack caching guide), a change to a single module should invalidate no more than two chunks: the one containing the changed module and the one containing the runtime.
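As a sketch of that kind of configuration (following the recommendations from the webpack caching guide; the exact option values are assumptions):

// webpack.config.js
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
  },
  optimization: {
    runtimeChunk: 'single',        // extract the runtime into its own chunk
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};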
But I also have bad news for webpack users: the bundler's internal mapping system is non-standard, so it cannot be integrated with existing tools and cannot be configured by the user. For example, you cannot generate the output files yourself (as described in the first approach) and put your own hashes into the mapping. And that is a drawback, because the hashes webpack uses are based not on the contents of the output files but on the contents of the source files and the build configuration, which can lead to small, subtle bugs (for example, here, here, and here are reports of such bugs).
If you use webpack to build an application that also uses a service worker, I recommend the workbox-webpack-plugin and the caching strategy described in the second approach. The plugin generates hashes based on the contents of webpack's output files, so you do not have to worry about the bugs above. As a bonus, working with unhashed file names is usually easier than working with hashed ones.
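A sketch of wiring the plugin into a webpack config (the GenerateSW options shown are assumptions about your setup):

// webpack.config.js
const {GenerateSW} = require('workbox-webpack-plugin');

module.exports = {
  // ...entry, output, and other config...
  plugins: [
    new GenerateSW({
      swDest: 'sw.js',  // where to emit the generated service worker
    }),
  ],
};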
Other web project resources
So far I have talked about how hashed file names for JavaScript code can lead to cascading cache invalidation, but the problem applies to other web assets as well.
For example, CSS and SVG files often reference other resources (such as images) whose names may also contain content hashes. As with JS files, you can use import maps or service workers to solve the cascading cache invalidation caused by renaming such resources.
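The service worker technique from the second approach carries over directly; a sketch, with hypothetical file names:

// The same rewriting trick, applied to non-JS assets referenced from CSS/SVG.
const assetMap = {
  '/styles.css': '/styles-8f7e.css',
  '/img/logo.svg': '/img/logo-6d5c.svg',
};

addEventListener('fetch', (event) => {
  const path = new URL(event.request.url, location).pathname;
  if (assetMap.hasOwnProperty(path)) {
    event.respondWith(fetch(new Request(assetMap[path], event.request)));
  }
});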
For resources like images and video files (which do not themselves reference other resources), none of this presents any difficulty: all the usual recommendations still apply.
The key point to remember: whenever file A loads file B and includes file B's content hash as version information, invalidating the cache for file B also invalidates the cache for file A. For resources that are referenced in any other way, you can simply ignore the advice in this article.
Summary
I hope this article has inspired you to take a closer look at your site and find out whether cascading cache invalidation affects it. The easiest way to check: build the site, change one line of code in a file that is imported by many modules, and rebuild. If the names of more than one file changed in the output directory, you are seeing cascading cache invalidation, and it may be worth applying one of the approaches described here to your project.
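If you want to automate that check, here is a rough sketch of a Node script that compares the file names of two build outputs (the script name and directory arguments are assumptions):

// check-names.js
// Usage: node check-names.js dist-before dist-after
const fs = require('fs');

const [before, after] = process.argv.slice(2);
const beforeNames = new Set(fs.readdirSync(before));
const changed = fs.readdirSync(after).filter((name) => !beforeNames.has(name));

console.log(`${changed.length} new file name(s):`, changed);
if (changed.length > 1) {
  console.log('More than one name changed: a sign of cascading cache invalidation.');
}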
As for which approach is best, frankly, it depends.
Once import maps are widely supported by browsers, they will be the simplest and most complete way to deal with cascading cache invalidation. But until that support arrives, they are not practical.
If you already use service workers, especially if you use Workbox, I recommend the second approach. It is how precaching is handled on the site where the original of this article is published.
Service workers are also the only option for anyone shipping native JavaScript modules in production. (And since 98% of my users have browsers that support both service workers and JS modules, the choice was easy for me.)
If service workers are not an option for you, I recommend the third approach, using SystemJS. Of the solutions based on loader scripts, it is the most future-oriented: it will be easy to switch to native import maps once browser support arrives.
As for performance, what to optimize depends on the specific project. Measure before you optimize, then decide whether there is a problem worth fighting. If you release new versions infrequently, and the changes tend to be large-scale, cascading cache invalidation may not matter to you.
On the other hand, if you often deploy small changes, your returning users may be re-downloading large amounts of code that is already in their caches. Fixing that can significantly improve page load performance for those users.
Dear readers! Does cascading cache invalidation affect your project? If so, please tell us how you plan to solve it.