Yes, but my point was that by taking any route other than revalidating on every single request, this same reload pattern could be exhibited even after adding the ETag. And since the revalidation request would still render the boilerplate in Meteor (due to the dynamic aspects I referenced previously), the net gain would be near zero or zero.
Ultimately, if caching is done incorrectly, problems will arise.
I do think it was a very fair consideration though!
We have seen the infinite-reload problem pretty consistently, with no relief from any of the commonly suggested fixes.
We have been using a server-side heuristic that recognizes clients in a reload loop and 404's the appcache manifest. This works, but it takes several reloads in a row for a client to trigger the heuristic, and the actual appcache is only restored on the next subsequent reload. So users experience a limited amount of looping on every code update, and cannot use the app offline between the 404'd load and whenever they next load the app (and finally get the updated appcache).
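A minimal sketch of such a heuristic (all names and thresholds here are ours, not from the actual implementation): count manifest requests per client within a short window, and 404 the manifest once the count exceeds a threshold.

```javascript
// Hypothetical reload-loop detector; RELOAD_THRESHOLD, WINDOW_MS and
// shouldServeManifest are illustrative names, not part of any package.
const RELOAD_THRESHOLD = 3;  // manifest fetches "in a row" before intervening
const WINDOW_MS = 30 * 1000; // fetches this close together count as a loop

const clients = new Map();   // clientKey -> recent request timestamps

function shouldServeManifest(clientKey, now) {
  now = now === undefined ? Date.now() : now;
  const times = (clients.get(clientKey) || []).filter(function (t) {
    return now - t < WINDOW_MS;
  });
  times.push(now);
  clients.set(clientKey, times);
  // Past the threshold, answer 404 so the browser drops the stale
  // appcache; a later request outside the window serves it again.
  return times.length <= RELOAD_THRESHOLD;
}
```

A server route for the manifest would call `shouldServeManifest` with something like the client's IP and respond 404 when it returns false, which matches the "several reloads before the heuristic triggers" behavior described above.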
pauldulong's observation above didn't work for us, but it inspired a related solution that does. Simply returning false from onMigrate appeared to cause clients with an established appcache to never load updates, even across reloads. However, knowing that returning false from onMigrate could at least stop the spurious reloads, we did some testing and saw that Meteor triggers a reload the moment the appcache is discovered to be out of date, and keeps triggering reloads so the download can never complete. If you monitor the appcache events and hook onMigrate, you can get reload working as intended.
Below is what we're now using, eliding our app-specific bits like UI updates. With this code we see reliable single-reload appcache updates and can control the timing of updates within our app's UI.
if (Meteor.isClient) {
  var migrationstate = 0; // 0 == none, 2 == appcache update in progress

  // Monitor appcache downloading state to trigger a reload
  // once the download is complete
  var appcache_downloading = false;
  if (window.applicationCache) {
    window.applicationCache.addEventListener('downloading', function () {
      if (appcache_downloading) return;
      if (window.applicationCache.status !== window.applicationCache.DOWNLOADING) return;
      appcache_downloading = true;
      console.log("mig: appcache downloading started");
      window.applicationCache.addEventListener('updateready', function () {
        console.log("mig: appcache ready for restart");
        location.reload();
      });
    });
  }

  // Hook Meteor's onMigrate to detect the out-of-date condition, trigger an
  // appcache update, and prevent Meteor from prematurely reloading
  Meteor._reload.onMigrate("useractivity", function (retry) {
    console.log("mig: got onMigrate event indicating new software is ready", migrationstate);
    if (migrationstate > 0) return false;
    if (window.applicationCache &&
        window.applicationCache.status === window.applicationCache.DOWNLOADING) {
      // A download is already in flight; let it finish.
      return false;
    }
    if (window.applicationCache &&
        window.applicationCache.status === window.applicationCache.IDLE) {
      window.applicationCache.update();
      migrationstate = 2;
      console.log("mig: triggered appcache to check for and download update");
      return false;
    } else {
      console.log("mig: allowed hotpatch of update given no appcache");
      return [true];
    }
  });
}
We also hit this roadblock and moved to ServiceWorkers instead. Much nicer and more granular, but alas, not supported on iOS yet (we are using an App for mobile, so it's not an issue for us).
The problem is that the reload does nothing more than refresh the current page, which is served from the AppCache, so you keep going in circles. We tried a solution similar to yours, but it was too lengthy to debug and test. The essence is the same, though: you have to do a full reload of the page, not just a replace like in here
However, I am surprised you didn't have to modify the AppCache package to not cache / (the first line in the caching section). Are you using the stock AppCache package from MDG?
I had the same problem.
My app is deployed via mup as two instances on the same server with nginx load balancing.
I figured out that the autoupdateVersion of the meteor app instances was different. That's the cause of the infinite reloading. But the instances should be identical, since autoupdateVersion is a hash of the app code. They weren't.
Cause: on deploy, a mup.js file was dynamically created in the root of the app for each instance, with a different port, and it was counted as part of the app. That's why autoupdateVersion differed.
Solution: run mup deploy from a directory that is not loaded as part of your app code. For example, I created a .deploy directory.
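Concretely, the layout might look like this (a sketch with example paths; Meteor ignores dot-directories, so nothing under .deploy ends up in the bundle or the autoupdate hash):

```shell
# Hypothetical setup: keep mup's config outside the bundled app source.
set -e
app=$(mktemp -d)            # stand-in for your app root
mkdir -p "$app/.deploy"
touch "$app/.deploy/mup.js" # mup config lives here, not in the app root
# Deploys are then run from inside .deploy:
#   cd "$app/.deploy" && mup deploy
```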
So I also ran into this problem today, and it turns out … I was a bit of an idiot!
What happened was:
I have some tasks ("cron jobs") running server side. But of course I want these tasks to run on only one of my app (Docker) instances.
So what I did was put a server attribute into my Meteor.settings.public --> BAD IDEA! Because of that, the Meteor.settings were obviously different from each other --> different meteor.js hash --> weird reloads occurring.
So if you deploy your app multiple times behind an nginx proxy: be sure to upload two exactly identical app packages!
Sidenote: I solved my server-task problem by using a custom process.env var to distinguish the two instances.
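That approach might look like this sketch (RUN_CRON and the function names are our own, nothing Meteor defines): gate the scheduled tasks on an environment variable instead of Meteor.settings, so all instances ship byte-identical settings and bundles.

```javascript
// Hypothetical cron gate; only the instance started with RUN_CRON=1
// actually runs the jobs, and the client bundle is unaffected.
function isCronInstance(env) {
  return env.RUN_CRON === '1';
}

function maybeStartCronJobs(env, startJobs) {
  if (!isCronInstance(env)) return false; // other instances skip the jobs
  startJobs();
  return true;
}
```

At startup the server would call `maybeStartCronJobs(process.env, startJobs)`; since environment variables never reach the client, the meteor.js hash stays identical across instances.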
This plagued me for a day on my dev and staging servers. I thought I'd fixed it several times, but what seems to have ultimately fixed the issue for me was removing this line:
res.setHeader('Cache-Control', 'public, max-age=31536000'); // Cache for a year
From the function contained here:
WebApp.rawConnectHandlers.use(function(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', Config.base_url.href.slice(0, -1)); // Remove trailing slash.
  //
  // IS SETTING Cache-Control CAUSING THE CONSTANT REFRESH?
  // See https://forums.meteor.com/t/app-constantly-refreshing-after-an-update/23586/86
  //
  //res.setHeader('Cache-Control', 'public, max-age=31536000'); // Cache for a year
  next();
});
It's been a week or so now and I haven't seen it once, whereas it was happening constantly on that fateful day.
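A less drastic alternative than dropping the header entirely (a sketch under our own assumptions; `cacheControlFor` and the asset regex are ours, not the stock package's behavior) would be to keep the year-long max-age only for content-hashed static assets and force revalidation for everything else, including `/` and the manifest:

```javascript
// Hypothetical per-URL Cache-Control policy.
function cacheControlFor(url) {
  // Content-hashed assets (e.g. app.3f2a9c41d0.js) are safe to cache
  // forever: the URL changes whenever the content changes.
  const isHashedAsset = /\.[0-9a-f]{8,}\.(js|css|png|jpg|svg|woff2?)$/.test(url);
  return isHashedAsset
    ? 'public, max-age=31536000' // cache for a year
    : 'no-cache';                // always revalidate boilerplate and manifest
}
```

The handler above would then call `res.setHeader('Cache-Control', cacheControlFor(req.url))` instead of setting the fixed year-long value on every response.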