Introduction
My Meteor 3 conversion story starts in October 2023 and ends in Spring 2025.
Apart from future-proofing the app on Meteor 3, the upgrade also motivated me to:
- Upgrade all 3rd party libraries to latest versions
- Establish unit tests that perform basic sanity and security checks on every method and subscription
- Make other changes that generally improve reliability, performance, standardisation and code clarity
So while it’s been a lot of work, it’s been worth it!
Me
I’m a semi-retired IT professional who used to work in the banking industry building real-time trading systems. Things are different now, but back in the day reactivity was rare in applications, and our systems in banking were all home-grown reactive applications (we’re talking 20+ years ago). I’ve always been intrigued by how to build systems that reflect their data state seamlessly and continuously. Anyway, I left banking a while ago and consulted for the most part. I’m an active squash player at my local club and wasn’t impressed with the systems available to manage the processes involved for members and the club, so I started to build ManageMyMatch in 2017, having identified Meteor as an interesting framework to work with. And so my Meteor journey began…
The app
The app is a club management system for racquet sports (Squash, Tennis, etc.). Size-wise, we have:
- 730 Template files
- 1,500 JS files
- 115,000 JS source code lines
Having completed the conversion we have 3,236 Meteor async calls:
- 1,500 findOneAsync()
- 320 fetchAsync()
- 64 insertAsync()
- 836 updateAsync()
- 75 removeAsync()
- 47 forEachAsync()
- 294 mapAsync()
- 96 countAsync()
- 4 observeAsync()
Client/server interface
- 519 Meteor methods
- 206 publications
My app uses Blaze as well as dozens of 3rd-party packages, some of which required custom upgrades to get them working again.
Summary & general approach
It took about 8 months of solid work to complete, half of which was testing. We’re still not done with testing, actually - but we are in the user acceptance stage now. There were two phases to the plan (about 4 man-months each):
Pre-Meteor 3 upgrade
- Convert all relevant functions to async
- Build unit tests for every meteor method & publication
Post-Meteor 3 upgrade
- Remove, re-install & upgrade all 3rd party packages to latest versions
- Get the client app back up and running
- Perform integration testing
- Perform user acceptance testing
- Rollout & re-issue iOS & Android apps
Pre-Meteor 3 upgrade
Async conversion
I did initially use a codemod to start the process, but it rapidly became clear that the impact of asyncing had to be dealt with by hand in every file. So it was a long trawl. Every change meant iterating up the function call tree, making functions async where necessary. In doing so, there were two main categories of issues:
Isomorphic code
I have lots of isomorphic code (runs on client and server) and ending up with async template helpers is far from ideal. I needed to minimise the amount of async that ended up in the client. I generally had these scenarios:
- Collection helpers (I use `dburles:collection-helpers`): I ended up creating two versions where necessary - `Collection.helperXyzAsync()` declared in a server folder so the server would use that, and `Collection.helperXyz()` declared in a client folder so the client would use that (sketched below).
- I stubbed out many Meteor methods, preventing them from running optimistically on the client, and then imported the files that contained the async code at the top of the method function. This process allowed me to audit and refactor what really needed to be isomorphic, and has reduced my client footprint.
- In the same vein, I audited and segregated dozens of general helper functions for server or client use.
- In some instances, where I had a pseudo-package of isomorphic functionality, I restructured things to encapsulate the async requirement into one initialiser (essentially an `async init()` method on a class that encapsulated the functionality). Performance improved as a happy side effect.
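To illustrate the collection-helpers split, here is a minimal sketch; the `Teams`/`memberCount` helper and the `/imports/collections` module are invented for the example, and the split relies on Meteor's standard client/server folder loading:

```js
// server/team-helpers.js - loaded only on the server
import { Teams, Members } from '/imports/collections'; // hypothetical collections module

Teams.helpers({
  // Async variant: server code must use the async Mongo API
  async memberCountAsync() {
    return Members.find({ teamId: this._id }).countAsync();
  },
});
```

```js
// client/team-helpers.js - loaded only on the client
import { Teams, Members } from '/imports/collections';

Teams.helpers({
  // Sync variant: minimongo is synchronous on the client,
  // so Blaze template helpers can stay synchronous
  memberCount() {
    return Members.find({ teamId: this._id }).count();
  },
});
```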
JS iterators
In many, many cases, I needed to put async calls inside standard JS iterators like `forEach()`, `map()`, `find()` and `filter()`. You can’t just do that: the iterator doesn’t await the callback simply because the callback is async. In every case, I had to refactor (a consolidated sketch of these patterns follows the list):
- I have pretty much deprecated the use of `forEach()` in favour of `for (const value of array) {...}`.
- For `map()`, you can wrap it with `Promise.all()`, so `let result = a.map(item => fn(item))` becomes `let result = await Promise.all(a.map(async item => fn(item)))`. For larger arrays, though, I tended to iterate using `for...of` and push each result into a predefined result array.
- For `find()`, I replaced it with `for...of`, setting an external variable with the found result and `break`ing out.
- For `filter()`, I went the `for...of` route as per `map()`, though I have sometimes done this: `let result = a.filter(item => !!fn(item))` became `let result = _.compact(await Promise.all(a.map(async item => (await fn(item)) ? item : null)))`.
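As a consolidated sketch of those refactors (with a made-up async `fn()` over a plain array - not my real code):

```js
// fn() stands in for any async transform/predicate used below
async function fn(item) {
  return item != null; // placeholder implementation
}

async function examples(items) {
  // forEach() -> for...of, so each await completes before the next iteration
  for (const item of items) {
    await fn(item);
  }

  // map() -> Promise.all() for small arrays...
  const mapped = await Promise.all(items.map(item => fn(item)));

  // ...or for...of with a predefined result array for larger ones
  const mappedSequentially = [];
  for (const item of items) {
    mappedSequentially.push(await fn(item));
  }

  // find() -> for...of with an external variable and a break
  let found;
  for (const item of items) {
    if (await fn(item)) {
      found = item;
      break;
    }
  }

  // filter() -> for...of, pushing only the items that pass
  const filtered = [];
  for (const item of items) {
    if (await fn(item)) filtered.push(item);
  }

  return { mapped, mappedSequentially, found, filtered };
}
```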
Unit testing
When I started this project I was full of good intentions and wrote some tests. It didn’t last long. And, to be honest, I’ve not regretted it, as I am pretty diligent about making sure that what I write works. Bugs exist, of course, but they’re not perceived as any kind of issue by my users. But I could have used those tests for this migration. So, having made material changes to pretty much every single method and publication, I really felt I needed to test them thoroughly. Besides, my async conversion efforts above had, for various reasons, left my app unrunnable. Getting decent coverage made the data setup requirement challenging, and I’ve ended up with different approaches for my publication unit tests and my method unit tests.
As a result I have thousands of sanity & security tests that cover about 90% of all server code and most isomorphic code. And I could now prove the sanity of my conversion work before I could get the app back up and running.
At first the process of building up the tests was slow as I iterated and improved the auto-generation processes and testing framework. By the end it was all very self-generating, and it’s now very simple (mostly just one line of code) to set a test in place for new subscriptions or methods - so even I might keep up with that.
Publication testing
Using `dburles:factory` and by calling relevant Meteor methods in my app, I built up a variety of different data models that I could instantiate prior to a batch of tests. The database consists of around 50 collections, some of which have deeply nested document structures. These needed to be richly instantiated for the publication tests to be meaningful. I used coverage tools to check that I was running most of the code - I achieved about 90% on average. My publications are grouped in about 25 files, so I have a test file matching each. I have a common procedure to run several tests per publication that includes:
- Anonymous access
- Unauthorised access
- Authorised access
- Bad parameters
- No parameters
- Good parameters
This standardisation has ensured I have a consistent security policy for each and every one. For each publication I encode the expected results in terms of which collections should be returned, how many documents in each, and how many properties are expected on those documents. The tests do not check any deeper semantics or data content than that; the key aim is to "sanity" check each publication and get code coverage.
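To give a flavour of what the generated tests boil down to, here is a hand-written equivalent; the publication name, seeding and expected counts are invented, I'm seeding with `insertAsync()` for brevity rather than via the factories, and I'm assuming the (forked) publication-collector's `collect()` returns a promise:

```js
import { assert } from 'chai';
import { Random } from 'meteor/random';
import { PublicationCollector } from 'meteor/johanbrook:publication-collector';
import { Clubs, Matches, Members } from '/imports/collections'; // hypothetical module

describe('publication: matchesForClub', function () {
  let clubId;
  const userId = Random.id();

  beforeEach(async function () {
    // Minimal seeding for this publication
    clubId = await Clubs.insertAsync({ name: 'Test Club' });
    await Members.insertAsync({ clubId, userId, name: 'Test Member' });
    await Matches.insertAsync({ clubId, playedAt: new Date() });
  });

  it('publishes nothing to anonymous users', async function () {
    const collector = new PublicationCollector(); // no userId
    const collections = await collector.collect('matchesForClub', clubId);
    assert.isUndefined(collections.matches);
  });

  it('publishes the expected documents to an authorised member', async function () {
    const collector = new PublicationCollector({ userId });
    const collections = await collector.collect('matchesForClub', clubId);
    assert.lengthOf(collections.matches, 1);
  });
});
```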
Method testing
The database seeding I had for publications was not enough for methods, so I developed a new strategy. All my collections are fully “documented” using `aldeed:simple-schema`. I have also augmented SimpleSchema to allow me to annotate foreign keys, so I know what collection every foreign key points to. The parameters to my methods are also documented with SimpleSchema in the same way (foreign keys are identified). With this, I can inspect the schema and auto-generate all documents inferred from the parameters, as well as any documents I know the method would expect. Surprisingly, this automatically achieves sufficient seeding to attain my 90% coverage target. As with publications, I have coded the generation of the tests to include similar scenarios to those listed for publications above.
Sample documents are auto-generated for every test. For example, if my method has the parameter `{matchId: 'osHdQsYsrvKYzqcBn'}`, the test knows this points to my Matches collection and, using the schema, generates a sample Match document. In turn, the Match document has several ids that reference documents in other collections. Again, SimpleSchema tells me what collections they are, so I can generate sample documents for those too. And so on. Thus a complete network of documents is self-prepared for the method function to work on. I have special configuration properties that allow me to easily override and specify values for certain document properties (otherwise, values are generated that conform to the SimpleSchema type and definitions) in order to emulate certain scenarios and boost coverage. I can also instruct the test to auto-generate other documents as needed. The downside is that this data seeding happens for every individual test, and there are thousands of them, so the whole test run takes about 15 minutes!
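Conceptually, the foreign-key annotation and seeding work along these lines; the `foreignCollection` option name and the `createSampleDocument()` generator are simplified stand-ins for my actual implementation (and the exact SimpleSchema import varies by version):

```js
import { SimpleSchema } from 'meteor/aldeed:simple-schema'; // import form varies by version

// Register a custom schema option for marking foreign keys (name is illustrative)
SimpleSchema.extendOptions(['foreignCollection']);

// A method's parameter schema, with its foreign key annotated
const recordResultParams = new SimpleSchema({
  matchId: { type: String, foreignCollection: 'Matches' },
  score: { type: String },
});

// Walk the parameter schema and create a conforming sample document for every
// foreign key it references, recursing into each generated document's own schema
async function seedFromParamsSchema(schema) {
  const params = {};
  for (const key of schema.objectKeys()) {
    const collectionName = schema.get(key, 'foreignCollection');
    if (collectionName) {
      const doc = await createSampleDocument(collectionName); // hypothetical generator
      params[key] = doc._id;
    }
  }
  return params;
}
```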
I use `mdg:validated-method`, and whilst the `validate()` function simply checks the syntax of the parameters passed, I have an internal, custom validator, called in every method, that also checks security access and even the validity of any foreign keys passed. As a convenience, incidentally, this internal validator returns the documents associated with any foreign keys passed - and most of my methods involve the submission of at least one id field.
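The shape of that internal validator is roughly as follows; the `validateCall()` name, the role check and the error codes are illustrative rather than my exact code:

```js
import { Meteor } from 'meteor/meteor';

// Called at the top of every method's run() function (illustrative)
async function validateCall({ userId, params, foreignKeys, requiredRole }) {
  if (!userId) throw new Meteor.Error('not-authorised', 'You must be logged in');

  // userHasRole() stands in for the app's real permission check
  if (requiredRole && !(await userHasRole(userId, requiredRole))) {
    throw new Meteor.Error('not-authorised', 'Insufficient permissions');
  }

  // Check every foreign key points at a real document, returning those documents
  // as a convenience so the method body doesn't have to fetch them again
  const docs = {};
  for (const [paramName, collection] of Object.entries(foreignKeys)) {
    const doc = await collection.findOneAsync(params[paramName]);
    if (!doc) throw new Meteor.Error('bad-request', `Unknown ${paramName}`);
    docs[paramName] = doc;
  }
  return docs;
}
```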
Post-Meteor 3 upgrade
With my app completely disabled, but with all the conversion and refactoring supposedly done, it was time to upgrade to Meteor 3. Which, of course, didn’t work, as my dependencies were totally shot. So I made careful notes of my packages and removed every one of them (at least all the non-standard Meteor ones). Eventually Meteor allowed me to upgrade.
Next I started the process of reinstalling all my packages. Many of them didn’t work, and some of them had breaking changes which necessitated more changes to my code. For others I had to investigate alternatives. So it took a while to put back together. The biggest of these was `aldeed:simple-schema`, where I went from version 1 to 2. I have hundreds of schema definitions and they all had to be refactored. I’ve also ended up having to fork more packages to make dependency adjustments and/or code changes to get them to work with Meteor 3.
Package upgrade inventory
Here are all the packages (except for ones I wrote myself for the project). I had forked a few before for functional reasons, but I’ve had to fork quite a few more now…
What I had before | What I have now | Action or Issues |
---|---|---|
meteor-base@1.5.1 | meteor-base@1.5.2 | Updated |
mobile-experience@1.1.0 | mobile-experience@1.1.2 | Updated |
mongo@1.16.7 | mongo@2.1.0 | Updated |
blaze-html-templates@1.2.1 | blaze-html-templates@3.0.0 | Updated |
reactive-var@1.0.12 | reactive-var@1.0.13 | Updated |
jquery@1.11.10 | jquery@3.0.2! | Updated |
tracker@1.3.2 | tracker@1.3.4 | Updated |
standard-minifier-css@1.9.2 | standard-minifier-css@1.9.3 | Updated |
standard-minifier-js@2.8.1 | standard-minifier-js@3.0.0 | Updated |
es5-shim@4.8.0 | es5-shim@4.8.1 | Updated |
ecmascript@0.16.7 | ecmascript@0.16.10 | Updated |
momentjs:moment@2.29.3 | momentjs:moment@2.30.1 | Updated |
underscore@1.0.13 | underscore@1.6.4 | Updated |
mdg:validated-method@1.3.0 | mdg:validated-method@1.3.0 | Same |
dburles:collection-helpers@2.0.0 | dburles:collection-helpers@2.0.0 | Same |
mdg:validation-error@0.5.1 | mdg:validation-error@0.5.1 | Same |
check@1.3.2 | check@1.4.4 | Updated |
ddp-rate-limiter@1.2.0 | ddp-rate-limiter@1.2.2 | Updated |
email@2.2.5 | email@3.1.2 | Updated |
reactive-dict@1.3.1 | reactive-dict@1.3.2 | Updated |
dburles:factory@1.5.0 | dburles:factory@1.5.0 | Same |
anti:fake@0.4.1 | anti:fake@0.4.1 | Same |
velocity:meteor-stubs@1.1.1 | velocity:meteor-stubs@1.1.1 | Same |
random@1.2.1 | random@1.2.2 | Updated |
shell-server@0.5.0 | shell-server@0.6.1 | Updated |
sacha:spin@2.3.1 | sacha:spin@2.3.1 | Same |
session@1.2.1 | session@1.2.2 | Updated |
tinytest@1.2.2 | tinytest@1.3.1 | Updated |
oauth2@1.3.2 | oauth2@1.3.3 | Updated |
jparker:crypto-md5@0.1.1 | jparker:crypto-md5@0.1.1 | Same |
base64@1.0.12 | base64@1.0.13 | Updated |
dynamic-import@0.7.3 | dynamic-import@0.7.4 | Updated |
server-render@0.4.1 | server-render@0.4.2 | Updated |
service-configuration@1.3.1 | service-configuration@1.3.5 | Updated |
blaze-hot@1.1.1 | blaze-hot@2.0.0 | Updated |
hot-module-replacement@0.5.3 | hot-module-replacement@0.5.4 | Updated |
autoupdate@1.8.0 | autoupdate@2.0.0 | Updated |
meteortesting:mocha@2.1.0 | meteortesting:mocha@3.2.0 | Updated |
accounts-base@2.2.8 | accounts-base@3.0.4 | Updated |
accounts-facebook@1.3.3 | accounts-facebook@1.3.4 | Updated |
accounts-google@1.4.0 | accounts-google@1.4.1 | Updated |
accounts-password@2.3.4 | accounts-password@3.0.3 | Updated |
accounts-ui@1.4.2 | accounts-ui@1.4.3 | Updated |
tap:i18n@1.8.2 | tap:i18n@2.0.1 | Forked - dependencies changed |
percolate:momentum@0.7.2 | percolate:momentum@0.7.3 | Forked - dependencies changed |
aldeed:collection2@2.10.0 | aldeed:collection2@4.0.3 | Updated |
quave:accounts-apple@3.0.0 | quave:accounts-apple@4.0.0 | Updated |
aldeed:autoform@5.8.1 | aldeed:autoform@8.0.0-rc.4 | Required application changes |
aldeed:simple-schema@1.5.4 | aldeed:simple-schema@2.0.0 | Required extensive application changes |
softwarerero:accounts-t9n@2.6.0 | New | |
useraccounts:core@1.17.2 | New | |
lmieulet:meteor-coverage@5.0.0 | New | |
johanbrook:publication-collector@1.1.0 | johanbrook:publication-collector@1.1.0 | Forked - functional fix & dependencies changed |
gwendall:autoform-i18n@0.1.9_2 | gwendall:autoform-i18n@0.1.9_2 | Forked - dependencies changed |
bozhao:link-accounts@2.8.0 | bozhao:link-accounts@3.0.1 | Updated |
tmeasday:publish-counts@0.8.0 | compat:publish-counts@1.0.0 | Replaced |
splendido:accounts-emails-field@1.2.0 | splendido:accounts-emails-field@1.2.0 | Forked - needed asyncing |
cultofcoders:persistent-session@0.4.5 | itgenio:persistent-session@0.4.12 | Replaced, forked - dependencies changed |
rocketchat:oauth2-server@2.1.0 | rocketchat:oauth2-server@2.1.0 | Forked - needed asyncing and dependencies changed |
useraccounts:bootstrap@1.14.2 | useraccounts:bootstrap@1.15.3 | Updated |
reywood:bootstrap3-sass@3.3.5_1 | reywood:bootstrap3-sass@3.3.7_1 | Forked - dependencies changed |
communitypackages:autoform-bootstrap3@2.0.0 | New | |
markdown@1.0.12 | markdown@2.0.0 | Updated |
aldeed:autoform-bs-datepicker@1.2.0 | aldeed:autoform-bs-datepicker@2.0.0 | Forked - dependencies changed |
rajit:bootstrap3-datepicker@1.7.1_1 | rajit:bootstrap3-datepicker@1.7.1_1 | Same |
tsega:bootstrap3-datetimepicker@4.17.47 | tsega:bootstrap3-datetimepicker@4.17.47 | Same |
kadira:flow-router | ostrio:flow-router-extra@3.11.0-rc300.1 | Forked - needed coffeescript dependency changes and a functional fix |
useraccounts:flow-routing@2.12.1 | useraccounts:flow-routing-extra@1.1.0 | Replaced |
kadira:blaze-layout@2.3.0 | kadira:blaze-layout@2.3.0 | Forked - dependency change |
fourseven:scss@4.15.0 | fourseven:scss@4.17.0-rc.0 | Needed: brew install python@3.11 |
aldeed:template-extension@4.1.0 | aldeed:template-extension@4.1.0 | Forked - dependency changed |
arillo:flow-router-helpers | Removed | |
xolvio:cleaner | Removed | |
hwilson:stub-collections | Removed | |
simple:reactive-method | Removed | |
autoform-markdown | Removed | |
http | Removed - use fetch now | |
jkeuster:http | Removed - use fetch now | |
facebook-config-ui | Removed | |
facebook-oauth | Removed | |
quave:apple-oauth | Removed | |
google-config-ui | Removed | |
meteorhacks:picker | Removed - now using webapp | |
practicalmeteor:chai | Removed | |
practicalmeteor:sinon | Removed | |
meteorhacks:zones | Removed | |
dispatch:mocha | Removed | |
fetch | Removed - now using npm package |
Getting the app to work again
Once I got the build working I fired up the browser to see what happened. It didn’t work, of course. So I had a period of a few weeks of wholesale fixing and refactoring throughout the app, to the point where it was eventually basically able to operate, but still with many bugs and issues. Most of the work was in upgrading the client, for the reasons below, though the process also involved a lot of server-side fixing now that I was using the app for real (remember, the automated tests are security and sanity checks - they don’t achieve real-world semantic or scenario testing):
aldeed:simple-schema upgrade
I use schemas to check the data passed into my templates. I have a lot of templates - 732 of them - and most of them take some kind of data. So all my template simple-schemas had to be upgraded. At the same time I took the opportunity to better standardise this practice.
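For context, that pattern looks roughly like this; the template name, fields and the idea of validating in `onCreated` are illustrative of the approach rather than my exact code:

```js
import { Template } from 'meteor/templating';
import { SimpleSchema } from 'meteor/aldeed:simple-schema'; // import form varies by version

// Illustrative schema describing the data this template expects to receive
const matchCardDataSchema = new SimpleSchema({
  matchId: { type: String },
  showScores: { type: Boolean, optional: true },
});

Template.matchCard.onCreated(function () {
  // Fail fast if a caller passes bad or missing data into the template
  matchCardDataSchema.validate(Template.currentData() || {});
});
```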
async functions
Despite minimising the bubbling-up of async functions that the client has to deal with, there are still lots of them. Sometimes they are innocuous, as the new Blaze does a good job of resolving them, but other times action has to be taken. Here are two examples:
If I have `{{> nestedTemplate data=getAsyncHelperValue}}` in my Blaze template, poor old `nestedTemplate` receives a promise. No good. So I’d generally change this to:

```handlebars
{{#let asyncHelperValue=getAsyncHelperValue}}
  {{#if @resolved "asyncHelperValue"}}
    {{> nestedTemplate data=asyncHelperValue}}
  {{/if}}
{{/let}}
```
Another issue was the loss of reactivity within an async function, meaning my UI did not update as expected: you lose reactivity after the first `await` inside an async reactive function. Often I could resolve the issue just by ensuring that the reactivity that really mattered resulted from the first `await` in the function. But sometimes I needed reactivity based on more than one reactive source change, and then I had to resort to `Tracker.withComputation`, which restores reactivity after it is lost at an `await`. Truth be told, I thought this was going to be a major problem for my app, but it turns out there were relatively few instances where I had to use `Tracker.withComputation`.
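A minimal sketch of that pattern; the `Matches` collection, Session keys and `renderMatchSummary()` are made up for the example:

```js
import { Tracker } from 'meteor/tracker';
import { Session } from 'meteor/session';
import { Matches } from '/imports/collections'; // hypothetical collections module

Tracker.autorun(async (computation) => {
  // Reactive as normal: this runs before the first await
  const matchId = Session.get('selectedMatchId');

  // After this await, the computation is no longer the "current" computation...
  const match = await Matches.findOneAsync(matchId);

  // ...so wrap further reactive reads to re-register their dependencies
  const showScores = Tracker.withComputation(computation, () => Session.get('showScores'));

  renderMatchSummary(match, showScores); // hypothetical downstream rendering
});
```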
method calls
My method calls in the client would generally follow this pattern, using `mdg:validated-method`:

```js
myMethod.call(params, (error, result) => error ? showError(error) : process(result));
```

Often I do not need to process any result, so that becomes simply:

```js
myMethod.call(params, showError);
```

`showError()` is my ubiquitous “Oops, sorry, something went wrong” message to the user.

I wanted to keep this pattern because I have 801 method calls. But with all my methods now async I would have to use the `callAsync()` method on `mdg:validated-method`, and that would mean changing the pattern and making the calling functions async. So I implemented a new method on this class, called `callSync()`, which goes as follows:
```js
import { ValidatedMethod } from 'meteor/mdg:validated-method';
import { _ } from 'meteor/underscore';

ValidatedMethod.prototype.callSync = async function (parameter, callback) {
  // Allow callSync(callback) as well as callSync(parameter, callback)
  if (_.isFunction(parameter)) {
    callback = parameter;
    parameter = undefined;
  }
  try {
    const result = await this.callAsync(parameter);
    if (_.isFunction(callback)) callback(null, result);
  } catch (error) {
    if (_.isFunction(callback)) callback(error);
  }
};
```
Then it was just a matter of replacing all `method.call()` invocations with `method.callSync()` invocations.
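So the earlier call sites simply become:

```js
myMethod.callSync(params, (error, result) => error ? showError(error) : process(result));
myMethod.callSync(params, showError);
```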
Integration testing
Having reached the point where wholesale changes seemed to be a thing of the past, I now needed to touch every part of the functional app. It’s a big system: hundreds of different screens, functionalities and combinations. I installed and built a few tests with Cypress, but I soon realised that it would take me a decade to set up enough Cypress tests (with good seeded data) to make them meaningful. So I ditched that and went manual. Instead I used Dynalist to quickly and easily (it’s very good) build a deeply nested structure (across several separate files) that reflected full navigation paths throughout the app. This allowed me to at least record what needed to be tested and introduced the discipline to reach every part of the app. It was no good testing only the 10% of the app that was used 90% of the time, since every single bit was subject to change and risked not working. So I also had to test the 90% of the app that was used only 10% of the time, though the former did receive greater attention.
The app has many points of integration to the outside world and these too needed to be included in the test plan. This included:
- Private API - I expose my methods and publications through a REST interface which is used externally, mainly by AWS Lambda functions, as well as an AWS API implementation that exposes a limited public API to the system (a rough sketch of such an endpoint follows this list)
- Social logins and management - Facebook, Google & Apple
- Integrations with Stripe, Doorflow (a security system), EposNow (a PoS system) and Mailchimp
- OAuth2 flows - with external systems as well as the app's own OAuth2 service.
- Communications to networked devices (SNMP)
- Webhooks
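For a rough idea of what the private REST layer looks like, here is a sketch using Meteor 3's Express-based webapp; the route, `authenticateApiRequest()` and `runMethodAsUser()` are hypothetical stand-ins for my real token checking and method dispatch:

```js
import { WebApp } from 'meteor/webapp';

// Parse JSON bodies for the API routes (WebApp.express is the bundled Express module)
WebApp.handlers.use('/api', WebApp.express.json());

WebApp.handlers.post('/api/matches/:matchId/result', async (req, res) => {
  try {
    const userId = await authenticateApiRequest(req); // hypothetical auth/token check
    const result = await runMethodAsUser('matches.recordResult', userId, {
      matchId: req.params.matchId,
      ...req.body,
    });
    res.json(result);
  } catch (error) {
    res.status(400).json({ error: error.reason || error.message });
  }
});
```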
It was always satisfying to find a bug that was already there from before and fix it - I found quite a few! Who knows, the new version might have fewer bugs than the old one by the time it goes live!
User acceptance testing
This consists of two phases: Alpha and Beta. The fact that the database has not changed, and that testing so far gives us confidence that the new app won’t corrupt it, is a huge benefit: it allows us to test the new app in a live environment. In doing so we can compare and contrast screens and behaviour between the production site and the test site at all times. There have been minimal visible changes, so the screens should match precisely, making it easy to spot unexpected differences in behaviour. The test version has Hotjar installed, making it easy for users to provide feedback and for us to track behaviour.
Alpha testing
I deploy the app (to Galaxy) with an alpha subdomain so that my internal team can use the new app against the production database. This phase lasts 1 month where we can root out many of the remaining issues whilst using the system for real.
Beta testing
I deploy the app with a beta subdomain and widen its use by inviting friendly customers and users onto the new system. This phase may last one or two months, depending on how it goes, giving further opportunities to find bugs while the system is used by people in different roles.
Rollout and re-issue of iOS & Android apps
This is the final stage of the project. The production domain is upgraded to the new app at the same time as we deploy compatible new versions of the iOS and Android apps. It’s normally a bit of a battle to get the apps building, sadly.
Conclusion
At first the task seemed daunting, and even then I’m not sure I realised just how much work was involved. But as with any project, it was a matter of breaking it down, building lists of what had to be done in an orderly fashion, and then tracking progress on each task and watching it slowly reach 100% complete. Take the unit tests for the Meteor methods, for example: there are 519 methods and every one needed a test. Some days I’d get dozens done, other days only a few, but sooner or later I knew that 100% would arrive! We are now in a much better place technologically than before: this upgrade has pushed us onto the latest packages and components wherever possible, and we have established reasonably comprehensive test harnesses and procedures.
While it’s been a lot of work for me, I know it’s been many times more for others without whose work this could not have been achieved. So thanks as ever to the Meteor team and the community for making it possible!
I now look forward to many more years of happy Meteoring