Validation and integration testing cover a lot of this kind of work, and they're a big focus of the StarryNight utility.
Long story short, a reference app is needed to baseline the packages. (This is part of the reason we were dismayed when the Parties example was deprecated: it no longer provided a baseline for diff-testing D3 data visualizations during upgrades.)
Here’s an example of the CKCC reference app we’re currently piecing together to track the Clinical Meteor packages. You can see it has a build badge.
CKCC Reference App
And you can find the Travis build here:
And the build history here:
After you have a reference app with all the packages you’re keeping track of, it’s as simple as creating a new git branch of the app, upgrading it to the new Meteor release, and running the test suites. Here’s what using Travis and the GitHub Status API together looks like when you do an upgrade of core packages and files:
Pull Request #79 - Merge Develop into Master
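The branch-and-upgrade workflow boils down to a handful of commands. This is just a sketch: the branch name and release number are illustrative, and `starrynight run-tests` assumes you have the StarryNight CLI installed locally.

```sh
# create an upgrade branch off of develop
git checkout -b upgrade-meteor-release

# upgrade the app to the new Meteor release
meteor update --release 1.2

# run the validation suite locally before pushing
starrynight run-tests

# push the branch; Travis picks up the build and reports
# pass/fail back to GitHub via the Status API
git push origin upgrade-meteor-release
```

Once the branch goes green, the pull request merge button lights up, and you know the upgrade didn’t break anything the suite covers.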
The .travis.yml file we’re using is a little more complicated than the example file that travis-ci-meteor-packages and its associated documentation reference. Ours isn’t just launching TinyTest to run unit/verification tests. It’s also doing things like fetching the packages, scanning them for validation test commands, building the validation script from them, and then running the validation suite.
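For reference, here’s a stripped-down sketch of what a Meteor-oriented .travis.yml can look like. The `script` entries are placeholders: our actual file assembles the validation script dynamically from the packages, as described above, and the `run-validation-suite.sh` path is hypothetical.

```yaml
language: node_js
node_js:
  - "0.10"
before_install:
  # install Meteor into the CI environment
  - curl https://install.meteor.com | /bin/sh
script:
  # unit/verification tests on a per-package basis
  - spacejam test-packages ./packages/*
  # hypothetical step: run the assembled validation suite
  - ./scripts/run-validation-suite.sh
```

The important idea is the two-stage `script` section: fast per-package verification first, then the slower app-level validation run.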
Currently we have only 500 of 5000 tests implemented for this particular application. I’ve worked on a few other Meteor apps where we had many thousands of tests like this, but this is the first one we’ve been able to open source. We have the 2000 tests from MedBookCRFs, 2500 from ClinicalTrials, and the 500 that are currently implemented. But we should have 5k tests running across a few dozen packages by the end of the year.
The advantage of validation tests, of course, is that they can walk across technologies when you swap them out and replace them. Rewrote your front end in React? The validation tests should still work, letting you do the rewrite with TDD practices. The same goes for upgrading the core APIs: validation tests walk across underlying package upgrades and changes, assuming the upgrades don’t actually break anything.
Anyhow, we started working on a suite of reference apps, clinical demos, and an associated page of packages for Clinical Meteor. As you can see, it’s very much a work in progress. This stuff takes a long time to assemble and put together. But we know where we’re going, and how to get there.
But the general idea is twofold:
a) have tests running on a per-package basis with TinyTest or SpaceJam.
b) have reference apps running integration/validation tests across multiple packages and in a real app environment using Nightwatch/StarryNight or some other Selenium launcher.
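Concretely, the two tiers boil down to two kinds of commands (the package name and CLI flags here are illustrative, not the exact ones from our apps):

```sh
# (a) package-level verification tests via SpaceJam/TinyTest,
# run against an individual package directory
spacejam test-packages ./packages/my-clinical-package

# (b) app-level validation tests via StarryNight/Nightwatch,
# run from the root of the reference app
starrynight run-tests --framework nightwatch
```

Tier (a) runs fast and catches regressions inside a package; tier (b) exercises the packages together in a real app, through the browser.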
We’re currently looking closely at practicalmeteor:mocha for our package verification tests, and StarryNight/Nightwatch for our reference-app validation tests. We’ve also set up a GitHub organization at http://github.com/clinical-meteor to keep packages that are under Travis integration.
But yeah. When I was an Oracle admin with NY Presbyterian Healthcare System, working with the Cerner EMR, we had banks and banks of these tests that we ran on a daily, weekly, monthly, and seasonal basis. There’s a testing theorem in systems engineering that you can only test one change at a time; it’s a Six Sigma thing. So we would queue up months of changes at a time. Pharmacy upgrading their robot? Scheduled for integration testing three weeks from now. Lab upgrading their barcode scanners? The following week. Patient portal going online? The week after that. And we would run those banks of tests against our reference apps again, and again, and again, and again…