Meteor Application Testing: a minimum viable design

Hey!

So as part of the guide, we’ve been doing a lot of investigation into testing, planning a testing article, and writing some tests for the todos app. Coincidentally, as we know from Sam’s post, Xolv.io are discontinuing work on Velocity, the official testing framework.

It’s a good point to step back and look at what the Velocity project was trying to achieve and what the framework itself can do to make it easier, especially given the upcoming ES2015 module support and how that’ll change things.

I’ve been trying to figure out what the smallest possible change to the framework that would allow proper application testing and unit testing of modules[1] would look like. Then we can look at whether to include that as part of 1.3 or soon after (hopefully!).

Here’s what I have:

Meteor Application Testing

It seems like package/app authors could do all they needed if we supplied three things:

  1. meteor test, which bundles the app as usual, except with the contents of any test/ directory included.
  2. Meteor.isTest, which is true in “test mode” (meteor test and meteor test-packages)
  3. A testOnly flag for packages[2], which means they are only included when running in “test mode”
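To make the proposal concrete, here’s a sketch of what a testOnly package’s package.js might look like. The testOnly flag is the proposal itself, not an existing option, and the package name is made up; Package.describe and Package.onUse are the existing package APIs.

```javascript
// Hypothetical package.js for a test-driver package. The `testOnly`
// flag is the proposed addition; everything else is the current API.
Package.describe({
  name: 'someorg:mocha-driver',   // made-up name for illustration
  version: '0.0.1',
  testOnly: true                  // proposed: only loaded under `meteor test`
});

Package.onUse(function (api) {
  api.use('ecmascript');
  api.addFiles('driver.js');      // reporter UI + server-side test runner
  api.export('describe');
  api.export('it');
});
```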

What this would allow

A test package developer could create a testOnly package mocha which, when included:

  • Exports the various symbols needed to define a mocha test.
  • Adds client code to show the output of a mocha test run, and calls a publication to get the result
  • Adds a publication which runs mocha when called

(This is basically what practicalmeteor:mocha does already; the only way to get it working right now is the undocumented --driver-package option, which only works with test-packages.)

An app developer could depend on mocha in the app, and add files to test/ folders which register tests via the standard mocha API. Then they could simply run meteor test to run them properly.

A “singleton” package (like flow router) could inspect the Meteor.isTest flag and not autostart in this mode.
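A sketch of that guard, with a flow-router-style stand-in. The Meteor object here is a mock for illustration, since Meteor.isTest is the proposed flag rather than an existing one.

```javascript
// Hypothetical sketch: a "singleton" package checking the proposed
// Meteor.isTest flag before autostarting. `Meteor` is a stand-in
// object, not the real global.
const Meteor = { isTest: true };

let started = false;
const RouterLike = {
  start() { started = true; }
};

// Startup hook: skip autostart in test mode, so tests can drive
// the router explicitly instead.
if (!Meteor.isTest) {
  RouterLike.start();
}
```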

Nice to haves, not needed for a first version

  • It’d be great if compiled build output was shared between meteor run, meteor test and meteor test-packages. But it isn’t shared today (between run and test-packages), so it’s hardly a deal breaker (i.e. this is still strictly better than what there was before).
  • There are some weird command-line options that could be rationalized rather than kept Velocity-specific:
    • meteor run --test (which runs the app as usual, but points phantom at it, talking to velocity)
    • meteor test-packages --velocity (which sort of does the same thing, but for test-packages)
  • Although a “run once” API for a test reporter package would be sensible, combined with some kind of phantom script to run client side tests, I don’t see why the reporter package can’t handle the job of printing test results and process.exit()-ing (rather than depending on Velocity). That’s much more flexible.
  • Rather than testOnly or debugOnly, it might be more sensible, and consistent with other build tools, to let users decide directly which environment a given package is used in (like devDependencies in package.json, for instance).
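On the “run once” reporter idea above, here’s a sketch of a reporter that prints results itself and produces its own exit code, rather than depending on Velocity. All names are made up for illustration; the actual process.exit() call is left as a comment so the sketch stays side-effect free.

```javascript
// Hypothetical reporter: print results and compute an exit code
// (0 = all passed, 1 = failures), so the package itself can end
// the run without Velocity's involvement.
function summarize(results) {
  const failures = results.filter((r) => !r.ok).length;
  for (const r of results) {
    console.log(`${r.ok ? 'ok' : 'FAIL'} - ${r.name}`);
  }
  console.log(`${results.length} tests, ${failures} failures`);
  return failures === 0 ? 0 : 1;
}

// In the real package, once the run completes:
//   process.exit(summarize(results));
```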

Prior Art

  • @Sanjo has done a lot of work in this direction here already. Potentially most of the work is already done.

  • Also, his other PR does some work towards sharing compilation between different run modes, although it seems to me that a lot of the challenges of Velocity revolved around this sticking point, and as I noted above, it seems a “nice to have”, not a hard requirement.

[1] Which are not packages, and thus don’t have a package.js to describe their tests.
[2] Or alternatively, modified .meteor/packages and package.js to allow adding packages in “test mode”. This would probably be strictly better, but would probably mean changing debugOnly in the same way, so might be more complicated.


So consider this an RFC – do you think this change, along with things that can happen in package-space, would allow application and module testing? Is there anything missing?

Keep in mind we are trying to make sure this happens, which means keeping it as small as possible! (Competing priorities and all that).

Looking forward to comments…


@tmeasday, this all sounds super interesting, thanks for posting this here.

Isn’t this already what Velocity does?

I like this idea a lot. The debugOnly flag has led to a lot of similar options cropping up, like a production-only flag, etc.

While I agree that it shouldn’t be a blocker, one of the biggest stumbling blocks around testing is the amount of time it takes to execute the tests against the system. I have watched many a testing setup go down the drain because the devs got tired of waiting and stopped writing tests. Shared compilation is a big win in my book.


It’s great to see movement on this already, these are all positive changes. I respect the minimum viable design approach, and I think with a couple more tweaks you can turn the above into a minimum delightful design!

Package testing is WAY cheaper than app testing in terms of memory and CPU, and you can see this in Velocity-based frameworks (even ones that use just one mirror). Running two Meteor apps at once means a user would likely run either in normal mode or in test mode, but not both. I agree it’s not a deal breaker, but consider this quick win to improve the user experience when running multiple Meteor apps for testing purposes:

When a user runs meteor test and then runs meteor, the latter would detect the existing running instance and start a node process on the main.js, but this time without the isTest flag set. Perhaps a solution like the Autoupdate Watcher would work here.

The result is a nicer developer experience, and this is one of the major pains-in-the-ass of meteor testing solved :smile:


@tmeasday - I think MDG needs at least a couple of people full-time on this.

Minimal Viable Design really depends on what the testing is for and what industry Meteor is catering to. Minimal Viable Design for healthcare apps is the FDA’s regulation 21CFR820.75, which specifically requires Verification Testing and Validation Testing.

As mentioned elsewhere, SpaceJam is a great solution for the Verification Testing requirement, since it supports the existing TinyTests. Add in Nightwatch, and we have the Validation Testing requirement.

We’re also quite happy with SpaceJam’s recently improved Mocha support, since Mocha recently went into Nightwatch’s core as well. It looks like Chai Expect and Mocha have become the default isomorphic API across Verification and Validation testing, as far as 21CFR820.75 goes.

There is also a part of 21CFR820.75 which talks about Specification documents. There may be some opportunity to use Cucumber’s BDD syntax there, and walk people from introductory real-time testing to more structured Specification documents, as required by regulatory agencies.

Yeah, but in order to achieve it right now, Velocity has to follow a mirroring approach, which is kind of brittle and has had a lot of problems (as I understand it). This proposal is basically looking at ways to do what Velocity does in a way that’s properly supported by core.

It seems like there’s general agreement on this. I really don’t have a good grip on how much harder it would be to make something like this work from within (i.e. it was clearly difficult for Velocity to do it from without).

It’s a fool’s errand, whether it’s supported in core or not. Bigger applications have test runs that take 6 to 12 hours or more. The solution isn’t some magic parallelization that will keep build times under 60 seconds, but smart integration with the code repository, so teams can have situational awareness of where builds and branches are in the testing process.

I’ve had similar experiences with testing setups going down the drain, so I agree that while it shouldn’t be an issue, it often is. No argument there. GitHub’s Status API is, by far, the best (free) solution currently available that provides a workable solution to manage the issue. Also, existing CI services offer far better parallelization than anything we’re going to cook up internally.

General agreement by whom? The two or three individuals who declared they were the only ones who were allowed to contribute to Velocity core? General agreement by a team who wants to write their own CI server?

In the broader Node community, it’s less ‘generally agreed’ upon, as exemplified by the Nightwatch community. This issue is about as ‘generally agreed’ upon as whether Blaze, Angular, or React is the best UI framework.

To be clear the main topic of this conversation isn’t parallelizing builds (which I list as a “nice to have”) but simply providing a way to write (unit/integration) tests against parts of your application that aren’t in packages. The primary purpose being to allow people to write code built off ES2015 modules and still write tests.

But to your point, even if a set of acceptance tests takes many hours to run, there’s a case to be made for running a smaller, faster suite of unit/integration tests as you develop. I think some people do this already with the meteor test-packages command. I don’t think it’d be a bad idea if you could do the same with meteor test, and the faster it is, the better, right?

I’m honestly not sure what you mean by “this issue” here. In the text that you quoted, I’m talking about making test build times faster by sharing built resources with the main application build process. I can’t really see why anyone wouldn’t agree that that’s a good idea; to me, the question is just how important it is.


Tom,
Thanks for this article!

As far as next steps go, it sounds like you are on the right track.

We spent about a month with Velocity, but found it was too slow to use for any kind of day-to-day testing during development. The build output sharing is interesting to us, and would mitigate one of the major drawbacks of testing a large app continuously/frequently during active development. We’ve seen multiple node instances running at 100%, spinning up fans every time we saved a file. We have a fairly big app, with something like 700 html + js files.

We’d like to see good support for testing hybrid apps (e.g. with Appium) in the official test framework sometime in the future. It should allow testing native Cordova plugin integrations as well as mobile-specific UI.
