Testing a Meteor Package with Velocity / Tinytest

Hi, currently I test a package with meteor test-packages ./ and it works, but I’d like to have the test results as output in the console (to integrate with Travis CI).

I tried to do it with the new --velocity flag, but it doesn’t seem to work, or maybe I missed something.

Is it possible to have the Tinytest output directly in the console?

meteor test-packages --velocity ./
[[[[[ Tests ]]]]]                             

=> Started proxy.                             
=> Started MongoDB.                           
=> Started your app.                          

=> App running at: http://localhost:3000/
failed to subscribe to VelocityTestReports subscription
failed to subscribe to VelocityAggregateReports subscription
failed to subscribe to VelocityMirrors subscription { [Error: Subscription not found [404]]
error: 404,
reason: 'Subscription not found',
details: undefined,
message: 'Subscription not found [404]',
errorType: 'Meteor.Error' }

Velocity currently has no support for Tinytest.
You can use https://github.com/practicalmeteor/spacejam to run your Tinytest tests from the CLI.
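
The basic workflow is just a global npm install and a single command, something like the following (check spacejam’s README for the exact, current flags, since they may have changed):

# install the CLI once, then run a package's Tinytest suite headlessly from the package root
npm install -g spacejam
spacejam test-packages ./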

Thanks @Sanjo, I’ll try it, because I wanted to integrate Travis CI with meteor-babel and then move it over to Autopublish, if @splendido and @dandv want it :smile:

Sure @grigio!
you’ll be the first tester for the new autopublish workflow! :wink:

Another answer is that @arunoda made a boilerplate .travis.yml file that pulls down a few scripts and basically does what spacejam does, which is run your Tinytest suites in PhantomJS:

https://meteorhacks.com/travis-ci-support-for-meteor-packages.html

Also, since I have some of the Velocity members in this thread: what is Velocity’s approach to testing packages if it doesn’t support Tinytest? Also, what’s the scoop - it looked like Tinytest was supported for a while (https://github.com/numtel/velocity-tinytest), but then the Velocity APIs changed and this was abandoned?

I don’t want to be ‘one of those’ OSS pundits, but personally I feel it’s more important to nail the package-testing story (including in CI) than the app-code story. It’s also strange to start a testing framework and not do something with the package testing framework that already existed at the time of its birth. If Velocity doesn’t support package testing, I respect that it has its own focus, but if anyone could give some back story, that’d be great.

Back story is basically that folks were focusing on areas other than package testing, since TinyTest was already available.

I’ve actually spent the better part of this entire weekend researching the current state of getting TinyTest onto Travis. My results so far:

  • Velocity currently doesn’t support TinyTest (or mUnit or SpaceJam, as far as I’m aware).

  • Arunoda’s .travis.yml file is from the ~0.6.3 days and relies on Meteorite, so I highly doubt it still works.

  • Spacejam seems to be a working solution, and the best one available, but I’ve found it to be sluggish and inconsistent in spinning up PhantomJS and running the tests. About half the time it just completely hangs, so I’ve been doing further research.

  • Spacejam relies on the test-in-console package that ships with Meteor. It uses the undocumented meteor test-packages --driver-package test-in-console syntax, which sets things up for a PhantomJS run. A runner is provided as part of the package for PhantomJS to consume. See the run.sh file for an example of how it’s all put together (and the sketch after this list).

  • The problem is, it seems to reference the default Meteor installation and is intended to be run from a Meteor checkout rather than from an app. Which is the reason SpaceJam got written in the first place, I think… so test-in-console could be used with packages in apps.

  • However, in issue #1884 it appears that @mitar may have gotten test-in-console working with Travis. Hopefully mitar can chime in?
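
To make that run.sh wiring a bit more concrete, the overall shape is roughly the following (my own sketch, not the actual script; the port, the sleep-based wait, and the way runner.js receives the URL are all assumptions on my part):

# start the test app with the test-in-console driver in the background;
# --once just keeps it from sitting in "waiting for file change" after a build error
meteor test-packages --once --driver-package test-in-console ./ &
METEOR_PID=$!

# crude wait for the app to come up (a real script would poll the port instead)
sleep 30

# drive the page with phantomjs; the runner script loads the test page, relays the
# browser-console output back to the terminal, and exits non-zero on failures
phantomjs runner.js http://localhost:3000/
STATUS=$?

kill $METEOR_PID
exit $STATUS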

I’m on the verge of forking SpaceJam, converting it from coffeescript to javascript, and seeing if I can work on some of the stability issues.

We are running tests successfully on Travis CI.

Arunoda’s script was updated for newer Meteor releases some time ago as well: https://github.com/arunoda/travis-ci-meteor-packages

We have also started using CircleCI and have a test runner for it as well:

This is the content of circle.yml:

checkout:
  post:
    - git submodule sync
    - git submodule update --init --recursive
machine:
  node:
    version: 0.10.28
  pre:
    - curl https://install.meteor.com | /bin/sh
dependencies:
  override:
    - npm install selenium-webdriver
    - meteor list
test:
  override:
    - tests/test-all.sh

So I think things are pretty good for testing. The only thing I have not yet figured out is frontend testing, but unit testing is working. We also made a nice wrapper around Tinytest which makes things even easier:

You can now have tests which run a bit on the server, a bit on the client, and then check the results. It really makes things easy for common tasks where you have to exercise both sides.

Also, to run tests in the console, you simply run:

meteor test-packages --once --driver-package 'test-in-console' package-name

Then you open the tests in the browser and observe the web console. You will see the output there. So the output does not go to the Node.js console, but to the web console.

Aaah! You’re fantastic, mitar! Thank you much! :slight_smile:

I didn’t expect that test-in-console would go to the browser console. Interesting bit of functionality, and glad to have it in the toolkit… just not exactly what I was expecting.

This really helps, and gives me that second (and third) example to triangulate with. I’m going to see what I can do about adding all of this into my current work. Super helpful!

Digging into this a bit this morning, and I think I’m understanding that runner.js file a bit better now. If I’m reading this correctly, the intent isn’t to have phantomjs actually running the tests. It’s to have phantomjs read the results in the web browser console, and report them back to the server command line! Clever!

Yes, thanks @mitar, for that information. I’ve been under the impression that --once means to run the tests and provide an exit code. That’s the answer I’ve been given when asking how to test in a CI environment, which now turns out to be incorrect. What is the purpose of the --once flag - just to bypass the reactive rerunning?

In any event, it does seem that the scripts in https://github.com/tozd/meteor-test-runner actually provide what I’m looking for: for every local package in a ./packages/ directory, I can get a process that exits non-zero when any of them fail.

@awatson1978 I’ll look to this thread to see what else you can do with this - I’m glad the fog is starting to clear!

This has also led me to view the Tinytest source code for the first time: https://github.com/meteor/meteor/blob/devel/packages/tinytest/tinytest.js

I see no reason this file couldn’t be forked to create a new runner that is purely independent of any DOM, and therefore has no need for PhantomJS parsing, browser consoles, etc. Get rid of the machinery - simplify!

the intent isn’t to have phantomjs actually running the tests. It’s to have phantomjs read the results in the web browser console, and report them back to the server command line! Clever!

No, it both runs the tests and reports them to the console. On Travis CI you are mostly stuck with PhantomJS, so you use it both to run the tests and to watch the browser console for the results, which you then output to the terminal.

What is the purpose of the --once flag - just to bypass reactive rerunning?

Just that if there is an error compiling something, it fails immediately and does not get stuck in “wait for file change”. So this is to catch errors which prevent Meteor from even starting the tests.

and for every local package in a ./packages/ directory, I can get a process that exits non-zero when any of those fail.

Yes, by default tests are run for all packages in parallel. But this really does not work well in practice, because there are often too many interactions between tests and you end up with random test failures. This is why we force the tests to run serially.
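
A serial run can be as simple as a loop like the one below (a rough sketch of the idea, not the actual meteor-test-runner scripts; run-package-tests.sh is a hypothetical helper that starts meteor test-packages for one package, drives it with PhantomJS, and exits non-zero on failures):

#!/bin/bash
# test each local package one at a time; set -e aborts the run on the first failure
set -e
for dir in packages/*/; do
  echo "Testing ${dir}"
  ./run-package-tests.sh "${dir}"
done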

Okay, I just extended StarryNight to include running TinyTest packages and reporting back to the command console. All you need to do is update to 0.2.0 and run the following command:

starrynight run-framework tinytest-ci
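
If you don’t have StarryNight installed yet, it’s distributed through npm, so installing or updating should look something like this (the global-install invocation is my assumption; check the StarryNight README for the canonical command):

npm install -g starrynight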

There’s also experimental support for SpaceJam (and three or four other frameworks), but it was stalling, and it was just easier to port/rewrite the core functionality than to try to patch up the SpaceJam utility.

I’m fairly sure it should work with pretty much any continuous integration server. I’m not 100% sure about all the exit-code edge cases, however, so if it stalls on the CI server after running the script, let me know and I’ll put in a hotfix.

But yeah, this all fell into place rather nicely. Compared to the pipeline necessary to get Selenium working, this was pretty straightforward.

Regarding getting rid of PhantomJS altogether - personally, I’m inclined to simplify by replacing PhantomJS with the Nightwatch/Firefox infrastructure, because Nightwatch would then be able to manage end-to-end, unit, and package testing. I’d really like to have a single runner that can handle all of that.

If the goal is simply to do package testing as lightweight as possible and get the results to the server console, then yeah… a DDP connection might be much more convenient than using PhantomJS.

Wow. I am excited to look at this - already told a few of my Chicago Meteor peeps… Thanks, Abigail @awatson1978!

You can now also test packages with Velocity using sanjo:jasmine or mike:mocha-package. These also use meteor test-packages under the hood. You can find more info about that here: https://github.com/Sanjo/meteor-jasmine#testing-an-application.