Velocity and the state of testing in Meteor


#1

Hi,

Right now there is one thing stopping me from treating Meteor as a serious option for production-grade development, rather than just PoC/hackathon fodder: testing.

Meteor’s testing story seems like an afterthought. Velocity is supposedly the official answer, and whilst it is a lot better than nothing, it isn’t exactly great either. I think this has more to do with Meteor than with Velocity, and I am grateful to the people who built Velocity, but using it has been a frustrating experience.

There are two fundamental problems: slow feedback, and magic.

Slow feedback in unit testing is the death of developer productivity. I recently wrote around 65 tests, mostly repetitions of a few patterns, to make sure some moderately complicated allow/deny/subscribe logic for three collections was robust and wouldn’t leak data to unintended users.

It took two whole days.

When Velocity/Jasmine boots up I have three instances of Meteor running, and my two-year-old i7 Mac struggles. The app isn’t trivial, and boot-up time is a minute or so (mostly due to using NPM and webpack). Each change to a test takes 30–60 seconds to show up. That is too slow to hold a developer’s attention, and for experimental “why isn’t this test passing, dammit” rapid tweaks it is so frustrating I nearly gave up.

It is also confusing. Some things log in the main browser console, where my app apparently has to run (client integration tests). Some things log in the main terminal where I booted up Meteor. Some log in the mirror’s log file, which I have to tail. And I don’t even know where I would attach a breakpoint debugger to step through the tests or any server-side code they call.

The approach to fixture management also seems quite primitive. Creating test-only methods that insecurely set up test data is both scary (what if I accidentally ship them to production?) and clunky.

All of that means a lot of time figuring out what’s even going on rather than writing tests.

The second problem is magic. So much weird stuff happens when an integration test runs that it is very hard to make sense of.

I have been using the recently released third-party webpack integration, and for reasons that neither I nor its author can understand, you end up having to save each file twice for the test runner to pick up changes, so you get false positives and negatives all over the place. It’s like playing Cluedo.

This likely isn’t Velocity’s fault, but the fact that it has to do so much magic means it is very hard to figure out what happens when things go wrong. I also have flapping tests that fail intermittently and I just ignore them now, which is a super dangerous attitude to have.

I know it is really hard to write a robust, simple testing framework for a batteries-included app platform (Google “plone.app.testing”…). However, as a community we need to find a way to get tests to be simple, fast and easy to understand.

Things I’d look at:

  • the default mode should be to run in a separate process that doesn’t boot up all of Meteor, and mocks the relevant bits of the framework

  • don’t maintain any state between test runs

  • write fixtures once and in the same place as the test

  • expect people to write 80% unit tests (fast feedback) and 20% end to end tests (comprehensive integration)

  • only start up the bits of testing machinery required to run the immediate set of tests, no more

  • make it really easy to attach a step debugger in tests

Martin