Jasmine, Karma, Enzyme for a React/Redux test-focused workflow

TL;DR: I really like my workflow of Jasmine, Karma, and Enzyme integration/unit tests for React/Redux testing, and you should consider it too for projects longer than a weekend hack.

Over the last month, I’ve been researching unit tests as my project grows and becomes harder to maintain and debug. Before that, I’d just been doing manual testing and relying on my ‘awesome’ javascript skillz to not break the code :stuck_out_tongue:

This has been my third foray into unit testing over the last decade, after two failed attempts. I’ve been a believer in end-to-end happy-path automated regression tests, just not unit tests.

In the last week, I’ve written 1k lines of test code, increasing my codebase by 50% from 2k to 3k LoC.

The setup:

  • Karma test runner with webpack allows for writing tests in ES6.
  • Enzyme gives me shallow(<Component {...props}/>) rendering, which means I can test each component in isolation.
  • Jasmine takes care of everything Mocha, Chai, and Sinon would give me.
  • Everything lives in the tests folder and is watched by Karma on file change.
  • React pure components are very easy to test. All inputs go through the props, there are no other dependencies, and I use Enzyme to inspect the HTML output.
  • Redux means I don’t use any React component state, so I can test close to 100% of my business logic with very little use of test doubles: spies, mocks, stubs, etc.

What it gives me:

  • I currently have 86 tests over 2 pages of my app. They run in 0.15 seconds from the moment I save a file change.
  • Most changes I make are picked up by a test breakage.
  • These breakages let me quickly find the broken code, and from the name of the failing test I almost always know what broke and why. This has really sped up my development cycle: I used to go by whatever threw a javascript exception and then start debugging in the Chrome dev console, which was very time consuming.
  • It’s really helped me clean up my functions, components, and modules and minimise the API surface between modules. Whenever I write a test, it becomes obvious when parameters, or even whole functions, aren’t necessary, so it’s helped me clean up my design a lot!
  • Fuzzy happy feelings when I see my tests pass, and the confidence to go crazy on my refactoring. This was a major reason for writing the tests in the first place: not the fuzzy feelings, the ability to refactor with confidence.
  • For every bug I find, I first write a test for it, prove it fails, then implement the fix and see the test pass. Then I do manual testing afterwards just for self-gratification, as it almost always passes.

I work on an ’80s game remake.
My tests can be found in the tests directory.
Example of an in-depth stateless component test.
Example of a game logic test.

I haven’t started testing async actions from Meteor.calls or server-side interactions, that will be another stepping stone to TTD (Total Test Domination)!

My method for testing isn’t pure unit testing. It started off that way, but I found it too artificial to test my reducers, actions, and mapStateToProps/mapDispatchToProps separately. So what I do is set up a store, dispatch actions until I get the state I need for the component in question, and then test the component. The examples above show this. In practice it’s a bit more like integration testing, but I’d argue that for a React/Redux app, your store/state/component is one integrated unit and should be tested as such.

To make sure that the actions and reducers work as expected, I test these two together without components so any logic issues in the actions/reducers will be picked up by both component testing and action/reducer testing.

I’d be happy to hear how others are testing their code, and whether people prefer end-to-end tests. If anyone actually has the time to look at mine, any suggestions for improvement would be well received! After all, I’ve only been doing this for a month.


Did you ever check out http://wallabyjs.com/ ? It’s paid, but it’s really really really cool. And also shows (real time) test coverage.


Another vote for wallaby, such a massive productivity boost!

@gadicc @vjau are you both using wallabyjs, or have you used it alongside another test runner like Karma? I’m tempted to try it out, but two things are holding me back right now:

  1. The main selling point seems to be speed (running only changed tests) and pinpointing which tests fail. I only write one test at a time, so it is very clear which ones fail. I also name my suites/tests such that it’s pretty clear what’s going on (e.g. suite: <PackComponent>, test: it should not allow packs to be dropped into itself). And currently, with about 90 tests, they take 0.15 secs to run. So neither point is particularly enticing. Even at 900 tests, if it’s a linear relationship, it’ll be 1.5 secs to run, so that’s not a problem for me.

  2. The configuration required to make it work with my current flow. karma.conf.js does all the webpack, Chrome, and Jasmine configuration, and I spent a day getting this working. I’m not seeing enough benefit to be worth putting that time into changing tooling again, unless you guys have experience with other test runners and this one is a lot better?

tbh the price did put me off, but thinking about it more, I realise my time is much more scarce.
With my current workflow, I just put in 4 more tests today, refactored a whole bunch of code and added a few more features, so I agree that unit testing is awesome, but what’s the diff between wallaby and karma?

@mordrax, it’s not really a replacement for Karma (or Mocha, etc.), rather a complement. Those test runners run all your tests. Wallaby just runs the tests for the current file (and even function, I think) you’re working on, in real time, with visual feedback. It means that if you break a test with a single keystroke, you’ll immediately see a red square appear on that line (with a link in your editor to the exact test it breaks).

This saves a lot of time, because the usual cycle is: finish up my current piece of work (sometimes well after the exact change that broke something), save, hope I remember to run the tests or look in the test window if it’s auto-watching, see a test is broken, find the test, find the code, figure out what’s going on, fix it, rerun the tests to check, and repeat. The jump you described (in time and pain) from hunting thrown errors with the debugger to more instant feedback with tests reminded me a lot of going from regular tests to wallaby tests :>

You hit the nail on the head in that our wasted time is more expensive than the purchase cost.


Oh, I guess I should admit that I’m currently working on two newish projects which don’t have proper testing yet. But I’m involved in (or have been involved in) projects using both Karma and Mocha, and whenever I had the opportunity to add wallaby into the mix, it was a big step up.

I’ve never had a problem getting it working; the setup is very similar to Karma in terms of file specs, etc. I was using something like the config from mantra-sample-blog-app, and there are more advanced examples in https://github.com/xolvio/automated-testing-best-practices (with webpack, for Meteor). There are some glowing words for wallaby there too.


Damn, I really wanted to get some functionality done tomorrow… if you reckon it’s that good, I might just have to check it out on my toilet breaks!

Thanks for the xolvio link again; it’s in my pocket list of things to read, ’cept I haven’t read any of them for a while… :stuck_out_tongue:

I wish there was a testing best-practices guide like this for React/Redux. I’ve come to realise that using Redux basically cuts me off from the majority of the resources available for Meteor and the content produced on this forum.


A couple of other brilliant things about wallaby-type plugins:

If the code is failing, it shows you why right next to the code, on the line that is actually failing, along with the values of the variables.

And you can instantly see if you’ve missed any parts of your code in your tests. Say you have a switch statement: you’ll see green dots next to the lines that are tested, but if you forgot to test one of the cases, there will be a grey dot against those lines.

A video helps here: https://youtu.be/uUmF16R9JNs


Haha, definitely worth checking out, but maybe finish what you’re working on first :> But yeah, it’s pretty amazing. @tomRedox mentioned a few other key selling points, and yes, definitely worth watching that video!

Well, I’m using React and Redux… I agree good patterns are scarce. I’m figuring out a few myself and hope to contribute back once things have solidified over a few projects. It’s a fair amount of extra work, but I love the pattern and the dev tools with time travel, etc.

Nice post! I read it in one breath. I absolutely feel the same sentiments when I see passing tests and have the confidence to refactor! But I’m new to Meteor, and I came here to see how people are testing Redux and async code with Chimp. Been stuck for hours! My tests pass with chimp --watch, but once I remove --watch, it opens a browser, automatically closes it, and throws a bunch of errors… :frowning:

@jzarzoso need to know more about your use-case. Can you give more info? Perhaps in a new thread?

You can also try xolv.io/community for answers.

Hi @sam, I actually solved it using chimp. I was tied to chimp because I was using Meteor Chef; I just needed to add @watch to the tests. My use case is that I needed to test my reducers and action creators. I feel that chimp is too much for this use case, so I was hoping I could do it with practical:mocha. I’ll go back to chimp once I’m ready to do a full app test.
