End-to-end testing

I have a challenge in writing complex tests, and I'm looking for suggestions on how to approach it.

Scenario

I have a workflow process that needs to be tested, and I want to verify the functioning of the workflow engine. The test will consist of opening and completing a sequence of workflow steps, checking the results along the way.

Problem

The standard Meteor/Mocha testing lets me write a describe() block with it() blocks inside it. My it() blocks need to be executed in order, as each one depends on the previous one, but Mocha treats each it() block as independent, and I can't work out how to prevent that. I tried nesting more describe() blocks inside, and also tried using the done() callback, without success.
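To illustrate, the shape I'm after is something like this. It's only a sketch: startWorkflow() and completeStep() stand in for my engine's actual calls, and 'order-approval' is a made-up workflow name.

```js
import assert from 'assert';

describe('order approval workflow', function () {
  let wf; // state shared between the steps below

  it('opens the workflow', async function () {
    wf = await startWorkflow('order-approval');
    assert.equal(wf.currentStep, 'submit');
  });

  it('completes the submit step', async function () {
    // only meaningful if the previous it() succeeded
    wf = await completeStep(wf, { amount: 100 });
    assert.equal(wf.currentStep, 'review');
  });
});
```

Running Mocha with --bail would at least stop the later steps after the first failure, but the dependence between the blocks still feels fragile.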

I can make a large it() block with all of the steps in it, but in the event of failure, it’s hard to tell exactly where the problem is.
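One partial mitigation might be a small step() helper inside the big it(), so a failure at least reports which step it happened in. Again a sketch, with the same stand-in engine calls:

```js
// Tag each stage so an assertion failure names the step it occurred in.
async function step(name, fn) {
  try {
    await fn();
  } catch (err) {
    err.message = `[step: ${name}] ${err.message}`;
    throw err;
  }
}

it('runs the whole workflow end to end', async function () {
  let wf;
  await step('open', async () => { wf = await startWorkflow('order-approval'); });
  await step('submit', async () => { wf = await completeStep(wf, { amount: 100 }); });
  await step('review', async () => { wf = await completeStep(wf, { approved: true }); });
});
```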

I was wondering if switching to Jest might help (I’d like to be able to use Jest for testing instead of Mocha - has anyone looked into how hard it would be to provide a package for that?)

Are there other tools I should use? I can write these complex tests in regular old JS, but I’d like to sit within a testing framework, as it does all the reporting and integrates with CI nicely.

I know that Mocha recommends writing atomic, repeatable tests with no external dependencies, so I understand that I am breaking that rule, but I need to do this level of testing somehow. Any suggestions are welcome.


Jest is still a struggle with Meteor, and I'd love to have the same kind of wrapper package that Mocha has.

Regarding the engine: is it BPMN by any chance?

This is also the challenge we are facing with Cypress. But the good thing about Cypress is the screenshot taken at the time of the error. The screenshot, plus the test command where the test fails, helps point us to the error and what to fix.

Yes, we use Cypress as well, and I agree the screenshots are a huge help in seeing the problem and being able to fix it.

The testing I am doing is strictly server-based, so Cypress won't really help there.

One approach I have thought of is to do this:

Write a script to do the testing, i.e. drive the workflow through from step to step, and record the results in a JSON object. The Mocha tests could then look through the JSON data for indications of success. This way I can do detailed checking after the fact, not constrained by the ceremony that Mocha imposes. The report can give me blow-by-blow information about what happened and where it went wrong.
It also acts as a record of what transpired (something like what Cypress does with the screenshots).
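In rough code, that two-phase idea might look like this; runStep() and the step list are made up for illustration:

```js
import assert from 'assert';

const steps = ['open', 'submit', 'review'];

// Phase 1: drive the workflow outside of Mocha, recording every outcome.
async function runWorkflow() {
  const results = [];
  let state = null;
  for (const name of steps) {
    try {
      state = await runStep(name, state); // stand-in for the real engine call
      results.push({ step: name, ok: true });
    } catch (err) {
      results.push({ step: name, ok: false, error: err.message });
      break; // later steps depend on this one, so stop recording here
    }
  }
  return results;
}

// Phase 2: Mocha only inspects the recorded results, one it() per step,
// so the report pinpoints the failing step without driving the engine itself.
describe('workflow run (recorded)', function () {
  let results;

  before(async function () {
    results = await runWorkflow();
  });

  for (const name of steps) {
    it(`step: ${name}`, function () {
      const r = results.find((x) => x.step === name);
      assert.ok(r && r.ok, r ? r.error : 'step never ran');
    });
  }
});
```

Persisting the results array to a file would also give me the blow-by-blow record.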

I would also love a Jest wrapper package.

The engine isn't BPMN-based, but I am using the Camunda Modeler to draw the diagrams; these are for the business to be able to see and understand what we are building.

I am also generating documentation as Markdown and putting that into Gatsby for viewing. I could consider fitting in with the BPMN standards. It sounds like you might have an interest in that; could we have a chat about it?

With BPMN you have the advantage that most engines can take a snapshot of the process state and return to that state later, which lets you test processes in great detail.
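As a sketch of what that enables, with an entirely hypothetical engine client (snapshot/restore naming differs per engine):

```js
import assert from 'assert';

// engine is a hypothetical BPMN engine client: start(), snapshot(), restore(),
// completeTask() and currentActivity() are illustrative names, not a real API.
it('tests both branches of a decision from one snapshot', async function () {
  const id = await engine.start('order-approval');
  const snap = await engine.snapshot(id); // capture state before the decision

  await engine.completeTask(id, 'review', { approved: false });
  assert.equal(await engine.currentActivity(id), 'rejected');

  await engine.restore(snap); // rewind and take the other branch
  await engine.completeTask(id, 'review', { approved: true });
  assert.equal(await engine.currentActivity(id), 'fulfilment');
});
```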