It seems the Velocity meta-project gives us many testing sub-frameworks to choose from, but I, and most people I talk to, always hit bugs trying to actually use them.
What is the most stable framework currently?
Is there a way to reliably run, for example, super basic server unit tests?
Nothing esoteric.
The splintering is really a shame. I know there is some differentiation, but there’s also a ton of overlap. Maybe some kind of poll would help us focus efforts and get to one framework that actually works well?
Great question. We built ours on BrowserStack, which uses Selenium and PhantomJS and allows cross-browser and cross-OS testing, screenshot capture, etc.
Not exactly unit testing like you’ve described, but it’s what we use so we can ‘black box’ our server. If we can’t play as a guest, or we can’t log in, using the UI (meaning the tests are not allowed to do ‘reach in and twiddle’ testing - they can’t send ‘click’ events directly to DIVs by ID or whatever), then we want to know that. We’re also more concerned with browser size, older browsers, Mac vs. Windows issues, etc. than with pure ‘did the function work as expected’ testing.
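To give a flavor, here’s roughly what one of our black-box checks looks like, written against selenium-webdriver (the URL and selectors are simplified placeholders; in reality we point the builder at a BrowserStack hub):

```js
// Sketch of a black-box login check with selenium-webdriver. Everything goes
// through the visible UI -- no sending events to DIVs by internal ID.
var webdriver = require('selenium-webdriver');

var driver = new webdriver.Builder()
  .forBrowser('firefox') // or .usingServer(<BrowserStack hub URL>)
  .build();

driver.get('http://localhost:3000/'); // placeholder app URL
driver.findElement(webdriver.By.linkText('Sign in')).click();
driver.findElement(webdriver.By.name('email')).sendKeys('guest@example.com');
driver.findElement(webdriver.By.name('password')).sendKeys('secret');
driver.findElement(webdriver.By.css('button[type=submit]')).click();

// If the signed-in indicator never appears, real users can't log in either.
driver.wait(
  webdriver.until.elementLocated(webdriver.By.css('.user-name')),
  10000
);
driver.quit();
```

(The sequential style relies on selenium-webdriver’s built-in promise manager queuing the commands for you.)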
Good question, and I often felt frustrated at first too. A couple of months back I spent three days just trying to figure out how to get Cucumber to work on the CI server. While it was very painful, these sacrifices and investigations, and the resulting pull requests, are important for the ecosystem.
Personally, I focus on xolvio:cucumber as my first place of attack. I like the idea of end-to-end tests written as English-style specifications; it forces the developer to think like a user. I have also watched both Sam and Jonas at work, and I really respect their management of both Velocity and their respective frameworks. It takes both technical prowess and good documentation and examples to really get a testing culture to take off, imo. I think you will be really interested in the upcoming Meteor Club Podcast episode 3 in two weeks.
Also, clinical:nightwatch is no longer being maintained; it has moved on to become part of the StarryNight command-line tool.
Anyway, this stuff will continue to be a struggle for a while, and then it will become easier.
A few weeks ago I spent a couple of days trying to understand which framework to use.
For unit and integration tests you can go with mike:mocha or sanjo:jasmine. Both are OK and work well. I decided to go with Mocha as I like its syntax and features more.
Functional (end-to-end) testing was harder.
At first I tried to use mike:mocha together with xolvio:webdriver to run Selenium tests with Mocha. I liked how the tests work, but it took a lot of time to configure and it was super slow and unpredictable (at least for me).
clinical:nightwatch was even harder to configure, but I like its way of writing tests the most.
rsbatech:robotframework doesn’t seem very popular, so I decided not even to try it, especially after looking at the configuration instructions.
Finally I stayed with xolvio:cucumber. After the latest package upgrades everything works out of the box. Just do meteor add xolvio:cucumber, run your app, click on the example Cucumber tests, and you’re good to go. The first tests already run for you.
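If it helps, a step-definition file looks roughly like this (I’m using the `this.browser` WebdriverIO handle the way the generated sample does, as far as I remember - double-check against the example the package creates for you):

```js
// tests/cucumber/features/step_definitions/steps.js
// Minimal step-definition sketch for xolvio:cucumber. Treat the `this.browser`
// handle and the ROOT_URL assumption as illustrative, not authoritative.
var assert = require('assert');

module.exports = function () {

  this.Given(/^I am on the home page$/, function (callback) {
    this.browser
      .url(process.env.ROOT_URL) // assumption: ROOT_URL points at the test mirror
      .call(callback);
  });

  this.Then(/^I should see the title "([^"]*)"$/, function (expected, callback) {
    this.browser
      .getTitle(function (err, title) {
        assert.equal(title, expected);
      })
      .call(callback);
  });
};
```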
@maxhodges Nice poll indeed! You should also look at the number of installs on Atmosphere.
@dcsan The latest Velocity has added significant stability, specifically for mirrors. We’re eager to see what the bugs are with the latest release, so don’t be shy.
For server unit tests (true isolated unit testing), you only have one option and that’s Jasmine. If you’re just looking to run tests on the server, then you can use Mocha too.
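To make “server unit test” concrete, a spec is just plain Jasmine - something like this (`Posts` and `createPost` are made-up names standing in for your own code):

```js
// tests/jasmine/server/unit/createPostSpec.js
// A plain Jasmine server unit spec. `Posts` and `createPost` are hypothetical
// stand-ins for your own collection and function.
describe('createPost', function () {

  beforeEach(function () {
    spyOn(Posts, 'insert'); // isolate the unit: no real DB writes
  });

  it('inserts the post with its title', function () {
    createPost({ title: 'Hello' });

    expect(Posts.insert).toHaveBeenCalledWith(
      jasmine.objectContaining({ title: 'Hello' })
    );
  });
});
```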
FWIW, Jonas and I are building our own product (soon to be announced) using Cucumber and Jasmine, which means we’re seeing the bugs first-hand and fixing them as we go. Also, the book I’m writing uses Jasmine and Cucumber, the reason being that they support the 7 testing modes of Meteor.
If you really want stuff to get stable, just choose a framework and come chat with us so we can help. We can do a hangout with you and support you that way until we get all these kinks ironed out. We haven’t really got a QA team, so that’s where we need users to help. The more bugs are reported, and the more responsive framework authors are, the fewer bugs the frameworks will have. Help us help you!
All the configuration issues that clinical:nightwatch had were basically centered around trying to install via the package management system, integrating with Velocity, and refactoring the bash installation scripts into JavaScript. Now that Nightwatch uses a node-based CLI architecture, everything is a breeze.
Can’t speak for the other frameworks, but StarryNight is the result of some 400,000-odd tests that have been run on Nightwatch, via the clinical:framework and velocity:nightwatch-framework packages. It’s been used in 4 different businesses that I’ve been associated with, and I’ve probably processed over $250,000 worth of time/code through it over the past 12 months.
StarryNight currently supports all of the advanced Nightwatch features, such as custom commands and custom assertions. I’m currently working on adding all the new experimental features from 0.6 to the StarryNight samples and scaffolds, including chai syntax and server-side unit testing.
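To illustrate, a custom command is just a module you drop in your commands folder - something like this (the selectors are placeholders for whatever your login form actually uses):

```js
// tests/nightwatch/commands/signIn.js
// A typical Nightwatch custom command; `this` is the browser object, and the
// selectors here are placeholders.
exports.command = function (email, password) {
  this
    .waitForElementVisible('#login-email', 5000)
    .setValue('#login-email', email)
    .setValue('#login-password', password)
    .click('#login-buttons-password')
    .waitForElementVisible('.user-menu', 5000);
  return this; // enables chaining: browser.signIn(...).assert...
};
```

Tests can then simply call browser.signIn('user@example.com', 'secret').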
Now that configuration and launching have been taken care of via the CLI utility, I’m also moving on to Meteor-specific customizations, such as Meteor app scaffolding, Blaze component design, extracting tests from components, refactoring tests, etc.
tl;dr - I was as tired as everybody else of hitting bugs trying to use the Velocity meta-project, so I decided to spin Nightwatch off into a separate utility. If you like Nightwatch’s way of writing tests, you’ll find a lot to like in StarryNight.
Very interested in finding this out myself. My testing has fallen quite far behind recently because, tbh, I haven’t really found a framework to my liking. I’ve been using the Velocity application, but it seems very… unfinished.
For example, when I use meteor shell, it connects to the test mirror, yet when I use meteor mongo, it connects to my real application. On top of that, whenever I delete a file, all the tests freak out and I have to find the test in the Mongo database and manually remove it.
It could be really good in the future, but for the time being, seems like it’s very much in development and not ready for primetime.
@corvid Both of the big issues you mentioned are gone in 0.6.0, as @chompomonim noted. You can now connect to the main app with the shell normally, and you can even connect a debugger to the mirror. Each mirror gets its own separate log that you can tail.
The file-deletion issue has been resolved as well. The test proxy (the culprit) has been removed, and the mirrors now use the super-stable Meteor internals instead of our own file watching. All the frameworks have been updated to use the new Velocity, so I would recommend giving it another try and letting us know what you think.
To answer the original question a little better, the choices for Velocity frameworks today are:
(jasmine XOR mocha) AND optionally (cucumber XOR robotframework)
Stability-wise, the Velocity core is where the issues were, and the choice of framework comes down to the features you need/want.
Since this thread is about where Velocity is, I think it’s only fair to say where it’s going too. On the roadmap we have the following:
Combined test-coverage reporting - shows you exactly which lines of code have test coverage from the combination of your test frameworks. This previously worked in an earlier version, and we know what needs to be done to make it work again.
Parallel testing - you may have seen the video where we run e2e tests that normally take 80 seconds in around 4!
Stability and efficiency - ongoing, and we have the support of the MDG.
We are discovering a lot about testing Meteor applications in the process of writing Velocity and related frameworks. We’ve come a long way and are getting closer and closer to 1.0. We need both the good and bad experiences of real-world users to get us there, so we would love your help (and that of anyone else reading this): work with us to iron out your issues.
For me this last point is what it’s really about - rallying the support of both contributors and users to make Velocity and related frameworks more and more awesome.
The Meteor testing landscape is very fragmented … For my production app Prisma I split most functionality into separate packages and test each one using a combination of unit/integration tests with practicalmeteor:munit, which is based on Tinytest. It lacks some advanced features from Mocha but is very stable and integrates perfectly with Meteor (e.g. meteor test-packages ./). It also comes bundled with sinon and chai.
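A package spec with munit ends up looking roughly like this (describe/it layered over Tinytest, as I recall the wrapper working; `Todos` is just an example collection from the package under test):

```js
// tests/server/todos-spec.js, run via `meteor test-packages ./`
// munit layers describe/it over Tinytest and bundles chai's expect.
// `Todos` is a hypothetical collection standing in for your own code.
describe('Todos collection', function () {

  it('rejects inserts without a title', function () {
    expect(function () {
      Todos.insert({});
    }).to.throw();
  });

  it('accepts a valid todo', function () {
    var id = Todos.insert({ title: 'write tests' });
    expect(Todos.findOne(id).title).to.equal('write tests');
  });
});
```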
For acceptance tests I wrote my own package, space:pioneer, which integrates the absolutely awesome pioneerjs testing framework with Meteor. Pioneer is based on cucumber-js, provides some extra sugar for DOM handling, and consistently uses Promises, which makes your acceptance tests very clean.
What I achieved with space:pioneer is that you have access to your running Meteor server code in your acceptance tests (Cucumber step definitions). This means you can actually interact with your whole app: create users, fixtures, etc., and stub things like email sending, so that you can test whether your app behaves correctly.
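In practice a step definition can then look something like this (the `server.execute` helper name is illustrative only, not necessarily space:pioneer’s real API - check the README for the actual one):

```js
// Sketch of a cucumber-js step definition with server access. The
// `server.execute` helper is an assumed name used for illustration.
module.exports = function () {

  this.Given(/^a registered user "([^"]*)"$/, function (email) {
    // Runs inside the Meteor server; returning the Promise lets
    // cucumber-js wait until the fixture has been created.
    return server.execute(function (email) {
      Accounts.createUser({ email: email, password: 'secret' });
    }, email);
  });
};
```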
The only thing I haven’t set up for space:pioneer yet is proper CI support, so it can run on Travis or Codeship for example. But if you want to give it a try, just send me a message and we can make it happen.
I tried to use Velocity and even wanted to integrate space:pioneer with it, but I failed miserably because the whole mirror thing in Velocity made it horrible to debug stuff and build stable testing systems. At the end of the day I am also not sure what the benefit of the Velocity UI is; I see my Cucumber output in the console and that is sufficient.
Yup, sounds like you ran into all the same problems I did trying to integrate Nightwatch with Velocity.
I tried to tell the team that the mirrors were an architectural problem for some frameworks, and tried to commit code that would have helped with frameworks like space:pioneer (which is similar to the abandoned velocity:nightwatch-framework package). But my contributions were aggressively voted out because I was the minority voice in the group.
I’ve pivoted towards a similar solution: munit tests plus Nightwatch connecting to Selenium. I got Travis working by writing a command-line utility to launch Nightwatch for me. If you’d like, I could easily add Pioneer support to StarryNight for running on Travis.
(You’ve done a lot of nice things with space:pioneer, and I may use a few tricks from it if/when I ever need to drop Nightwatch in via the package system again.)
ps. The UI was originally built for a use case of white-paper-style reporting for regulatory approval. Along the way, it had everything-and-the-kitchen-sink added in. There was an attempt to start forking it and making streamlined versions that would be more useful, but those changes were voted out.
Yeah, sounds pretty similar. My problem is also that I want full control over which tests are running at any time, because most of the time I am just changing a very small part of the app and don’t want any other tests running during development.
Also, I find it strange to start with a testing UI when the real use case for most tests on this planet is to report either 0 or 1, plus output, at the end of a test run for the CI.
I think Pioneer support would be good in general, as it has some very smart concepts. But I think for my purposes I will just provide a simple config option to tell space:pioneer to kill the process after the tests have run. This should be enough to make it CI-compliant.
Pioneer is very cool indeed, especially the widget helpers.
I actually came across your framework previously, but I had no idea that you had tried to integrate it with Velocity. We all would have been more than happy to help, and of course still are.
Let me demystify mirrors a little. If you don’t request one, it has zero effect on your framework. There is nothing in Velocity that forces a framework to use a mirror.
Previously there were many issues with syncing, and 0.6.0 has really fixed those. What a mirror really is, is a test environment. As a framework author you can choose to run your tests against the main app or against a mirror. We’ve really come a long way in understanding how to create and manage mirrors. You can now attach debuggers to mirrors, tail their logs individually, and control what code is used in the mirror, like the fixtures you mentioned. It’s actually working incredibly well now, so I would love to work with you on integrating space:pioneer with Velocity.
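To make that concrete, a framework asks for a mirror with a Meteor method call roughly like this (simplified sketch; the option names are approximate, and the velocity:core README has the authoritative signature):

```js
// Simplified sketch of a framework requesting a mirror from velocity:core.
// Method name and options are approximate -- consult the velocity:core README.
Meteor.call('velocity/mirrors/request', {
  framework: 'my-framework', // your framework's registered name
  port: 5000                 // where the mirror should listen
}, function (error) {
  if (error) {
    console.error('Could not request mirror:', error);
  }
});

// Once the mirror is up, point your runner at it (e.g. http://localhost:5000)
// and the main app is left completely alone.
```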
Regarding the UI tests rerunning on changes, you are absolutely correct; it is pointless to rerun all of them since it takes way too long. This is feedback I’ve received from many people about xolvio:cucumber, so I’m actually working on a solution right now. By default, saving a file will not rerun Cucumber; instead, only scenarios that are tagged with @dev will be run. This allows the developer to focus on one scenario/feature at a time.
The other thing I’d like to work on is using test coverage to know which code files affect which tests. This will let us know exactly which tests to rerun. Now add parallel mirrors to this feature, and getting quick feedback becomes entirely feasible.
Regarding the HTML reporter, that is actually optional. The framework author can choose not to include it by default, like sanjo:jasmine does. There is a console reporter that you can use today if you prefer feedback the way most people on the planet do.
Thank you for sharing your framework and thoughts. Hearing this sort of feedback is invaluable. I’ll be keeping an eye on space:pioneer and would love to chat and bounce some ideas around. As a fellow Cucumber BDD’er, we should definitely talk! Sanjo and I just got mobile testing with Cordova and Appium all automagically wired up, and we’re also working on a startup that is all about Cucumber and BDD.
I personally found rsbatech:robotframework the most stable and functional (it also has a friendly syntax and screenshot support, which I liked), making it my test framework of choice. One downside is that it isn’t a broadly used framework, so examples can be a bit difficult to come by.
I did also like clinical:nightwatch (now deprecated by Abigail), because I wasn’t crazy about having my acceptance tests constantly re-running while I was actively coding. The auto re-run approach seems to work much better for unit tests than for acceptance tests. Once we can have dozens of Velocity mirrors this may not be an issue, but we are not there yet.
sanjo:jasmine and mike:mocha - both seemed to have a number of stability issues I ran into: Velocity mirrors randomly being lost (the console would be spammed by connection attempts), various unit tests failing to execute, stubs missing for third-party packages, PhantomJS support not seeming to work right, etc. There is some good work going on here; I just don’t think these are stable enough for daily use yet.
xolvio:cucumber - I tried this next, after the sanjo and mike packages, but as a matter of personal preference I was just not crazy about the Gherkin syntax. The package itself works well, and Sam has some good examples available.
Have you tried sanjo:jasmine 0.13.x? I just ask so that I know whether this is still your up-to-date experience with it. We have fixed most of the big annoyances with velocity:core 0.6.