I’ve been fighting for the last 5 hours to find a way to test my publications. In particular, I wanted to test what happens with more complex publications, where a cursor is being observed.
There are a couple of undocumented features that have to come together in order to do this:
makeTestConnection() and _afterUpdateCallbacks.
var cursor = Meteor.server.publish_handlers.pages.apply({userId: '123'});
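To make that one-liner concrete, here's a minimal sketch in plain Node (no Meteor runtime). `pagesPublication` and the hand-rolled context are hypothetical stand-ins for a real publish handler and for the context object Meteor would supply; the point is just that `apply({userId: ...})` lets you drive the handler directly:

```javascript
// Hypothetical stand-in for a handler registered with Meteor.publish('pages', ...).
function pagesPublication() {
  // A real handler would return or observe a cursor; here we emulate the
  // calls a publication makes against its context (`this`).
  if (!this.userId) {
    this.ready();
    return;
  }
  this.added('pages', 'page-1', { title: 'Home', owner: this.userId });
  this.ready();
}

// A hand-rolled publication context that records what the handler sends.
function makeContext(userId) {
  return {
    userId: userId,
    docs: [],
    isReady: false,
    added: function (collection, id, fields) {
      this.docs.push({ collection: collection, id: id, fields: fields });
    },
    ready: function () { this.isReady = true; }
  };
}

// Exercise the handler the same way the snippet above does:
//   Meteor.server.publish_handlers.pages.apply({userId: '123'})
var ctx = makeContext('123');
pagesPublication.apply(ctx);

console.log(ctx.isReady);              // true
console.log(ctx.docs[0].fields.title); // Home
```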
I would love to see some syntax sugar built into Meteor when in testing mode, to isolate publications, methods, and other SUTs (systems under test) hiding in nooks and crannies.
Diving into testing publications is on my list of things to do. There was a thread a while ago about using Gagarin (which I am using for normal testing) for this. In particular, this post:
We’ve been having good success with Gagarin and Arunoda’s hackpad recipe. We used it in the clinical:collaborations package to extract a rather gnarly graph-walking algorithm from an application security layer into its own package. (The algorithm filters publications based on a graph walk of records in the collaboration collection.) Here are some sample test files that extend Arunoda’s hackpad:
It’s not perfect. Teardown of collections between mocha tests doesn’t seem to work (although it does between test script files). before() and beforeEach() have some occasional timing hiccups. Etc.
We’ve been using version 0.4.11, since it’s the stable version for Meteor@1.1.0.3. The newer version that came out in December may have a bunch of the issues resolved, but it requires a bunch of packages that break our build, so we’ve been slow to upgrade.
The package itself is still pretty beta; I’m hoping someone in the community will become enthused with it, polish it up, and publish it to Atmosphere as a package.
But I don’t think the tool needs to do anything for you to make such techniques possible; it’s really just a matter of the right packages existing in the community for this stuff!
PublicationCollector looks like a great way to do it. Do you also have a way to block execution until the publication’s results are available to the subscription/collector, or is this somehow not needed anymore?
Is it easy to write tests where you insert data after you’ve subscribed, wait for it to propagate, and observe if it arrived on the client-side?
> But I don’t think the tool needs to do anything for you to make such techniques possible; it’s really just a matter of the right packages existing in the community for this stuff!
In my experience, it’s much harder to write unit tests if the code to be tested isn’t written with testing in mind. For example, it’s much easier to test a class that accepts its external APIs as parameters. Or, in this specific case, it’s a lot harder to write tests for publications without well-documented hooks to control execution of DDP. Another example: registering methods globally necessitates passing random IDs from tests in order to write self-contained unit tests.
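As a sketch of the accepts-its-APIs-as-parameters point, here's a minimal example in plain JavaScript. `makePagesPublication` and `FakeCollection` are invented names for illustration, not any real Meteor API: the publication logic takes its collection as a parameter, so a test can hand it a stand-in instead of reaching for a global:

```javascript
// Hypothetical factory: the publication logic receives its collection
// dependency as a parameter instead of closing over a global.
function makePagesPublication(Pages) {
  return function pagesPublication(ownerId) {
    return Pages.find({ owner: ownerId });
  };
}

// In a test, pass in a stand-in exposing the same find() surface.
var FakeCollection = {
  find: function (selector) {
    return { selector: selector, fetch: function () { return []; } };
  }
};

var publish = makePagesPublication(FakeCollection);
var cursor = publish('user-123');

console.log(cursor.selector.owner); // user-123
```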
In my mind, there are two levels for testing publications. Unit and integration.
The code I mentioned above isolates just the publication unit, which is the right level to test any logic inside the publication code without any noise from the dependencies.
When collecting/subscribing, it becomes an integration test which is useful for testing configuration and the combinatorial effect of other units like allow/deny rules. It looks like the Collector approach is aimed at this use-case.
Could you say a little about the difference between using the collector approach vs a normal Meteor subscription inside an integration test, or even the oortcloud/node-ddp-client from a tool that runs outside the Meteor app?
Seems reasonable. Once we walk the reference apps from 1.1.0.3 to 1.2 next month, we can start layering in use of publication-collector. It doesn’t particularly impact the kinds of tests we’re trying to run; so whichever. We can adopt that. It will simplify some of the tests.
I’d be prone to renaming it to something that indicates it’s slightly downstream of the publication event, but not yet on the client. Maybe something along the lines of:
I should note that I’m consulting with a couple of businesses who have independently reached the same solution of using client-side observeChanges hooks to make AJAX calls to TrueVault servers for Protected Health Information (PHI), and then merging data in. Some use null collections; others don’t.
So, while publication-collector may simplify things and provide a convenient way to test publications on the server, it doesn’t test the full publication/subscription round trip. And people definitely use the entire round trip, so there’s still the need to have server.execute() and client.execute() so that people can specify complex testing scenarios.
Also, to make things symmetrical, if we’re going to have a publications-outbound-endpoint package, it would make sense to have a subscriptions-inbound-endpoint package.
I guess in my mind it made sense that it was collecting the output of the publication. The other thing I was thinking this code base could be used for would be as a building block for something like Sashko’s simple:rest (after all, this code is taken from that package, with tweaks). I guess another line of thought for naming could be “running the publication in a sandbox”. Not quite sure what name comes out of that, though.
No, definitely not. This is for unit testing publications. I suspect this is all people need in the majority of cases, though.
Yeah, I think the correct name here would be a “publication mock”. This is something I’ve thought about too but haven’t yet tried to implement; we have toyed with a “stub-collections” package (which is misnamed; it should be called “collection mocks”).
If you are using stub-collections, it makes sense on the client to write test code that looks like:
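(The original code sample didn’t survive; what follows is a sketch of the idea, in plain JavaScript with hypothetical `mockPublish`/`mockSubscribe` helpers rather than any real stub-collections API. Subscribing runs the publish handler immediately on the client side, and its `added()` calls land in a plain store.)

```javascript
// Hypothetical publication-mock helpers; names are illustrative only.
var handlers = {};

function mockPublish(name, handler) {
  handlers[name] = handler;
}

// "Subscribing" runs the handler synchronously; this.added() pushes docs
// into a plain array standing in for a stubbed client collection.
function mockSubscribe(name /*, args... */) {
  var args = Array.prototype.slice.call(arguments, 1);
  var store = [];
  var ctx = {
    added: function (collection, id, fields) {
      store.push({ _id: id, fields: fields });
    },
    ready: function () {}
  };
  handlers[name].apply(ctx, args);
  return {
    docs: store,
    // When the sub stops, the documents are removed again.
    stop: function () { store.length = 0; }
  };
}

mockPublish('my-publication', function (limit) {
  for (var i = 0; i < limit; i++) {
    this.added('items', 'id-' + i, { index: i });
  }
  this.ready();
});

var sub = mockSubscribe('my-publication', 2);
console.log(sub.docs.length); // 2
sub.stop();
console.log(sub.docs.length); // 0
```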
Basically what this would do is run the function on the client-side whenever a test subscribes to my-publication. Calls to this.added() would end up in the relevant collection (and this would work nicely because the collections are mocked out too, so there’d be no interaction with livedata), and when the sub stops, the documents would be removed again.
There’d be some complexity if you had overlapping documents from different subscriptions (you’d have to implement a merge box), but you could probably get away with it in most cases.
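A minimal sketch of the reference-counting idea behind a merge box (plain JavaScript, illustrative names only, assuming whole-document overlap rather than Meteor’s field-level merging): a document only disappears when the last subscription publishing it stops.

```javascript
// Toy merge box: counts how many subscriptions publish each document id.
function MergeBox() {
  this.refs = {};  // id -> subscription count
  this.docs = {};  // id -> fields
}
MergeBox.prototype.added = function (id, fields) {
  this.refs[id] = (this.refs[id] || 0) + 1;
  this.docs[id] = fields; // last writer wins; a real merge box merges field-wise
};
MergeBox.prototype.removed = function (id) {
  if (--this.refs[id] === 0) {
    delete this.refs[id];
    delete this.docs[id];
  }
};

var box = new MergeBox();
box.added('doc-1', { a: 1 });      // from subscription A
box.added('doc-1', { a: 1 });      // same doc from subscription B
box.removed('doc-1');              // A stops; doc survives
console.log('doc-1' in box.docs);  // true
box.removed('doc-1');              // B stops; doc goes away
console.log('doc-1' in box.docs);  // false
```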
Mmmm. I think I only agree with that statement on a technicality. Sort of a spirit-of-the-law vs the letter-of-the-law. Yes, devs may only want to test publications with unit tests the majority of the time; but it’s not necessarily the case that devs only want to test publications. There ought to also be a discussion about testing subscriptions, then. But whatevs.
How about virtual-publication instead? That would be much better, following decades of testing culture from NASA and the FDA, and doesn’t promote language that can be construed as harassing.