It works like this: When a DDP client connects to the Meteor server, that connection is represented by a Session instance. All sessions (representing all DDP clients) are stored in Meteor.server.sessions. Each session carries a unique id, and this id is also attached to the invocation context (the familiar this) of each Method call. So we can use this attached id to trace back to the client session who called the Method:
const allConnectedClientSessions = Meteor.server.sessions;
Meteor.methods({
  myMethod() {
    /*
      Note: This code is for illustrative purposes only. When using
      pub-sub-lite's enhanced Methods you just write your mutations
      normally, without needing to know any of these details.
    */
    // `this` is the invocation context mentioned above
    const clientSessionId = this.connection.id;
    const clientSession = allConnectedClientSessions.get(clientSessionId);
    // Each session carries a `send` method, allowing the server to send
    // messages to that particular client
    clientSession.send({
      msg: 'changed',
      collection: 'books',
      id: 'NzrGsj9ooJnQwbDfZ',
      fields: { numberOfBooksSold: 99 },
    });
  },
});
The changed message above is structured according to the DDP specification. When receiving this standardised message, the client will automatically update Minimongo to reflect the change.
You can find the relevant code in pub-sub-lite here.
First, this sounds like a great performance boost for Meteor apps that scale. I’ve read through this thread and feel like it has terrific promise. The quoted part above is where I’m unsure what would change using this versus true pub/sub.
Given a Meteor app with a social component where many connected clients see the same thing (like a project management app), I’m imagining a scenario of 10 connected clients where one of them triggers a change that propagates through pub-sub-lite. If the attached id is traced back to the client session that called the Method, does that mean the one client that triggered the change will see it, while the other 9 that didn’t call the Method won’t?
Sorry if I’m missing things but just want to understand how this would work in an app with many connected clients all tuned into (essentially) one data source.
Thanks @alawi. I’ve thought about the possibility of sending mutation update messages to all clients possessing the affected document(s). However, that would require the server to maintain a copy of each client’s data, in order to determine which documents (and which document fields) should be sent to which clients. In fact this is the basis of how Meteor pub/sub works, and this approach significantly reduces Meteor pub/sub’s scaling potential (the more clients subscribed, the more client data snapshots the server needs to keep).
For pub-sub-lite, I have an idea that can partially solve the issue: if the client calling a Method is logged in, we can find all client sessions belonging to the same user and push update messages to all of them. The implementation for this will be very straightforward, but its usefulness will obviously be quite limited, as it covers neither 1) anonymous clients nor 2) different user accounts possessing the same document(s).
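Roughly, that idea could look like the sketch below. It relies on the same internal session APIs as the earlier example (Meteor.server.sessions, session.userId, session.send, which are not public API), and Books, the Method name and the message payload are just placeholders:

import { Meteor } from 'meteor/meteor';

Meteor.methods({
  'books.recordSale'(bookId, numberOfBooksSold) {
    Books.update(bookId, { $set: { numberOfBooksSold } });

    // Anonymous clients: nothing we can do with this approach
    if (!this.userId) return;

    // Push a DDP `changed` message to every session belonging to the
    // same logged-in user, not just the caller's session
    Meteor.server.sessions.forEach(session => {
      if (session.userId === this.userId) {
        session.send({
          msg: 'changed',
          collection: 'books',
          id: bookId,
          fields: { numberOfBooksSold },
        });
      }
    });
  },
});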
I’ll invest time this weekend to study the codebase of the awesome redis-oplog package to see if there’re lessons and insights we can apply to pub-sub-lite.
Yeah, that’s the Meteor server’s MergeBox, and it’s the reason why Meteor pub/sub requires higher RAM usage.
Why not broadcast only the changes to all clients and let each client manage its own data? If a client detects that its data is out of sync (e.g. by missing a broadcast message), it can fetch everything again from the server (sketched below).
You could perhaps leverage meteor-streamer; I think Rocket.Chat uses that technique to scale to thousands of sessions.
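Something along these lines, for example. It reuses the internal session.send API from the example above; books.fetchAll and Books are hypothetical names, the resync trigger is left abstract, and Books._collection is an internal Minimongo handle:

import { Meteor } from 'meteor/meteor';

// Server: broadcast a change to every connected client instead of only
// the Method caller (no per-client data snapshots kept)
function broadcastChange(collection, id, fields) {
  Meteor.server.sessions.forEach(session => {
    session.send({ msg: 'changed', collection, id, fields });
  });
}

// Client: if we suspect we missed a broadcast (e.g. after a reconnect),
// refetch everything through a Method and merge it into Minimongo directly
function resyncBooks() {
  Meteor.call('books.fetchAll', (error, allBooks) => {
    if (error || !allBooks) return;
    allBooks.forEach(({ _id, ...fields }) => {
      Books._collection.update(_id, { $set: fields }, { upsert: true });
    });
  });
}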
@npvn Hi, thanks for your amazing package. I see that it relies on MongoDB Change Streams.
I have already worked with Change Streams and built two experimental packages with them:
These packages were never finished, because the Change Streams consumed lots of memory and opening 100+ of them led to serious performance issues.
What about your package? Have you tested it with at least 1,000+ connections? What are the memory consumption and DB performance like?
I’d be happy to learn that Change Streams are fast nowadays, because then I could finally finish the packages above.
Thanks.
Yes, by disregarding the need to send exactly the data clients need down to the fields level, the server should no longer have to keep client data snapshots. We may end up “wasting” bandwidth (by sending more than what clients really need), but bandwidth is usually not a bottleneck factor (compared to server resources). It seems that @mitar has explored this path with control-mergebox. I’ll see if there are things we can learn from that package.
The performance potential of meteor-streamer really impressed me, and is definitely something we can consider instead of using Meteor’s built-in DDP sender.
But the most important thing we’re missing here is a way to track which clients are “interested” in which document(s), so we know who to send update messages to when changes happen. I have an idea: up until now we haven’t fully exploited Meteor.publishLite and Meteor.subscribeLite. They have been used merely as an API for converting existing pub/sub to Methods, and in fact my original intention for them was just to target legacy code. But I have a paradigm shift now: we can make them first-class citizens and leverage the arguments passed to them to construct a registry of clients and their documents. This will be similar to the way Meteor pub/sub records this information, except that we aim for a looser data transfer mechanism and don’t keep client data snapshots.
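Very roughly, the registry idea could look like the sketch below. All names here are illustrative rather than pub-sub-lite’s actual API, and the matching logic is deliberately simplified:

import { Meteor } from 'meteor/meteor';

// collectionName -> Map(sessionId -> selector)
const interestRegistry = new Map();

// Called when a client runs Meteor.subscribeLite: remember which session is
// interested in which collection/selector, without storing any data itself
function registerInterest(collectionName, sessionId, selector = {}) {
  if (!interestRegistry.has(collectionName)) {
    interestRegistry.set(collectionName, new Map());
  }
  interestRegistry.get(collectionName).set(sessionId, selector);
}

// Called when a mutation happens: send the change to every interested session
function notifyInterested(collectionName, docId, fields) {
  const interested = interestRegistry.get(collectionName);
  if (!interested) return;
  interested.forEach((selector, sessionId) => {
    const session = Meteor.server.sessions.get(sessionId);
    if (!session) return;
    // A coarse selector check could go here; unlike MergeBox we don't track
    // what each client already has, so we may over-send a little
    session.send({ msg: 'changed', collection: collectionName, id: docId, fields });
  });
}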
Many thanks for your ideas and suggestions @alawi. They really helped point me in the right direction!
@kschingiz Thanks for sharing your projects with me. I’ve taken a look and they’re really interesting experiments! Opening a large number of Change Streams is indeed a performance concern, because each stream will open a new connection to MongoDB. You will potentially face two kinds of bottleneck:
The number of streams is larger than the MongoDB driver’s poolSize (the maximum number of connections a MongoDB client can make). When you exceed this limit, subsequent requests will need to wait, which significantly slows down response time. Unfortunately, the default poolSize in the MongoDB Node.js driver is only 5 (a value kept for “legacy reasons”).
Even when you have enough poolSize, too many connections will put high load on both the Meteor server and the MongoDB server.
pub-sub-lite solves the first challenge by setting poolSize to 100 by default (a number inspired by the default value in the Python MongoDB driver) and allows package users to customize this value. pub-sub-lite also tries to avoid the second challenge by limiting the number of streams opened for each collection to at most one. If a Method invocation needs a stream for a particular collection, it will check whether an existing stream has already been opened for that collection (by previous Methods) before attempting to open a new one. Streams are closed as soon as they’re no longer used by any Method invocation.
This approach ultimately keeps the theoretical maximum number of connections equal to the number of collections. In practice this number will be even lower, because the number of collections being mutated at the same time is usually small.
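In simplified form, the stream reuse looks roughly like the sketch below (the names are illustrative and the real implementation in pub-sub-lite differs in detail):

import { MongoInternals } from 'meteor/mongo';

// collectionName -> { stream, refCount }
const openStreams = new Map();

// A Method invocation acquires at most one shared stream per collection
function acquireStream(collectionName) {
  let entry = openStreams.get(collectionName);
  if (!entry) {
    const rawCollection = MongoInternals.defaultRemoteCollectionDriver()
      .mongo.db.collection(collectionName);
    entry = { stream: rawCollection.watch(), refCount: 0 };
    openStreams.set(collectionName, entry);
  }
  entry.refCount += 1;
  return entry.stream;
}

// When the invocation finishes, release the stream; close it as soon as
// no other invocation is using it
function releaseStream(collectionName) {
  const entry = openStreams.get(collectionName);
  if (!entry) return;
  entry.refCount -= 1;
  if (entry.refCount === 0) {
    entry.stream.close();
    openStreams.delete(collectionName);
  }
}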
You can find the relevant code in pub-sub-lite here.
But keep in mind that this needs to work for server clusters; Rocket.Chat uses the DB to sync the event emitters, while others use Redis or another third-party pub/sub.
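For example, with Redis the cross-instance sync could look roughly like this (using ioredis; the channel name, payload shape and REDIS_URL env var are assumptions, not something pub-sub-lite provides today):

import { Meteor } from 'meteor/meteor';
import Redis from 'ioredis';

const pub = new Redis(process.env.REDIS_URL);
const sub = new Redis(process.env.REDIS_URL);

// Each Meteor instance publishes its local mutations to a shared channel...
function publishMutation(collection, id, fields) {
  pub.publish('mutations', JSON.stringify({ collection, id, fields }));
}

// ...and every instance (including the publisher) relays messages from that
// channel to its own connected sessions
sub.subscribe('mutations');
sub.on('message', (channel, message) => {
  const { collection, id, fields } = JSON.parse(message);
  Meteor.server.sessions.forEach(session => {
    session.send({ msg: 'changed', collection, id, fields });
  });
});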
My pleasure! and thanks again for the package, I’ll surely be using it when refactoring
This sounds really awesome. I’m a little confused about how this differs from regular reactivity, since you seem to be using MongoDB Change Streams. How would I decide between using regular pub/sub and pub-sub-lite?
I think I can answer that and npvn can keep me honest.
It updates the caller client only, not the other clients. The package (as of now) is meant to be used to quickly refactor unnecessary pub/sub in order to manage performance bottlenecks, if any.
Thanks @alawi. Yes at the moment the package can be considered as a convenient API for quickly converting legacy pub/sub to Methods, as well as providing some convenient features for making Methods more flexible (caching, Minimongo merging, mutation update messages).
@hexsprite For now MongoDB Change Streams are used only to observe changes made during a server-side Method invocation, and all streams are closed as soon as the mutation finishes. So it’s not a complete replacement for pub/sub observers (yet). In order to determine whether the package is right for you, I think you can consider the following points:
Is multi-client reactivity essential for your use case? If so, choose traditional pub/sub.
If multi-client reactivity is not essential, you may consider pub-sub-lite if either (or both) of the following apply:
Do you have existing pubs/subs that you consider no longer necessary and want to convert to Methods (e.g. to improve performance)? If so, pub-sub-lite’s Meteor.publishLite and Meteor.subscribeLite helpers can streamline that process (instead of having to do a lot of custom refactoring).
Are you interested in the convenient features of pub-sub-lite’s Enhanced Methods? More specifically, it can cache Method calls (and their result data), automatically merge data into Minimongo, and emit mutation update messages from server to the caller of the Method.
For future development I am exploring the possibility of having pub-sub-lite emit changes to all clients interested in the changed data set, thus aiming to functionally replace traditional pub/sub with a more lightweight alternative. There are challenges we will need to overcome, though.
@npvn Hi, I really like this package. Does it support deleting records? I’ve tried to build a really simple example application that uses ValidatedMethods to create, update, and remove items. When I remove an item, the Minimongo collection still has the item until I refresh the page.
I’ve put in some console.logs that indicate the collection is correctly updating on the client and the server, but when I look at the collection the document is still there.
Is there something that I’m missing?
Mahalo,
Cam
Technically yes, since pub-sub-lite doesn’t alter the original pub/sub flow. That said if you’re using redis-oplog then the performance gain from it may alleviate the need to convert existing pub/sub into Methods, which is the main purpose of the package at the moment.
In the future I’ll look into further improving pub-sub-lite so that it can replicate the full functionality of Meteor pub/sub, perhaps by utilizing MongoDB Change Streams. That is something I hope I’ll have time to work on soon.