Cross Container Ephemeral Server Vars - Is It Possible?

I have an admin app and a client app that share the same Mongo DB and Collection definitions (via symlinked files). The client app users (1000+ simultaneous) all need an individual status at the same time, and each status has to be heavily computed from several Collections in the database.

To do this, my admin app, at the right time, computes a set of statuses for all 1000 client app users and stores it in an ephemeral server-var dictionary. The set of statuses is totally ephemeral, refreshes often, and does not need to be persisted to the DB; in fact, I'm trying to avoid that to reduce the DB load from all the client app users.
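
For context, here's roughly what I mean by the ephemeral dictionary and the Meteor Method that reads from it, on the admin app server. This is just a sketch: statusByUserId, recomputeAllStatuses, and computeStatusFor are made-up names, and the real computation spans several Collections.

// Admin app server only. Lives in this container's memory, never written to Mongo.
const statusByUserId = {};

// Hypothetical trigger: recompute every user's status at the right time.
function recomputeAllStatuses(userIds) {
  userIds.forEach((userId) => {
    // the heavy computation across several Collections happens in computeStatusFor
    statusByUserId[userId] = computeStatusFor(userId);
  });
}

Meteor.methods({
  // Returns the precomputed short text status without touching Mongo.
  'getUserStatus'(userId) {
    check(userId, String);
    return statusByUserId[userId];
  }
});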

So my client app users all retrieve their status using a Meteor Method. However, the client app cannot just call Meteor.call('getUserStatus', userId, ...) because the ephemeral dictionary that the Meteor Method needs to access is on a different server. No problem, just do this:

// Open a DDP connection to the admin app and call its method remotely
var adminAppConnection = DDP.connect('https://myadminapp.abc');
adminAppConnection.call('getUserStatus', userId, ...)

And this works as expected. The problem I think I'm going to have is in production, where my admin app has multiple containers on Galaxy: if I just connect via the URL of my admin app, isn't it likely that it will connect to a container that is not the one computing the ephemeral dictionary my client app needs? I'm assuming ephemeral values like this are not shared between containers; they only exist on the container where the admin app user caused them to be generated.

Is there any solution for this? The only things I’ve thought of so far are:

  • I could publish the ephemeral dictionary of statuses using this.added(...) and subscribe to it in the client app via a client-only collection (which is kind of cool; see the sketch after this list), but even with a limit: 1 to get just a single status (based on the client app userId), every client would hit the publication that evaluates and creates the dictionary. The idea behind the pattern above is to call a Meteor Method and get back a simple text status that's already been computed on the (admin) server, so the client doesn't have to hit Mongo, download a publication, or do any work.

  • Persist the status dictionary to an actual database Collection and query it via a Meteor Method (to avoid pushing all statuses to the client), but that means every client (1000+) hits the database at the same time to get its status, which I'm trying to avoid (though this may be the most reasonable non-third-party solution).

  • Get the URL of the correct admin app container (there must be a way, since Galaxy can show you the URL of a given container and connect you to your app using it - e.g. https://myadminapp.abc/?_g_container_=<id>&_g_debug_=true) and make sure the client app connects using it. This could be problematic, though, as I assume that URL changes whenever your containers are swapped out for whatever reason. So there would have to be some kind of updating whenever the URL changes, and that may not work in my use-case: the admin app user can sign out and the client app still needs to get statuses from the dictionary, so there may be no way to know the correct, current URL of the admin app. I do have a parent document that ties the client users and the admin app to one "grouping" and I could save the admin app URL in that document, but the URL could still change after the admin user signs out. This just seems hacky, and it also assumes my admin app would handle 1000+ simultaneous DDP.connect calls well.

  • Use something like redis-oplog's Vent tool, where you pub-sub using Redis. Perhaps I can publish the dictionary, or even each user's status, and have clients access it via Vent. This may also be a good solution.
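
For reference, the publication idea in the first bullet would look roughly like this. It's only a sketch: the publication and collection names are made up, statusByUserId is the ephemeral dictionary from the earlier sketch, and I'm assuming the client-only collection has to be bound to the admin app's DDP connection since the publication lives on a different app.

// Admin app server: push a single precomputed status into a client-only collection.
Meteor.publish('userStatus', function (userId) {
  check(userId, String);
  if (statusByUserId[userId] !== undefined) {
    this.added('clientStatuses', userId, { status: statusByUserId[userId] });
  }
  this.ready();
});

// Client app, client side: a named collection with no server counterpart,
// bound to the admin app connection, so it exists purely in minimongo
// and is filled by the publication above.
const ClientStatuses = new Mongo.Collection('clientStatuses', { connection: adminAppConnection });
adminAppConnection.subscribe('userStatus', Meteor.userId());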

Just an update on this: I've found two workarounds/solutions.

The first one is something I didn't think of above: if I'm willing to eat up memory and CPU on my client app servers, I can create the dictionary in memory on the client app. I didn't consider this because the trigger to generate it has to come from the admin app, but you can add a server-only Collection.find(...).observe() in the client app code that listens for the appropriate flag update. Then each client server creates the dictionary of statuses, and it's retrievable by all the client app users. It's still just a dictionary of user statuses in memory, so it doesn't hit Mongo, and a Meteor Method easily returns the short text status from the dictionary based on a client app user id.

In my admin app, I was using matb33-collection-hooks to "listen" for the admin user to trigger the computing of the status dictionary at the appropriate time. Even though that collection-hook code is shared by both the admin app and the client app (via symbolic links of my Collection files), this was a problem because collection-hooks only run on the container they're triggered from. They're not observers, and they have to be triggered by a user-driven Mongo call like an insert, update, or remove. So when my admin user triggered the collection hook, the dictionary was not being built on the client servers. I had to remove the collection-hook and use a Collection observer in the client app server code instead. This works nicely, and the master dictionary of dictionaries is replicated across all client app servers because they all run the same Collection observer.
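
A rough sketch of the observer approach, with made-up collection, field, and helper names (Groups stands in for the shared parent/"grouping" collection, and the flag field is an assumption about how the admin app signals the rebuild):

// Client app server only: rebuild the in-memory dictionary whenever the admin
// app flips a flag on the shared grouping document.
const statusByUserId = {};

Meteor.startup(() => {
  Groups.find({ recomputeStatuses: true }).observe({
    added(group) { rebuildStatuses(group); },
    changed(group) { rebuildStatuses(group); }
  });
});

function rebuildStatuses(group) {
  group.memberIds.forEach((userId) => {
    // same heavy computation as before, now run on every client app container
    statusByUserId[userId] = computeStatusFor(userId);
  });
}

Meteor.methods({
  // Client app users call this; it reads from memory and never touches Mongo.
  'getUserStatus'(userId) {
    check(userId, String);
    return statusByUserId[userId];
  }
});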

The second solution is to use redis-oplog's Vent. This works nicely and has a lot of benefits. I can move the trigger back to the admin app's collection-hook, because the dictionary gets published to a central Redis server that each client server is also connected to. The dictionary is no longer even a JS object; it's just a loop of Vent.emit calls that push each client app user's status to Redis. Then in the client app template's onCreated, you simply listen for Vent to publish that client app user's status keyed by their client app id. This doesn't use any extra memory on the servers, keeps the CPU work of computing the statuses on the admin app, doesn't involve an additional heavy collection observer, and it's very fast. It just requires a Redis server.
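
Roughly, based on redis-oplog's Vent docs and with made-up channel, template, and variable names (computedStatuses here stands in for whatever the admin app's collection-hook just computed), the two halves look like this:

import { Vent } from 'meteor/cultofcoders:redis-oplog';

// Admin app server: after computing each status, push it straight to Redis.
computedStatuses.forEach(({ userId, status }) => {
  Vent.emit(`userStatus::${userId}`, { status });
});

// Client app server: expose a Vent channel per user.
Vent.publish({
  userStatus({ userId }) {
    check(userId, String);
    return `userStatus::${userId}`;
  }
});

// Client app template: listen for this user's status as it arrives.
Template.clientDashboard.onCreated(function () {
  this.status = new ReactiveVar(null);
  const handle = Vent.subscribe('userStatus', { userId: Meteor.userId() });
  handle.listen(({ status }) => {
    this.status.set(status);
  });
});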
