Microservices approach

I’ve been pondering separating out some of the heavier processes in an application I’ve built into a submodule (or series of submodules). 90% of what I want to do is directly expose the methods/publications of the submodule to the client, without requiring that they connect to (or even know about the existence of) the submodule.

I looked at some existing implementations, primarily https://medium.com/@rkstar/m-m-meteor-microservices-world-tour-part-2-3-a-m-2260df4f72d7. However, apart from not looking quite correct, it also looks like it would be exceptionally memory-heavy for the main server, as it would maintain a copy of every document sent to the client.

The approach I’m working on (for publications at least) is to create a connection on the main server to the submodule, one per client connection that requires access to the submodule (created lazily upon first request), and then basically just forward all subscription-related messages from that connection directly back to the client.

I’m interested in seeing if anyone has worked with a similar (or better?) approach before, and what the potential problems are here. I’m aware that the number of connections from server -> microservice could be large, but I’m not overly worried about that, particularly if the only other solution is having the main server maintain copies of documents for a mergebox.

A cut-down example is here:

//service.js
import { Meteor } from "meteor/meteor";
import { DDP } from "meteor/ddp-client";
import { DDPCommon } from "meteor/ddp-common";

export default class DDPService {
  constructor(ddpAddress) {
    this.ddpAddress = ddpAddress;
    // ...
    this.connections = {};
  }

  registerConsumerConnection(connection) {
    const service = this;
    if (!this.connections[connection.id]) {
      service.connections[connection.id] = {
        upstream: DDP.connect(this.ddpAddress),
        subscriptions: {},
        methods: {}
      };
      this.connections[connection.id].upstream.onMessage = function onMessage(rawMessage) {
        const serviceInstance = service.connections[connection.id];
        if (!serviceInstance) {
          Meteor._debug("downstream connection doesn't exist any more");
          this.close();
          return;
        }
        try {
          const msg = DDPCommon.parseDDP(rawMessage);
          // Forward data messages from the submodule straight to the client's session
          if (msg.msg === "added" || msg.msg === "removed" || msg.msg === "changed" || msg.msg === "nosub") {
            Meteor.default_server.sessions[connection.id].send(msg);
          }
          // Map the upstream subscription id back to the client's subscription id
          else if (serviceInstance.subscriptions[msg.id]) {
            msg.id = serviceInstance.subscriptions[msg.id];
            Meteor.default_server.sessions[connection.id].send(msg);
          }
          else { // just for debugging
            console.log(msg);
          }
        }
        catch (e) {
          Meteor._debug("Exception while parsing DDP", e);
        }
      };
      // Replace the default message handler with our forwarding handler
      delete this.connections[connection.id].upstream._stream.eventCallbacks.message;
      this.connections[connection.id].upstream._stream.on(
        "message",
        Meteor.bindEnvironment(this.connections[connection.id].upstream.onMessage.bind(this.connections[connection.id].upstream))
      );
      connection.onClose(() => {
        this.connections[connection.id].upstream.close();
        delete this.connections[connection.id];
      });
    }
    return this.connections[connection.id];
  }

  methods(methodDefs) {
    // ...
  }

  publish(localName, remoteName, beforeCall) {
    const service = this;
    Meteor.publish(localName, function publishHandler(...args) {
      const publishContext = this;
      if (beforeCall) {
        beforeCall.call(publishContext, ...args);
      }
      const serviceInstance = service.registerConsumerConnection(publishContext.connection, publishContext.userId);
      if (!serviceInstance) {
        throw new Meteor.Error(500, "No upstream connection");
      }
      const sub = serviceInstance.upstream.subscribe.apply(serviceInstance.upstream, [remoteName, ...args, {
        onReady() {
          publishContext.ready();
        },
        onStop() {
          publishContext.stop();
        }
      }]);
      // Map the upstream subscription id to this publication's subscription id
      serviceInstance.subscriptions[sub.subscriptionId] = publishContext._subscriptionId;
      publishContext.onStop(() => {
        delete serviceInstance.subscriptions[sub.subscriptionId];
        sub.stop();
      });
    });
  }
}

It would then be used as such, assuming a remotePubName publication is available in the microservice:

// my-service.js
const MyService = new DDPService("someurl");
MyService.publish("localPubName", "remotePubName", function checkBeforeCalling(...args) {
  // am I logged in / validate args
});

Suppose we don’t litigate whether you need the microservices; we’ll just take it at face value that you do.

What does “submodule” really mean? Do you mean another meteor application? Just connect to it with DDP.connect from the client to achieve this functionality. Don’t disguise the other meteor application by proxying it this way, because your code right now doesn’t handle privileges correctly (i.e., this.userId isn’t dealt with anywhere), nor does it handle connection lifecycles correctly (e.g. what about losing connectivity between the services?). There are an incredible number of details to get right in your proxy, and the complexity of debugging this will be way beyond what you really want to deal with.
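A minimal client-side sketch of what I mean (the URL and the publication/method names here are placeholders, not taken from your code):

// client/remote-service.js -- sketch only
import { DDP } from "meteor/ddp-client";
import { Mongo } from "meteor/mongo";

// Connect straight to the second meteor application (placeholder URL)
const remote = DDP.connect("https://service.example.com");

// Collections fed by the remote publications must be bound to that connection
const RemoteItems = new Mongo.Collection("items", { connection: remote });

// Subscribe and call against the remote connection directly
remote.subscribe("remotePubName");
remote.call("remoteMethodName", (err, result) => {
  if (err) console.error(err);
});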

Let’s instead litigate the microservice. If your “submodule” code is stateless, it is meaningless, from almost all software engineering points of view, to separate it out from your main application. I’d be happy to discuss what you might think you earn from moving stateless code from point A to point B in that case, and maybe everyone will discover what you actually get (usually nothing in terms of engineering).

If your “submodule” code is stateful, but using exclusively mongo, it’s very rare that you’ll achieve a desirable improvement by separating the concerns. Meteor was engineered very tightly with mongo, and the loss of efficiencies is very rarely worth the supposed gains of moving this narrowly-defined stateful code from point A to point B. Again, maybe discuss really concretely what it is that you want to do.

If your “submodule” is a meteor application that interacts with some kind of esoteric stateful code, like some piece of licensed software being used to read and write files locally, then you’re golden. Use microservices.

Thanks for the response,

As you say, let’s assume I do need a microservice - I’m pretty sure I do; the application is at the point where it needs to be scaled up, primarily due to two areas of the application that consume most of the resources. The first microservice/submodule I’m considering is the larger and “easier” of the two. There are other ways of achieving the same performance improvement, such as “just adding more servers”, but due to the access patterns of this particular data, the ability to scale it independently from the main application is highly desirable.

“Submodule” in this sense could be a number of things. The first one I plan to implement is a series of methods and publications that all work with the same 5 collections, which already exist in a separate database; it contains around 45 million records and will grow at a rate of approximately 2 million per month. The access patterns of this data are entirely different from the rest of the app: it is written to frequently, and little of the data is accessed at the same time, yet over the course of an average month around 90% of it will have been accessed at some point. There are also some relatively intensive aggregations performed over this data semi-regularly, intensive enough that we moved them from mongo aggregations to manual ones, as it’s easier to scale the application layer than the database layer.

I’ve never seen any meteor application with more than one DDP connection from the client - even Kadira uses HTTP calls for pushing its client-side errors, where presumably it could have used a DDP connection if required - so I’m assuming there is some reason for this. I did actually implement a “submodule” in this way - logging - where the client logs each flowrouter transition to it as a service. It works OK, but it’s also not a mission-critical application, and I’m not convinced it was the correct approach, which is why I’m looking for better solutions. It was how I initially planned to implement this submodule too, but it also makes all the client-side code more complicated, as all tables (or, in general, subscriptions or methods) that use data from these collections need to be switched to use the secondary connection. It will also make it much more difficult to change the implementation in the future if necessary.

Regarding userId - the submodule in question will be accessible only from our application servers (the submodule’s servers won’t even have public IP addresses). Due to the nature of the data, it doesn’t need to know who is requesting it; the main application can check that the user is logged in and has permission to request the data, using the checkBeforeCalling function invoked on the main server before forwarding the requests to the submodule. However, it is entirely possible to implement a login mechanism from one server to the other, as each server -> submodule connection is directly related to a client -> server connection, so we know the userId. I removed this code from the “cut-down example” above as it is specific to my use case; it would happen as a blocking call upon the connection.
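A rough sketch of what that blocking handshake could look like (the method name and shared secret are assumptions specific to my setup, not part of the example above):

// Inside registerConsumerConnection(connection, userId) -- sketch only
const upstream = DDP.connect(this.ddpAddress);

// On the server, call() without a callback blocks until the submodule answers,
// so nothing is forwarded before the handshake succeeds.
upstream.call("service.login", {
  secret: process.env.SERVICE_SECRET, // shared secret between the two apps (assumption)
  userId // which client connection this upstream acts on behalf of
});

// ...and in the submodule (also an assumption):
Meteor.methods({
  "service.login"({ secret, userId }) {
    if (secret !== process.env.SERVICE_SECRET) {
      throw new Meteor.Error(403, "Bad service secret");
    }
    // associate userId with this.connection.id as required
  }
});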

Regarding stability - this is a concern. The example above was about 2 hours of work, so it’s far from complete - more a proof of concept. In my limited testing, it seems that the created connections automatically attempt to reconnect after a failure, and as each connection can be short-lived (e.g., close the connection after the last subscription is stopped and the last method returns), stability here may not be too bad.
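For the “short-lived” part, simple reference counting would probably do; something like this (the helper names are mine, not from the code above):

// Sketch: close the upstream once the last subscription/method is finished
function retainUpstream(serviceInstance) {
  serviceInstance.refCount = (serviceInstance.refCount || 0) + 1;
}

function releaseUpstream(service, connectionId) {
  const serviceInstance = service.connections[connectionId];
  if (!serviceInstance) return;
  serviceInstance.refCount -= 1;
  if (serviceInstance.refCount <= 0) {
    serviceInstance.upstream.close(); // drop the idle server -> submodule connection
    delete service.connections[connectionId];
  }
}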

I’m also concerned about performance. The other approaches I’ve seen require the main server to maintain a copy of the subscribed documents - this isn’t practical for us. While in general each subscription is relatively small (perhaps 5000 smaller documents and 50-100 larger documents), each client typically requires an entirely distinct set of this data. The question is whether the cost of maintaining a potential 1:1 mapping of connections is better or worse than having fewer connections but storing more data - I’m thinking it will be better, as most of the time it’s memory we find we’re short on, not CPU power.


It’s pretty easy to hide these details from your front-end developers. Forgive the exact details here, but the gist for methods is:

let originalCall = Meteor.call;
let otherConnection = DDP.connect( /* ... */ );
Meteor.call = function() {
  let name = arguments[0];
  // Check if the method exists on the primary service
  if (name in Meteor.connection._methodHandlers) {
    originalCall.apply(Meteor, arguments);
  } else {
    // Otherwise, try to call the method on the other connection
    otherConnection.call.apply(otherConnection, arguments);
  }
}

I think you get the idea there. That’s so much simpler, and now I can actually wrap my mind around what’s going on! Like don’t overthink it. That was 30 seconds of work, but I think it can inspire you to do something equally simple.
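A matching sketch for subscriptions, assuming you keep a hand-maintained list of which publication names live on the other service (that list is an assumption; there’s no client-side registry of publications to check against):

// Route known remote publications to the second connection
const REMOTE_PUBLICATIONS = ["remotePubName" /* , ... */];

let originalSubscribe = Meteor.subscribe;
Meteor.subscribe = function (name, ...args) {
  if (REMOTE_PUBLICATIONS.includes(name)) {
    return otherConnection.subscribe(name, ...args);
  }
  return originalSubscribe.apply(Meteor, [name, ...args]);
};
// Note: collections fed by those publications need to be created with
// { connection: otherConnection } so the documents land in the right cache.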

This is sort of small beans. There are lots of things in meteor that make working with lots of documents appear way slower than it actually is (for example, deserializing and rendering after every document is pushed, instead of in batches). Besides issues in the client, your biggest problem is that you’re probably using some incredibly low-bandwidth, virtually hosted instance - cheapo AWS instances, Hetzner stuff, Galaxy instances, etc. - instead of something that can actually transfer a lot of data per second out of an application server.

If your latency requirement is like “60 seconds or better,” your ETL should write the client data snapshots of whatever tables need to be rendered to S3, and you should simply download them from there on the client. The concurrency for that is practically unlimited, and it’s way cheaper than RAM. I would assume that your “aggregations” already take longer than 10 seconds to complete, so it’s not like the application is real time anyway.
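A rough sketch of that (the bucket name, key layout, and the aws-sdk/fetch usage are all assumptions):

// server: after the ETL/aggregation run, dump a snapshot to S3
import S3 from "aws-sdk/clients/s3";
const s3 = new S3();

async function writeSnapshot(tenantId, rows) {
  await s3.putObject({
    Bucket: "my-app-snapshots", // placeholder bucket
    Key: `tenants/${tenantId}/latest.json`,
    Body: JSON.stringify(rows),
    ContentType: "application/json"
  }).promise();
}

// client: download the snapshot instead of subscribing
async function loadSnapshot(tenantId) {
  const res = await fetch(`https://my-app-snapshots.s3.amazonaws.com/tenants/${tenantId}/latest.json`);
  return res.json();
}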

I know mergebox, storing all the client data on the server, blah blah blah… I don’t know what to tell you, not even stock brokerages give you unlimited real-time streams, and in those examples the delay is 5 seconds. You just have to send less real-time data, it’s that simple.

It’s funny you mention this - it’s pretty close to what I implemented for the logging service. Of course, it also needs to handle logging in, what to do if the connection is unavailable, and not hanging if the connection drops - as the application in question needs to support offline, it has to handle that case too. It also needs to handle publications, not just method calls, and since the system is multi-tenant - and users can belong to multiple tenants and switch between them - it has to have a mechanism for specifying that to the server too. It also needs to support third-party packages, not all of which make use of “Meteor.subscribe”, but rather “Meteor.connection.subscribe” or, even better, “…something else…” - as is the case with tabular tables. I really have looked into this option before…

I don’t doubt it was 30 seconds of work - and the other 20 hours of work that go into handling the edge cases isn’t bad either, in the grand scheme of things - but you haven’t addressed the question of why no other meteor applications do this (again, happy if someone can point out an example of one that does).

Like I said, I tried this - and it is messy. Plus, I don’t really like “monkey patching” built-in calls like this; I’ve done it where required, but if there is a better solution, I try to go for that.

Agreed - the number of documents is not large, and frankly the number of concurrent users is not currently large either - maybe 200 - though this is growing rapidly. The problem is that meteor seems to excel when large sets of users share the same sets of data, and that is not true in this case. So, while no one user ever requests a large amount of data, they’re all requesting different sets, and this puts significant strain on the meteor servers (mongo handles this just fine; our DB servers rarely spike above 10%). Furthermore, this is only one small piece of a much larger application - those 5000 small and 50-100 large documents are only the ones that would be served by the first microservice.

My latency requirement is much lower than that: < 1 second in almost all cases, with an average response time of < 150ms. We’re maintaining this 95% of the time currently. Additionally, I’m not looking at rebuilding the entire workflow here - the aggregations that can be usefully cached already are. The aggregations being run now are highly specific - the same aggregation might be run 3 or 4 times total. It doesn’t really seem worth setting up an S3 integration to store these - talk about overcomplicating things!

Indeed - it’s the “blah blah blah” that is of interest here: the bits I haven’t thought of, such as the memory and CPU requirements of maintaining multiple DDP connections on the server, and how well that scales.

You sure seem to love your brokerage analogies - I’m not entirely sure how that relates to this? I’m not really looking at ways of making the application less featured and less responsive.

Now that we’ve spent some time “litigating” what we said we wouldn’t litigate :slight_smile: I’m relatively sure that what I need is something “microservice-like”, though I’m pretty open on implementation. I’m really asking for opinions on any drawbacks people see in the approach - such as “each connection requires blah memory” or “there is a maximum number of connections allowed per server” - things that could sneak up and bite someone in the ass. And if someone else has implemented microservices before, what problems they encountered. I perhaps should have posted this in the “help” thread, rather than “deployment”.
