Microservice Infrastructure

Hello,

Following up on a closed github issue:

For any company adopting a microservice infrastructure, frontend components gather their data from multiple APIs. The way Meteor is built closely couples collections to a single MongoDB layer (for the benefit of being highly reactive).

The tutorials that show how to link an API always use a Mongo.Collection (so this.added behaves as expected). It seems like this method doesn’t store (at least for now) the objects created. However, is there a way to build our own Collection mechanism, so that all create, read, update, and delete operations can be forwarded to a distant API?

An example of an implementation is here: gist.github.com/xethorn/09da7e346956aab3af30
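Roughly, the shape we are aiming for looks like this (a simplified sketch with placeholder names such as fetchPlayersFromApi, not the actual gist code):

// client: a collection that only exists to receive the published records
var Players = new Mongo.Collection('players');
Meteor.subscribe('players');

// server: a low-level publish that forwards the read to the distant API
Meteor.publish('players', function () {
    var self = this;
    // fetchPlayersFromApi() stands in for whatever HTTP call hits the external service
    fetchPlayersFromApi().forEach(function (player) {
        self.added('players', String(player.id), {name: player.name, age: player.age});
    });
    self.ready();
});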

Some issues:

  • Clicking the button triggers the call to getPlayer, but it removes the previous player and replaces it with a new one (api.js contains no remove calls, just added operations).
  • Routing (done with Iron Router) empties the collection.
  • If you use null as a value for the collection name, this.added does not work.

I’m new to Meteor (I started looking at it a few days ago). This might already be documented somewhere; if you have examples that resolve this issue, I’d love to take a look.
Many thanks,

Michael

(For a screen capture, see s27.postimg.org/sqjhgvjmr/gif.gif - at 1 the list shows Takei, at 2 it shows Michael, and at 3 it shows nobody. 1 should show Takei, 2 should show Takei and Michael, and 3 should be the same as 2.)

In the docs under Meteor.publish there is a big example of how to use the low-level publish API: http://docs.meteor.com/#/full/meteor_publish

I don’t see any issues there.
Everything seems to be behaving perfectly.

You seem to forget that by changing that reactive variable you are re-subscribing,
so the publish function returns a whole new set of records.
If you do this instead, it would behave more like you expect:

self.added('players', Random.id(), {name: 'Takei', age: 76});
if (size == 1) {
    self.added('players', Random.id(), {name: 'Michael', age: 27});
}
self.ready();

And it is still a waste of resources during the resubscribe, because you are generating the _id randomly; that means all of these records will be sent to the client as new and their old forms removed, just because you set their only tracking attribute to a random value.

You can try using fixed strings instead of Random.id() and observe whether DDP figures out that Takei is the same record as in the size == 0 case and sends only Michael over the network.
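For example, with fixed ids the sketch becomes:

// stable string _ids let DDP diff the old and new record sets on resubscribe
self.added('players', 'takei', {name: 'Takei', age: 76});
if (size == 1) {
    self.added('players', 'michael', {name: 'Michael', age: 27});
}
self.ready();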

Thanks both for helping out.

@shock:

My mistake, I’ve updated the id to be a static value.

However, the issue still occurs. One consideration: the example is very small, but imagine that instead of two players you have n players, and you want to provide infinite scroll on the page. The more you scroll, the more you want to show the user. With the current code, all of the data is flushed before being reinitialized with the newly provided data. That works with a small dataset - it will just send back everything that’s needed - but when you start working with thousands of items and many concurrent users, the database gets hammered immediately (it also makes the requests slower and slower).

The content of the method ends up looking like this:

self.added('players', count + 1, {name: 'Player ' + count});
self.ready();
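The subscription driving it is reactive, roughly like this (illustrative names, not our exact code):

// client: each scroll step bumps the counter, which re-runs the autorun and re-subscribes
Tracker.autorun(function () {
    Meteor.subscribe('players', Session.get('playerCount') || 0);
});

// e.g. on a scroll or button event:
Session.set('playerCount', (Session.get('playerCount') || 0) + 1);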

My current usage isn’t for players; it’s for graph rendering. The number of data points being returned is in the tens of thousands, so our latency grows on each load. Is there a way to do this efficiently? I’ve looked through the documentation again, but couldn’t find anything that would prevent the data from being flushed.

Many thanks

That mandatory _id for cursors in a normal publish is not there for fun.
It is a unique identifier, so the client knows which records can be deleted, changed, etc.
If you are generating it more or less arbitrarily using “count + 1”, it does not seem to be consistent.

BTW, why exactly are you not using a normal, simple publish over a collection instead of being forced into this low-level API?
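Something like:

// a plain publish over a Mongo-backed collection; Meteor does the diffing for you
Meteor.publish('players', function (limit) {
    return Players.find({}, {limit: limit});
});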


I’m not sure I follow what you are saying. The code above highlights the removal of items from the collection currently stored on the frontend, even though no changed or removed event has been sent for those particular items. The ids and names are consistent; they are created incrementally, not randomly.

Since our data is owned by an external API, we followed this tutorial: http://meteorcapture.com/publishing-data-from-an-external-api/. The amount of data we receive for each graph is large; the reason to use a collection is so the browser can keep the values it already has and only request the ones it needs.
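Once the points are in the local collection, the client-side filtering is cheap; roughly (illustrative names):

// client: redraw the graph from the cached points without another round trip
var visible = GraphPoints.find(
    {timestamp: {$gte: rangeStart, $lte: rangeEnd}},
    {sort: {timestamp: 1}}
).fetch();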

The experiment above shows that clicking a button removes the data that was previously cached, and that’s our main issue (which is why the code above uses count to generate the items consistently). We could work around it by storing everything in Session, but that looks like a dangerous, unmaintainable hack.
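(That workaround would be something along these lines, with newPoints standing in for whatever the latest fetch returned:

// accumulate fetched points in Session so they survive re-subscription
Session.set('graphPoints', (Session.get('graphPoints') || []).concat(newPoints));

which is exactly the kind of hack we’d rather avoid.)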

If what I wrote above still doesn’t make sense, consider the following example: you are working at Facebook and you want to implement the homepage infinite scroll, given that you can only call an external API. Obviously, you don’t want to duplicate all the data into an intermediary MongoDB, but you still want to use collections so the frontend can do some quick filtering.

How would you do this?
Many thanks