@none Reactivity can be good or bad depending on what you want to do. Of course, sending down a single JSON file cannot be reactive, but if you need reactivity, you can use the low-level publish API to create a custom publication that sends down changes as calculations get updated.
For an in-browser analytics app, I don’t think realtime reactivity is required, let alone desired. You don’t want your charts being constantly updated; you want your data to sanely represent a point in time and stay that way until you change your timeframe.
For a mapping application, you can do server side marker clustering and update your clusters on the server, publishing them to a client-only collection so that you can get the benefits of reactivity.
// server: publish the current size of a collection
Meteor.publish("counts-by-room", function (roomId) {
  var self = this;
  check(roomId, String);
  var count = 0;
  var initializing = true;

  // observeChanges only returns after the initial added callbacks
  // have run. Until then, we don't want to send a lot of
  // self.changed() messages - hence tracking the initializing state.
  var handle = Messages.find({roomId: roomId}).observeChanges({
    added: function (id) {
      count++;
      if (!initializing)
        self.changed("counts", roomId, {count: count});
    },
    removed: function (id) {
      count--;
      self.changed("counts", roomId, {count: count});
    }
    // don't care about changed
  });

  // Instead, we'll send one self.added() message right after
  // observeChanges has returned, and mark the subscription as ready.
  initializing = false;
  self.added("counts", roomId, {count: count});
  self.ready();

  // Stop observing the cursor when the client unsubscribes.
  // Stopping a subscription automatically takes care of sending
  // the client any removed messages.
  self.onStop(function () {
    handle.stop();
  });
});
// client: declare collection to hold count object
Counts = new Mongo.Collection("counts");

// client: subscribe to the count for the current room
Tracker.autorun(function () {
  Meteor.subscribe("counts-by-room", Session.get("roomId"));
});

// client: use the new collection
console.log("Current room has " +
  Counts.findOne(Session.get("roomId")).count +
  " messages.");
// server: sometimes publish a query, sometimes publish nothing
Meteor.publish("secretData", function () {
  if (this.userId === 'superuser') {
    return SecretData.find();
  } else {
    // Declare that no data is being published. If you leave this line
    // out, Meteor will never consider the subscription ready because
    // it thinks you're using the added/changed/removed interface where
    // you have to explicitly call this.ready().
    return [];
  }
});
No, it is not about which is slower; it is about how you can use the low-level API to publish custom data, for example clustered marker data: observe the marker data on the server, decide which cluster each marker should end up in, and then increase that cluster’s count. Think of it like users and rooms => markers and clusters.
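To make that concrete, here is a minimal sketch of the cluster-counting part in plain JavaScript. The grid size, the key format, and the `makeClusterCounter` helper are my own illustration, not an established API; on the server this logic would sit inside the `added`/`removed` callbacks of an `observeChanges` handle, with `self.changed()` publishing the updated cluster counts just like the counts-by-room example above.

```javascript
// Sketch: grid-based clustering. Each marker lands in a grid cell
// whose key is derived from its coordinates; the cluster record for
// that key tracks a running count of contained markers.
function clusterKey(lat, lng, cellSizeDeg) {
  var row = Math.floor(lat / cellSizeDeg);
  var col = Math.floor(lng / cellSizeDeg);
  return row + ":" + col;
}

// Maintain per-cluster counts as markers are added and removed,
// mirroring the count++/count-- pattern of counts-by-room.
function makeClusterCounter(cellSizeDeg) {
  var counts = {};
  return {
    added: function (marker) {
      var key = clusterKey(marker.lat, marker.lng, cellSizeDeg);
      counts[key] = (counts[key] || 0) + 1;
      return { key: key, count: counts[key] };
    },
    removed: function (marker) {
      var key = clusterKey(marker.lat, marker.lng, cellSizeDeg);
      counts[key] = (counts[key] || 1) - 1;
      return { key: key, count: counts[key] };
    }
  };
}
```

The client then subscribes to the cluster collection only, so reactivity applies to a few hundred cluster documents instead of every marker.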
Had the same issue a year ago. It was done with a separate collection like aggregationData. We observe changes in the main collection and modify aggregation records for country/region/city. We found it is much cheaper than reactive aggregation or digging through the full collection on every subscription.
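A rough sketch of that bookkeeping, assuming documents carry `country`/`region`/`city` fields as in the post above (the helper names are mine; the real version would run inside an `observeChanges` callback on the server and upsert into the aggregationData collection instead of mutating a plain object):

```javascript
// Key an aggregation record by location hierarchy.
function aggregationKey(doc) {
  return [doc.country, doc.region, doc.city].join("/");
}

// Apply a +1 (added) or -1 (removed) change to the aggregates.
// In Meteor this would be an upsert with {$inc: {count: delta}}.
function applyChange(aggregates, doc, delta) {
  var key = aggregationKey(doc);
  var next = (aggregates[key] || 0) + delta;
  if (next <= 0) {
    delete aggregates[key]; // drop empty aggregation records
  } else {
    aggregates[key] = next;
  }
  return aggregates;
}
```

Since every change to the main collection costs one small increment, subscriptions to the aggregates never have to scan the full collection.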
I see… I know about publishing custom data…
Not a correct comparison… When you click on a marker cluster, you have to show the markers at their coordinates. In your example, the client doesn’t have this information…
@none you can create a template event on click (of the marker cluster) to fetch the detail data (the marker list) from the server using either an incremental subscription or a method call.
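Assuming grid-style cluster keys like `"10:20"` (row:col of a degree grid), the click handler only needs to turn the clicked cluster back into a bounding box and ask the server for the markers inside it. `clusterBounds` and the `markersInBounds` method name are hypothetical, just to show the shape of the round trip:

```javascript
// Derive the bounding box covered by one cluster cell.
function clusterBounds(key, cellSizeDeg) {
  var parts = key.split(":");
  var row = parseInt(parts[0], 10);
  var col = parseInt(parts[1], 10);
  return {
    south: row * cellSizeDeg,
    north: (row + 1) * cellSizeDeg,
    west: col * cellSizeDeg,
    east: (col + 1) * cellSizeDeg
  };
}

// In a Meteor template event map (sketch, not runnable standalone):
// 'click .cluster': function (event) {
//   Meteor.call("markersInBounds", clusterBounds(this.key, 1),
//     function (err, markers) { /* render markers on the map */ });
// }
```

The same bounds object works equally well as an argument to a parameterized subscription if you want the detail markers to stay reactive while the cluster is open.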
Helper? Or maybe still an event, depending on how they zoom.
It’ll even work if the user is a lady
(answers are cheap - but I’m not the person to tell you how to approach this best - I’ll leave the more experienced people to guide you in the right direction)
Server time per computation can be managed, while client time per computation will vary too much depending on user hardware, not to mention the browser and that browser’s implementation of things.
Look, here is an example with Leaflet + PruneCluster and 1,000,000 markers. No problem with markers on the client. The problem is delivering the markers to the user as quickly as possible (-:
A zoom is still an event so that you can handle it with the same logic.
Your PruneCluster example does not make sense as a comparison because its data is not updated in realtime from a server; it is just a static JSON document.
If you want to go that route, you can apply my comments on @kenken’s questions to your situation: basically, send either a batch of static, pre-generated (perhaps on a timer interval) JSON, or custom publications using aggregations on the server. It is all a matter of design choice.
But unfortunately, there is no “let’s send hundreds of thousands of individual documents to the browser in realtime, all the while tracking their server data dependencies for changes” at a small cost.
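The “pre-generated within a timer interval” option can be sketched as a snapshot cache that regenerates at most once per interval and serves the cached copy to everyone in between; a Meteor method would simply return `snapshot()`. All names here are illustrative, and the clock is injectable only so the behavior is easy to verify:

```javascript
// Regenerate an expensive static payload at most once per intervalMs;
// every call in between returns the cached copy.
function makeSnapshotCache(generate, intervalMs, now) {
  now = now || Date.now;
  var cached = null;
  var generatedAt = -Infinity;
  return function snapshot() {
    var t = now();
    if (t - generatedAt >= intervalMs) {
      cached = generate();   // expensive: e.g. aggregate all markers
      generatedAt = t;
    }
    return cached;
  };
}

// Server sketch (Meteor): a method handing out the cached batch.
// Meteor.methods({
//   markerSnapshot: makeSnapshotCache(buildMarkerJson, 60 * 1000)
// });
```

This shifts the cost from “per subscriber, per change” to “once per interval”, which is usually the right trade for data that only needs to be sanely current, not live.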
This method works quite well from my experience. It prevents work being done at the moment of request which delivers a better end-user experience in most cases.
Hi,
My initial bottleneck was plotting SVG markers. My map worked OK for 1k markers. At 10k markers it started choking.
I then jumped from SVG to HTML5 canvas. Once the map and markers are rendered, panning and zooming are quite OK. Mongo’s nearest search also gave me the opportunity to create events when the mouse comes close to a marker.
So the bottleneck has changed from being the graphics and many dom elements to actually pushing the data to the client. Marker clustering definitely is an alternative solution. But what I am aiming for (at the moment) is a visual effect similar to looking at a city’s lights from a mountain at night.
Is it unfair to expect Mongo to push, say, 10-20 MB of data to a client?
Have you tried sending the data over a Meteor method instead? I assume that new water valves will not be popping up in realtime so sending a static data structure should be more efficient.
It sounds like the DOM itself is going to be the main bottleneck.