[SOLVED][OpLog][MongoDB] Subscription find({}) is not reactive when using $or[{}] lookup?

Hi guys, here’s my setup:

I’ve got a game where a player has X and Y coordinates. They travel through space, where X and Y can be anywhere from 0 to 100,000,000, say.

I don’t need the user to subscribe to all the other players that aren’t near them. Why send the data, right?

So here’s that subscription cursor:

return Posts.find({
    $or: [
        { type: "ships" },
        { type: "planets" },
        { type: "asteroids" }
    ],
    last_activity: {
        $gte: time_timeout
    }
});

This works great, it’s reactive, hooray. A player can move off to some far-away spot and the other player is still visible, so they can interact.

But they get ALL the data. So I want to only give them data that’s near them, i.e. “find everything near you”, by setting an X and Y range, like so:

var range = 100;
var scanner_left = player_ship.x - range;
var scanner_right = player_ship.x + range;
var scanner_up = player_ship.y - range;
var scanner_down = player_ship.y + range;

return Posts.find({
    $or: [
        { type: "ships" },
        { type: "planets" },
        { type: "asteroids" }
    ],
    x: {
        $gte: scanner_left,
        $lte: scanner_right
    },
    y: {
        $gte: scanner_up,
        $lte: scanner_down
    },
    last_activity: {
        $gte: time_timeout
    }
});

The problem with doing this is that it only runs once. On page load it works great! But as the player moves, the subscription is not reactive to the new position. The player must refresh the window to restart the subscription.

I have hacked around it by calling Meteor.subscribe on a 10-second interval, but… I think this is blowing up my server.
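
For what it’s worth, the hack is roughly this (just a sketch of the idea; the "scanner" publication name and the owner field are placeholders, not my real code):

// Client: re-subscribe every 10 seconds so the server re-runs the find() with fresh X/Y.
Meteor.setInterval(function () {
    var player_ship = Posts.findOne({ type: "ships", owner: Meteor.userId() }); // placeholder lookup
    if (player_ship) {
        Meteor.subscribe("scanner", player_ship.x, player_ship.y);
    }
}, 10000);

Every tick asks the server to set up a fresh cursor with new coordinates, which is a lot of churn.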

How can I optimize this process?

Edit: I do not have any indexed fields. Should I index X, Y and last_activity?



SOLUTION

What a lengthy subject. Here is the tl;dr:

MongoDB hosting for Meteor is an extensive platform that’s difficult to learn, so nice people like mLab, MongoDB Atlas, etc. put together a service for you. In MOST circumstances this is fine. But in mine, where I’m building an MMO, I need some CPU power behind it.

Database Optimization #1
Index your data (see the sketch just after this list).

Database Optimization #2
Configure your Mongo database for OpLog, and optimize your live queries.
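
For the indexing piece, here’s a minimal sketch of what that can look like in Meteor server code, assuming the Posts collection and fields from my original question (the exact fields and index order depend on your schema):

// Server startup: index the fields the publications filter on,
// so Mongo can answer the range queries without scanning every document.
Meteor.startup(function () {
    Posts.rawCollection().createIndex({ type: 1, last_activity: -1 });
    Posts.rawCollection().createIndex({ x: 1, y: 1 });
});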

That’s really all there is to it. Good luck and see you on www.StarCommanderOnline.com - the Meteor MMO.



I wanted to share my Live Queries log from Kadira, which is insane. My app basically tries to pull back, like… everything, and the CPU usage kills the server!

Ahh, thanks to Kadira, I’m learning about Live Queries.

It turns out you really don’t want to do too much conditional logic in a publication. It’s not efficient. So in my case of searching by X and Y… that’s real bad.

https://kadira.io/academy/meteor-performance-101/content/optimizing-your-app-for-live-queries

https://kadira.io/academy/meteor-performance-101/content/improve-cpu-and-network-usage#how-to-reuse-observer

So it looks like I will need to completely change my approach here. I don’t yet know what to do. #Meditation

Ahh okay! I was correct. You need to do a better job of reusing live queries. Kadira also shows you this!

As you can see… this is bad. By the looks of it, literally none of my server publications are efficient. Let’s remove that X and Y search pattern and see if we can improve this.

Aha! I’m also doing a really inefficient lookup, like this:

var time_current = new Date().getTime();
var time_range = 1000 * 60 * 60 * 24; // 24 hours in milliseconds
var time_timeout = time_current - time_range; // anything active in the last 24 hours

last_activity: {
    $gte: time_timeout
}

Doing this in a server publication is SUPER BAD. It’s okay locally, but when pushed to production it will break your whole system! Because time_current changes on every run, the selector is different every time, and NONE of these queries can be reused. Let’s strip that out and see!

Wow. There’s a big difference! I’m watching local subscriptions and literally nothing is changing! Incredible!

Pushing to production. I’m also reducing the CPU cores to the bare minimum; let’s get efficient.

PS: I hope no one minds me basically blogging here, but hopefully Google SEO will hit this and help some poor souls in the future.


I’ve been wondering about how to efficiently do reactive subscriptions like this too. So what was your solution? Are you just subscribing to everything now?

Still working on it, I’m currently building the version for upload.

It’s actually pretty simple.

DO NOT CREATE A PUBLICATION WITH VARIABLE FIELDS

That should be a Meteor rule. But that’s kinda… odd. I’m sure there’s a way to do it, but for right now yes, I’m subscribing to basically all data.

What’s more efficient at the moment is to pull back all of it, at least until the point where I have 1,000,000 players, which will be a different problem. I can’t use a limit, because then your friend could potentially be invisible.

So, still working on it, getting closer…

So for example, in my original post I was pulling data back from NOW to 3 days ago (to find active players).

The result was that Meteor kept spinning up a brand-new query instead of reusing one, blowing the CPU out. This got worse and worse as players were added; six players blew the CPU up.

So the solution, for example, is to find the start of the day 3 days ago, which stays constant for the whole day, using something like this:

var date = new Date();
var three_days_ago = new Date(date - 1000 * 60 * 60 * 24 * 3); // go back 3 days
three_days_ago.setHours(0, 0, 0, 0); // truncate to midnight so the value stays constant all day
var time_timeout = three_days_ago.getTime();
console.log(time_timeout);

The result of which is currently 1494302400000: midnight (00:00:00), three days ago.

Now on every subsequent run, Meteor has a constant to search with and can simply re-use this query’s observer. Meteor just computes and publishes the differences.
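
Putting that together, the publication ends up looking something like this (a sketch; "active_players" is a placeholder name). Because time_timeout is now the same value for every client all day long, every client’s cursor has an identical selector and Meteor can share a single observer:

// Server: one shared observer for all clients, because the selector never varies per client.
Meteor.publish("active_players", function () {
    var three_days_ago = new Date(Date.now() - 1000 * 60 * 60 * 24 * 3);
    three_days_ago.setHours(0, 0, 0, 0); // constant until midnight
    var time_timeout = three_days_ago.getTime();

    return Posts.find({
        $or: [
            { type: "ships" },
            { type: "planets" },
            { type: "asteroids" }
        ],
        last_activity: {
            $gte: time_timeout
        }
    });
});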

The tricky part is X and Y. If a player always moves… X and Y always change. So I wonder how to get around that.
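
One idea for getting around it (just a thought at this stage, not something I’ve shipped): since the relevant objects are already published, do the X/Y range check on the client in minimongo instead of in the publication, e.g. in a template helper ("radar", "nearby_objects" and the owner field are placeholder names):

// Client: filter the already-published documents by range in minimongo.
Template.radar.helpers({
    nearby_objects: function () {
        var range = 100;
        var ship = Posts.findOne({ type: "ships", owner: Meteor.userId() }); // placeholder lookup
        if (!ship) return [];
        return Posts.find({
            x: { $gte: ship.x - range, $lte: ship.x + range },
            y: { $gte: ship.y - range, $lte: ship.y + range }
        });
    }
});

That keeps the server-side selector identical for everyone, and the helper is still reactive because minimongo cursors are.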

Here’s a before shot of my current setup. I have like, six active players today. I should not have nearly 1 million records lol…

Wow! Look at this!

Up 12% in the first 60 seconds of app running!

Now I’m able to look at the data and see what other problems I have. Players in my app can mine asteroids; that’s a constantly changing number, and it looks like it’s clogging the pipes too. I think this is the answer, though.

This is my gripe about learning Meteor: this very simple lesson is hard for newcomers to understand. It should be made way more obvious and not take actual research to figure out.

I hope this post is useful to future searchers… I’ll do one more post when it’s fully optimized.

Spoke too soon. Looking at the live updates filter, I can see that about 30,000 updates per minute are being made.

Well looks like I’ve reached the end of the tunnel.

Meteor OpLog is the term you’ll want to google. It’s a beast. It’s what gives Meteor speed on production systems. And it’s SUPER hard to enable on your own custom deploy.

https://meteorhacks.com/mongodb-oplog-and-meteor/
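
The gist of it, as I understand it from that article (a sketch, not a full guide): MongoDB has to run as a replica set so that it keeps an oplog, and the Meteor process needs MONGO_OPLOG_URL set alongside MONGO_URL so it knows where that oplog lives. You can sanity-check it from inside the app:

// Server: quick sanity check that oplog tailing is configured.
// MONGO_URL       -> the app database, e.g. mongodb://localhost:27017/myapp
// MONGO_OPLOG_URL -> the replica set's "local" database, e.g. mongodb://localhost:27017/local
Meteor.startup(function () {
    console.log("Oplog URL set?", !!process.env.MONGO_OPLOG_URL);
});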

Looks like I’m going to try mLab again and see if that doesn’t fix my problem…

MongoDB Atlas (by the Mongo people) supports OpLog out of the box.

Check this article by OK GROW for a comparison between the major Mongo services…

Yeah, I think I’ll have no choice. I’m testing how to replicate servers at the moment; frigging nuts, man. But hey, it’s the future; welcome to nuts? lol.

Thanks for the link.

Oh my lord, the pricing on MongoDB hosting is INSANE

Compared to mLab?

[The forum wants me to write more than 20 characters]

Oh, you don’t have Oplog enabled? You HAVE to have it, otherwise Meteor will totally trash your database. I just host Meteor on our office server (regular old computer). Set it up myself with Mongo and Nginx 🙂
The Meteor dev bundle actually has Mongo with Oplog built in btw.

https://www.okgrow.com/posts/mongodb-atlas-setup

This guide was very good; I have a database set up and running. Sadly I cannot import my old data easily… so I created some test data.

It’s much better.

Correct, I didn’t have OpLog running. I’d never heard of it and hadn’t seen any guides about it. Friggin crazy, but it looks like it may solve the problem. I really need to import my data, but it’s giving me a friggin nightmare.

Transferring data is super simple if you get Studio 3T. You can just right-click and copy-paste collections or even entire databases, with options to either replace or merge 🙂 And if you don’t have direct access to Mongo, it can tunnel through SSH.
