Do I need to run MongoDB Oplog PLUS Redis?

Trying to get my database optimized… I'm currently running MongoDB with OpLog, which is plenty fast, but I have to run 3 instances.

EATS my RAM. I’m upwards of 20 gigs.

So I want to try Redis. I'm thinking 1 MongoDB, 1 Redis? Should this use less RAM?

God damn, I wish there were some STRAIGHTFORWARD directions on the ideal scalable database for Meteor. What a nightmare.

SOLUTION

No. Yes? Sort of. It turned out my OpLog was not configured right. I had the server running a 3-member replica set, which is fine. But you need to set an additional environment variable (MONGO_OPLOG_URL) that points at the local database, which is where the oplogging actually takes place!
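
For anyone hitting the same thing, here is roughly what that looks like, assuming a local replica set named rs0 (host, port, database name, and set name are placeholders for my setup):

    # Point the app at the data DB, and point the oplog URL at the "local" DB of the replica set.
    export MONGO_URL="mongodb://127.0.0.1:27017/meteor?replicaSet=rs0"
    export MONGO_OPLOG_URL="mongodb://127.0.0.1:27017/local?replicaSet=rs0"
    meteor run   # or start the bundled app with these variables set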

My app is now a billion times faster.

www.StarCommanderOnline.com - the Meteor MMO

I really wonder how you get upwards of 20 GB. What is the total size of your DB? What does db.stats() give you? What steps have you taken so far to analyse/optimise your DB? In any case, you can limit the cache size when using WiredTiger; maybe that would be the first thing to do.
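
If it helps, that cap can go on the mongod command line or in mongod.conf; the 2 GB value below is only an illustration, not a recommendation:

    # Cap the WiredTiger cache on each mongod instance (value is illustrative):
    mongod --replSet rs0 --dbpath /data/db --wiredTigerCacheSizeGB 2
    # Equivalent mongod.conf setting: storage.wiredTiger.engineConfig.cacheSizeGB: 2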

No, you only need redis-oplog and not the MongoDB Oplog. But it seems like there is something wrong with your database schema or application layout, unless you have thousands of concurrent users online.
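
If you go the redis-oplog route, the setup is roughly the following (package name and settings shape as described in the redis-oplog README, so double-check against the current docs; the Redis host and port are placeholders for a local instance):

    # Add the package and run with a settings file:
    meteor add cultofcoders:redis-oplog
    meteor run --settings settings.json

    # settings.json would contain something like:
    # {
    #   "redisOplog": {
    #     "redis": { "host": "127.0.0.1", "port": 6379 }
    #   }
    # }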

Hi guys, thanks for the replies.

I have maybe 3-15 users concurrent max.

Here's db.stats():

{
    "db" : "meteor",
    "collections" : 9,
    "objects" : 3059,
    "avgObjSize" : 743.6103301732593,
    "dataSize" : 2274704,
    "storageSize" : 127504384,
    "numExtents" : 27,
    "indexes" : 31,
    "indexSize" : 1120112,
    "fileSize" : 520093696,
    "nsSizeMB" : 16,
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "dataFileVersion" : {
        "major" : 4,
        "minor" : 22
    },
    "ok" : 1
}

I'm running 3 MongoDB instances in replica mode on the SAME virtual host. I'm trying to save money until the time comes to pay for things, so that's why I'm self-hosting.

For optimization, the subscriptions only load the necessary data, which really hasn't made much of a difference. That cache size limit sounds pretty good.

I do notice that RAM usage is low when the server starts, and after a day or two, she's right full.

Just tried setting that memory max for Mongo, and oohhh yeah, the RAM is staying within limits! Very cool. I'll report back in a few days.

Just to be sure, are you reading the correct RAM value?

http://www.linuxatemyram.com
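
On Linux, the figure that matters is the "available" column, not "free"; a quick way to check:

    # Buffers/cache count against "free" but are reclaimable; look at "available".
    free -h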

I can confirm that setting a cache limit on each of the MongoDB replica set members has fixed the issue! Hovering around 6 GB of RAM used, which is about what I intended. Server stability is much improved as time goes on!!!
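
If anyone wants to double-check that the cap is actually being respected, the WiredTiger section of serverStatus reports it; something like this against each member (port is a placeholder):

    # Compare the configured cap with current cache usage on one member:
    mongo --port 27017 --eval 'var c = db.serverStatus().wiredTiger.cache; print(c["maximum bytes configured"], c["bytes currently in the cache"])'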

Thanks very much. Come check us out at www.StarCommanderOnline.com (The Meteor MMO)

UPDATE:

So I've been self-hosting and had connection problems, right?

Turns out I had a leak in my roof, which was shorting out my DSL line and causing really bad connection problems.

I put in a new line, and it made a MASSIVE difference. I feel pretty dumb.

But I will also confirm that the cache limit still works a week later. It still uses more RAM than I thought, but at least I have dials to turn!

Cheers all,

What's the value of running a 3-member replica set on the same host? Why not run a 1-member replica set? (Data doesn't get replicated, but you still get the oplog access you need.)

Can you do that? I assumed you had to use a minimum 3-member replica set to enable OpLog?

Jesus. So I just found out that you have to set an ENVIRONMENT VARIABLE FOR OPLOG.

This whole friggin' time I've been running a replica set… but it hasn't been using the oplog.

Just got OpLog working for sure, finally, and it runs… so good… KILL ME, I'M SO MAD and happy…

A one-member replica set is all you need to get oplog access. That'll save you some resources.

I was initially doing the same thing (running a 3-member set on a single host), but eventually realized there wasn't much point to that, so I stripped it down to 1 member, which works just fine.
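
In case it helps anyone else, the single-member setup is roughly this (set name, paths, and port are placeholders):

    # Start one mongod as a replica set so the oplog exists:
    mongod --replSet rs0 --dbpath /data/db --port 27017 --fork --logpath /var/log/mongod.log
    # Initiate the set with a single member:
    mongo --port 27017 --eval 'rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "127.0.0.1:27017" }] })'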

Excellent to know. My users are now reporting NO lag issues whatsoever!

If you’re curious to see an OpLog enabled project, hit www.StarCommanderOnline.com

I think that’s about it for this initial thread. Thanks to all who participated.

For Science.