Meteor Scaling - Redis Oplog [Status: Prod ready]

Of course, it’s currently working with collection2 and hooks, and it will continue to do so. I did not say that as a blanket statement; I said it in the context of “if we do X”, but we won’t do “X”.

1 Like

OH! I guess I missed that! Thanks so much!

Maybe someone here has encountered the same problem I describe here: https://github.com/cult-of-coders/redis-oplog/issues/103#issuecomment-271196264. If so, please share your fixes.

Thanks!

@cagapejuvy, I am already attempting to help you on GH; there’s no point in duplicating it in the forums (and this isn’t the place for it). Also, it’s a usage issue (you are not setting up Redis properly or are not specifying the credentials properly).
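For anyone hitting the same kind of error: as far as I remember the redis-oplog README, the Redis connection (including credentials) is configured under a `redisOplog` key in your Meteor settings file, and the options are handed straight to the underlying Node Redis client. A minimal sketch, with placeholder host/port/password; double-check the exact option names against the README:

```json
{
  "redisOplog": {
    "redis": {
      "host": "127.0.0.1",
      "port": 6379,
      "password": "changeme"
    }
  }
}
```

Then start Meteor with `meteor run --settings settings.json` so the package can pick it up.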

2 Likes

Any advice on sizing for Redis instances?

Does RAM really matter all that much? I would imagine it may not, since redis-oplog appears to use the pub/sub capabilities rather than using Redis as a cache.

Anyone have any experience with the AWS ElastiCache Redis instances?

I’m thinking of going with the cache.t2.small size, but Amazon warns that it has “low to moderate” network performance. Not sure what that means…

A small redis instance should be enough :slight_smile: It’s very efficient. Pick the cheapest one.

2 Likes

You’re right. I was thinking of the situation where you create IDs manually (something I often do for various reasons). But that won’t happen across different coroutines/servers.

Anyway, there’s no good solution without making everything enormously more complicated. Using the cache as the source of truth just moves the problem elsewhere. Let’s see whether people actually run into issues or whether it’s just a theoretical problem.

Would it perhaps be a good idea to write a small multi-threaded / multi-instance script that mimics the kind of load that may cause such inconsistencies, and try it out with different load and server-capacity options? That way we would at least have some sort of statistical model of the actual probability of such situations, which would let anyone make an educated decision about whether or not to use this.
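To make the idea a bit more concrete, here is the kind of probe I have in mind. It is only a rough sketch under a few assumptions: it talks to Redis directly with the classic node_redis (v3) callback API rather than going through redis-oplog, and all channel/key names are made up. Each writer process atomically takes a global sequence number (INCR) and then publishes it; the subscriber counts how often a later sequence number arrives before an earlier one, which is exactly the reordering we are worried about:

```js
// ordering-probe.js -- rough sketch, not part of redis-oplog.
// Run one process as `node ordering-probe.js sub` and several as `node ordering-probe.js pub`.
// Each writer atomically takes a global sequence number (INCR) and then publishes it;
// the subscriber counts how often a later sequence number arrives before an earlier one.
// Assumes the classic node_redis v3 callback API.
const redis = require('redis');

const CHANNEL = 'probe::ordering';
const SEQ_KEY = 'probe::seq';
const role = process.argv[2] || 'sub';

if (role === 'sub') {
  const sub = redis.createClient();
  let maxSeen = 0;
  let total = 0;
  let outOfOrder = 0;

  sub.on('message', (_channel, raw) => {
    const { seq } = JSON.parse(raw);
    total += 1;
    if (seq < maxSeen) outOfOrder += 1; // an "older" event arrived after a newer one
    maxSeen = Math.max(maxSeen, seq);
    if (total % 1000 === 0) {
      console.log(`messages=${total} outOfOrder=${outOfOrder}`);
    }
  });
  sub.subscribe(CHANNEL);
} else {
  const pub = redis.createClient();
  // Hammer Redis as fast as the event loop allows to simulate heavy concurrent writes.
  const fire = () => {
    pub.incr(SEQ_KEY, (err, seq) => {
      if (err) throw err;
      pub.publish(CHANNEL, JSON.stringify({ seq }), fire);
    });
  };
  fire();
}
```

Running one `sub` process and a handful of `pub` processes against instances of different sizes would at least give a feel for how often the race window is actually hit.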

2 Likes

Well, for this specific issue it’s more a question of how many people have this specific use case and could run into problems if messages arrive out of order.

But a test like the one you describe would be very useful for finding other possible problems. Distributed systems are hard, and I wouldn’t be surprised if there are other problems we haven’t thought of.

2 Likes

Update

Worked my ass off over the past week to make this baby production-ready. There were many things that had to be rethought, modified, added and adapted. The good news is that we’re getting there.

Will release this week or the next.

Yes, very good idea. But doing so is a big project on its own.

13 Likes

Are we talking efficient enough to be able to use the 30MB free plan on redislabs? :grin:

3 Likes

@copleykj yes, of course, we will use only 0 MB, because redis-oplog only uses the pub/sub system of Redis :smiley:

3 Likes

Fantastic work, @diaconutheodor! Look forward to using this package - have been waiting for it!!

They should give you the Meteoric Medal of Freedom!

Cheers
raskal

2 Likes

I’ve never used Redis in my life, so excuse a stupid question.

I can see this free plan on RedisLabs offers 30 so-called connections. Does that mean it can connect to up to 30 of our Galaxy instances? Or can it serve up to 30 concurrent users?

@gusto we need 2 connections for every instance: one for pub and one for sub. So it can connect up to 15 instances. And usually an instance should hold around 100 users without a hassle, so about 1,500 concurrent users can be served.

@raskal thank you, cheers.

4 Likes

Thanks. :slight_smile:

Hey @diaconutheodor, the “usually an instance should hold around 100 users” figure is a common scalability tagline for standard oplog tailing.

Is this still the same with the Redis replacement? I understand that Redis allows multiple instances to scale better, whereas oplog tailing is known to become limiting as the number of instances increases.

But I kind of had the impression (granted, I had not thought about this before) that per-instance performance would also have increased. Would you care to comment? Thanks!

1 Like

@serkandurusoy so, first of all, theory != practice (we all know that), and I need to actually test before I make any claims.

In theory, if you use redis-oplog the right way, with fine-tuning, the performance impact of reactivity should be minimal.

This means it should handle roughly the same load per instance as a plain Meteor app without oplog at all, but this depends on many factors, so it’s a difficult question to answer.
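To give an idea of what “the right way, with fine-tuning” means in practice: as I understand the redis-oplog README, you can point both the cursor and the mutation at a dedicated channel, so a write only notifies the servers that actually watch that channel instead of every listener on the collection. A sketch, assuming a `Messages` collection and a per-thread channel naming convention (check the README for the exact option names):

```js
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Messages } from '/imports/api/messages'; // app collection, assumed to exist

// Publication: listen only on the per-thread channel instead of the
// collection-wide one.
Meteor.publish('threadMessages', function (threadId) {
  check(threadId, String);
  return Messages.find(
    { threadId },
    { channel: `threads::${threadId}` } // redis-oplog cursor option
  );
});

// Mutation: push the change notification onto the same channel so the
// publication above picks it up.
Meteor.methods({
  'messages.insert'(threadId, text) {
    check(threadId, String);
    check(text, String);
    return Messages.insert(
      { threadId, text, userId: this.userId, createdAt: new Date() },
      { channel: `threads::${threadId}` } // redis-oplog mutation option
    );
  },
});
```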

1 Like

Update

Release 1.1.6

This is our most stable redis-oplog yet.

The next release will contain:

  • Code documentation, brought up to the level of standard the Meteor community likes :slight_smile:
  • Proper user documentation explaining the data flow.
  • Some BENCHMARKS!

We’ll keep rocking and get as stable as possible!

24 Likes

OK so I have been testing this out.

As I understand it, there might be only minimal performance improvements unless I start using custom channels, right?

What would be awesome is some kind of way to specify a default channel on a collection.

For example, I have many collections for which I would (almost) always want a userId as part of the channel specifier.

Otherwise, as I understand it, I would have to modify my entire application to be redis-oplog-aware…

What do you think?
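For what it’s worth, the interim workaround I had in mind looks roughly like this: a tiny, purely hypothetical helper (not an existing redis-oplog feature) that builds the userId-scoped channel options in one place, so at least the channel naming convention isn’t repeated all over the app:

```js
import { Meteor } from 'meteor/meteor';

// channel-helpers.js -- hypothetical helper, not a redis-oplog API.
// Centralises the userId-scoped channel convention so publications and
// mutations don't have to repeat it everywhere.
export const userChannel = (collectionName, userId) => ({
  channel: `${collectionName}::user::${userId}`,
});

// Usage in a publication (Tasks is an app collection, assumed to exist):
Meteor.publish('myTasks', function () {
  return Tasks.find({ userId: this.userId }, userChannel('tasks', this.userId));
});

// Usage when mutating (ownerId is whatever user the document belongs to),
// so the change notification lands on the same channel:
Tasks.insert({ userId: ownerId, title: 'Buy milk' }, userChannel('tasks', ownerId));
```

It still means touching every mutation, though, so a collection-level default would definitely be nicer.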