Redis Oplog and AWS ElastiCache

I’ve been a long-time production user of redis-oplog, using it with a hosted cloud Redis server. Yesterday that provider had a 12+ hour outage which hosed my app (along with many others), and I had to scramble to switch hosts. I wanted to set up an AWS ElastiCache cluster, but the setup was too complicated to figure out on the fly. Now that the app is stable on ScaleGrid, I was wondering if anyone has experience with setting up ElastiCache (@ramez @diaconutheodor). I have a few questions about the settings.

  • Does it work with the replicas setting? I’ve only ever used redis-oplog with a stand-alone instance (which is why I want to use a replica for failover). Does failover actually work? It seems like when you use replicas you get a write URL and a read URL. How does that square with redis-oplog’s single url parameter?
  • What’s a good EC2 node type? It should work with a pretty small VM, right?
  • Does multi-availability-zone work?
  • Encryption at rest? Encryption in transit? Does anyone have these working with redis-oplog?
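For context on the single-url question: as far as I can tell, redis-oplog takes one Redis connection config in Meteor settings, so a separate read URL from a replica wouldn’t be used out of the box. A minimal sketch of what I mean (endpoint name is a placeholder, not a real ElastiCache hostname):

```json
{
  "redisOplog": {
    "redis": {
      "host": "my-cluster.abc123.use1.cache.amazonaws.com",
      "port": 6379
    }
  }
}
```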

I’m assuming backups aren’t needed since it’s just a cache. I’d love some feedback, best practices, and info on this.

I’ve set this up. It’s not too bad, but I never got the failover working properly. Not sure if the problem is with redis-oplog, my config in Meteor, or my AWS config. On the bright side, neither the production nor the staging cluster I set up has had an outage in over 18 months. I used a t2.medium and it was way too big, but it will depend on your usage. I’d advise setting up an entirely new cluster rather than trying to resize an existing one, and then switching over to the new one. Given that failover doesn’t work nicely, neither does multi-zone availability, unfortunately. Happy to share my config privately if you like.

1 Like

Thanks for the VM recommendation. Yeah, I think we’ll be able to get away with something pretty small too. Even with a lot of users it doesn’t seem like Redis has much problem with throughput as a cache.

Did you get any of the encryption or SSL stuff working? So you’re only running a single stand-alone instance too? That scares me somewhat after what happened with our previous host, since our app relies on reactivity between users. It’s not just UI icing for a single user, so if it goes down, our app becomes non-functional.

And sure, you can DM your config. I may have more questions. Thanks!

We have a second instance, but the failover doesn’t seem to work; you’d have to trigger it manually, so there’s not much point. Encryption in transit, yes. Encryption at rest I didn’t bother with, no need. I’m away for the next week. Will get you the config when I get back.

Is it possible to access ElastiCache from Galaxy? A lot of the articles mention only being able to access it from within AWS on your own VPC. It doesn’t seem to be accessible via a typical redis:// URL.

My Galaxy deployment is technically in the same region, but of course it’s owned by Galaxy and not me. Are you using ElastiCache with your own AWS hosting? Or have you managed to get it to work with Galaxy and/or hosting outside of your own AWS?

You can for sure access it over a redis:// URL. I can’t imagine it wouldn’t be possible; if I had to guess, you’d need to configure security groups to allow either all public IPs (a little risky) or the specific IPs of your Galaxy containers. If Galaxy uses Amazon VPCs too, in theory they could set up a VPC peering connection to allow communication between their cluster and yours. I don’t know if they expose that to customers, though.
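To make the security-group suggestion concrete, this is roughly the AWS CLI call that would open the Redis port to one specific IP. A sketch only, untested here; the group ID and IP are placeholders, and whether the cluster is reachable at all from outside the VPC still depends on your networking setup:

```shell
# Allow a single public IP to reach the ElastiCache port through
# the cluster's security group (IDs/IPs are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6379 \
  --cidr 203.0.113.10/32
```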

I’ve only needed it within my own VPC.

1 Like


Really sorry for taking so long to answer

We use Redis (ElastiCache) on a single t3.micro instance (reserved, to save money) and we barely use any capacity. Nothing fancy; same VPC as the Meteor instances (scalable on Elastic Beanstalk).

We didn’t need to use any encryption as all data transmission is within a private VPC.

We use the DNS name for the url parameter of redis-oplog (`<instancename>.*`) and port 6379.
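Since ElastiCache just hands you a DNS endpoint, a quick sketch of turning a typical redis:// URL into the host/port pair discussed above, using Node’s built-in URL parser (the hostname is a made-up example, not a real endpoint):

```javascript
// Sketch: split a redis:// URL into the host and port fields a
// Redis client config would take (endpoint name is hypothetical).
const redisUrl = new URL('redis://my-cluster.abc123.use1.cache.amazonaws.com:6379');

const redisConfig = {
  host: redisUrl.hostname,              // the ElastiCache DNS endpoint
  port: Number(redisUrl.port) || 6379,  // fall back to the default Redis port
};

console.log(redisConfig); // { host: 'my-cluster.abc123.use1.cache.amazonaws.com', port: 6379 }
```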

@ramez You host Meteor yourself on AWS. You’re not using Galaxy right?

@evolross, right. We use Elastic Beanstalk in the same VPC as the Redis instance.

1 Like