Galaxy container has way too many connections

I have a client app on Galaxy that’s running on a single compact container. It’s usually at 10-12 connections. However, I saw the site was down (data not loading) and was a bit shocked to see the connections hovering around 190. I immediately spun up a load of new containers, and it seems the load is very gradually being shared.

At the time of writing I still have over 120 connections on the single container. If I load the site via that container the data won’t load; the others (8-10 connections each right now) are snappy.

My question is: I can kill the container with the bulk of the connections, but I’m not sure if that’s a good idea. Will those clients reconnect automatically to another container, or will the user have to refresh? I assume all new connections go to the new containers, but the old one is still massively overloaded and I want to solve that problem more quickly in future.

Ah, I figured no one could connect to that container anyway, so I killed it. It appeared to restart, and the load was immediately distributed across all the other containers. Nice!

Now I’m trying to work out the optimal number of containers. I have about 20-25 connections per container and everything seems pretty fast. I’ll keep removing containers one at a time to see how performance is affected.

To all those who say Galaxy is too expensive: this is where the value is. I’m really glad I was able to fix this issue trivially. If I were on a self-hosted DO droplet this would have been a nightmare.


Turns out we’re on Reddit. The site is an aggregation of clothing sales from around the web —

We got really lucky on this one, as I just happened to check the site. Does anyone know of a way to get an alert if a container has too many connections?
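In case it helps anyone asking the same thing: one DIY approach is to track the container’s own DDP connection count server-side and fire an alert past a threshold. Here is a minimal sketch; the counter is plain JS, and the commented-out wiring uses Meteor’s documented `Meteor.onConnection` hook. The threshold (50) and the alert action are assumptions to adapt, not anything Galaxy provides out of the box.

```javascript
// Sketch: track live connections per container and flag overload.
// The threshold and alert transport (console.warn here) are placeholders.
function makeConnectionMonitor(threshold) {
  let count = 0;
  return {
    opened() { count += 1; },        // call when a DDP connection opens
    closed() { count -= 1; },        // call when it closes
    count() { return count; },
    overloaded() { return count >= threshold; },
  };
}

// Hypothetical Meteor wiring (server side), assuming Meteor.onConnection:
// const monitor = makeConnectionMonitor(50);
// Meteor.onConnection((conn) => {
//   monitor.opened();
//   conn.onClose(() => monitor.closed());
// });
// Meteor.setInterval(() => {
//   if (monitor.overloaded()) {
//     console.warn(`Container at ${monitor.count()} connections!`);
//   }
// }, 60 * 1000);
```

From there you could swap `console.warn` for an email or Slack webhook so you don’t have to catch it by luck.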