During the day, new clients log in and get connected to a specific container, but they don't use the app heavily yet.
Around the same time, during our peak hours, most clients start using the app heavily, and memory and CPU usage increase.
We have set autoscaling triggers at 50% CPU and 90% memory, so once one of those thresholds is hit a new container is spun up, but already-connected clients remain on the first container.
Does anyone have any suggestions as to how to handle this?
Is there maybe some way to force reconnection of Meteor’s WebSocket connection after a new container has spun up?
Generally, when horizontal scaling is in force and a new pod comes up, it does not come at the cost of an older pod; that is, the older pod will not be killed. I am assuming you are using an HPA. So that should not be a problem.
Your load balancer should take care of rerouting connections.
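One caveat, though: an existing WebSocket session stays pinned to the pod it first connected to until that connection closes, so scale-up alone will not move clients. If you want to force a rebalance, Meteor's server-side `Meteor.onConnection` hook lets you track live DDP connections and close them yourself; closed clients reconnect automatically and go back through the load balancer. A minimal sketch (the `shedConnections` helper and the 25% fraction are just illustrative, not a Meteor API):

```ts
import { Meteor } from 'meteor/meteor';

// Keep a handle on every live DDP connection.
const connections = new Set<Meteor.Connection>();

Meteor.onConnection((connection) => {
  connections.add(connection);
  connection.onClose(() => connections.delete(connection));
});

// Close a fraction of connections; each affected client reconnects
// through the load balancer and may land on the new pod.
export function shedConnections(fraction = 0.25) {
  const count = Math.ceil(connections.size * fraction);
  Array.from(connections)
    .slice(0, count)
    .forEach((connection) => connection.close());
}
```

You could call something like `shedConnections()` from a scale-up hook or a periodic job; since DDP clients reconnect on their own, the disruption is usually just a brief resubscribe.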
Connection issues may occur while doing new deployments; that needs to be taken care of by choosing the right deployment window.
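If the drops happen because Kubernetes kills the pod mid-connection, you can also drain gracefully: Kubernetes sends SIGTERM before stopping a pod, so closing the DDP connections yourself at that point lets clients reconnect to the pods that remain. A sketch, reusing the `connections` set from above (the 10-second delay is an assumption; align it with your terminationGracePeriodSeconds):

```ts
// Drain on shutdown: Kubernetes sends SIGTERM before killing the pod.
process.on('SIGTERM', () => {
  connections.forEach((connection) => connection.close());
  // Leave a short window for clients to reconnect elsewhere before
  // the process exits.
  setTimeout(() => process.exit(0), 10_000);
});
```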
But if you are saying you do not have enough traffic even for one Meteor app instance, then the Kubernetes route may be too expensive to manage.
Instead, a multicore VM running multiple instances of the app would be better.
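For example, you could run one bundled Meteor instance per core on sequential ports and let nginx (or any reverse proxy) balance across them. A rough launcher sketch; the bundle path, ROOT_URL, and port range are assumptions for illustration:

```ts
import { spawn } from 'child_process';
import { cpus } from 'os';

// Start one Meteor instance per CPU core, on ports 3001, 3002, ...
// A reverse proxy in front would distribute traffic across them.
for (let i = 0; i < cpus().length; i++) {
  const port = 3001 + i;
  spawn('node', ['/opt/app/bundle/main.js'], {
    env: {
      ...process.env, // MONGO_URL etc. inherited from the parent
      PORT: String(port),
      ROOT_URL: 'https://app.example.com', // hypothetical
    },
    stdio: 'inherit',
  }).on('exit', (code) => {
    console.log(`instance on port ${port} exited (${code})`);
  });
}
```

All instances share the same database, so from the clients' point of view it behaves like one app; you just lose the per-pod isolation Kubernetes gives you.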