I’m running an application that makes a couple of thousand HTTP GET requests. I run them in a fiber, 250 in parallel at a time, then move on to the next batch.
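For reference, here’s a stripped-down sketch of what the batching loop looks like (the real URL list, request options, and result handling are omitted; `urls` is just a placeholder):

```js
import { Promise } from 'meteor/promise';
import { HTTP } from 'meteor/http';

const BATCH_SIZE = 250;

// Split the URL list into chunks of BATCH_SIZE, fire each chunk in
// parallel, and block the fiber until the whole chunk has settled
// before moving on to the next one.
function fetchInBatches(urls) {
  const results = [];
  for (let i = 0; i < urls.length; i += BATCH_SIZE) {
    const batch = urls.slice(i, i + BATCH_SIZE);
    const batchResults = Promise.await(Promise.all(
      batch.map(url => new Promise((resolve) => {
        // HTTP.get with a callback is async, so all 250 requests of a
        // batch are in flight at the same time.
        HTTP.get(url, {}, (err, res) => resolve(err ? { error: err } : res));
      }))
    ));
    results.push(...batchResults);
  }
  return results;
}
```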
Locally, the application runs fine without any problems. When I host it on Meteor Galaxy on a 512 MB container, it hits 100% CPU usage and crashes, even with only one user on it. When I upped it to a 1 GB container, it still hit 100% CPU, but it came back down after a while and became responsive again. That’s still with only one user.
I ran Kadira and it seems the HTTP requests are the ones taking the most time. The pub/sub response times are short, so that’s not what’s eating the CPU.
How can I make these HTTP requests more efficient, so I don’t have to keep upping the container size?
Are you sure that it is batching them in groups of 250, and not just running them ALL in parallel?
How many requests are being run in total? 1,000… 10,000?
It’s definitely batching them in groups of 250; I tested this to make sure. Before this I was running them all in parallel and the application would crash entirely on Galaxy, but batching has stopped that. Ultimately I want to run 10k+ HTTP requests (250 or 500 at a time), but for now I’ve limited it to roughly 1,000 in total (250 at a time) so as not to overload the CPU.
Have you been able to resolve this? What was the issue?
We’re having the exact same problem here: a single active user hitting the subscription takes a Compact container to 100%, after which it crashes.
Hey, I think the overall problem was that I was making too many HTTP requests in parallel. @robfallows cleared it up for me: if I’m making 500 requests in parallel, for example, the application has to reserve roughly 500 times the memory of a single request. That’s why my application was crashing. I reduced the number of simultaneous requests and that solved my issues.
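In case it helps anyone who lands here, a minimal sketch of what “reducing the number of simultaneous requests” can look like (the concurrency value and the `fetchOne` function are placeholders, not the numbers or code from my app). With N requests in flight, the server holds up to N response bodies in memory at once, so peak memory grows roughly as N × average response size:

```js
// Keep a fixed pool of workers, each pulling the next URL when it is
// free, so at most CONCURRENCY requests are in flight at any moment.
const CONCURRENCY = 25; // example value, tune for your container size

async function fetchWithLimit(urls, fetchOne) {
  const results = new Array(urls.length);
  let next = 0;

  async function worker() {
    while (next < urls.length) {
      const i = next++; // claim the next URL (single-threaded, so no race)
      results[i] = await fetchOne(urls[i]);
    }
  }

  await Promise.all(Array.from({ length: CONCURRENCY }, worker));
  return results;
}

// Example usage with plain fetch (Node 18+); in a Meteor method you
// could pass a wrapper around HTTP.get instead.
// const bodies = await fetchWithLimit(urls, url => fetch(url).then(r => r.text()));
```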