Docker stats CPU above 100%

Can someone clarify how to read this on a multicore machine? For instance, if I'm understanding correctly, if you have 4 cores and 100% CPU usage, then that's either all 4 cores running at 25% each OR 1 core running at 100%? The former seems “healthy” while the latter is potentially a problem (at least in our use case).
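
One way to tell those two cases apart is to look at per-core utilization rather than the aggregate number (note that docker stats scales the percentage to the number of cores, so a 4-core host can show up to 400%, hence the thread title). Here's a rough sketch using only Node's built-in os module, sampling each core over a one-second window; run it on the host or inside the container:

```js
const os = require('os');

// Snapshot cumulative per-core CPU times from the OS.
function snapshot() {
  return os.cpus().map(({ times: t }) => ({
    idle: t.idle,
    total: t.user + t.nice + t.sys + t.idle + t.irq,
  }));
}

const before = snapshot();
setTimeout(() => {
  const after = snapshot();
  after.forEach((a, i) => {
    const idle = a.idle - before[i].idle;
    const total = a.total - before[i].total;
    console.log(`core ${i}: ${(100 * (1 - idle / total)).toFixed(1)}% busy`);
  });
}, 1000);
```

If one core sits near 100% while the others idle, you are in the second (problematic) case.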

AI is great at answering these Node-related questions.

Since a single Node process can only saturate one core, its ceiling on a 4-core box is 25% of total CPU, so you would set your autoscaling trigger to something around 20%. With 2 cores the ceiling is 50%, so you would set it at around 40%.

If you have very complex methods with a lot of calculation, statistics, big data processing, etc., you would need CPU-optimized boxes, but not ones with more CPUs; rather, ones with better CPU(s).
If you do a lot of reactivity, you would probably go for memory-optimized boxes: 1-2 CPUs and a lot of memory. (With 2 CPUs, running one Node process per core, each process effectively gets half the memory of your box.)

Node.js is for web applications; it delivers content faster than almost anything else. Technically, for that, you should not need godzilla-level CPUs. If you need to deliver more, you balance across more machines. If you need to calculate more, or do intensive processing like media encoding… don't go for Node.

I don't think anybody here can tell you exactly how you should build your infrastructure. Only you can know that; it is specific to every project and requires education and training.

"
Why Node.js Using One Core Can Cause Crashes
1. Single-Threaded Nature of Node.js:
• Node.js runs JavaScript code on a single thread, which means it can only utilize one CPU core for executing JavaScript tasks. Even if your EC2 instance has 2 cores, Node.js won't automatically distribute its workload across both unless you explicitly use clustering or worker threads (see the sketches after this quote).
• If the single thread becomes blocked (e.g., due to a CPU-intensive task), the application may stop responding, leading to crashes.
2. CPU Usage Misinterpretation:
• On a dual-core EC2 instance, a single-threaded Node.js process that fully saturates its core shows up as only ~50% total CPU, so a reading of 60% suggests one core is pegged at 100% while the other does little. In other words, your Node.js application is likely maxing out its single-threaded capacity.
• When the main thread is overwhelmed, it can’t process incoming requests or events, causing the application to hang or crash.
3. Memory Exhaustion:
• Crashes are often caused by memory issues rather than CPU. If your application consumes more memory than available, the system may start swapping, leading to performance degradation and eventual crashes.
• Node.js’s garbage collector may also struggle under heavy load, exacerbating memory issues.
4. Blocking Operations:
• If your application performs CPU-intensive tasks (e.g., data processing, encryption) on the main thread, it can block the event loop. This prevents other operations from being handled and may lead to timeouts or crashes.
5. Insufficient System Resources:
• Even if CPU usage is moderate, other resource constraints (e.g., insufficient RAM, disk I/O limits) can cause instability.

"

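For the blocking-operations point (item 4 above), here is a minimal worker_threads sketch that moves a CPU-heavy computation off the main event loop; the recursive fibonacci is just a stand-in for your real workload:

```js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Offload the heavy work so the event loop keeps handling requests.
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on('message', (result) => console.log('result:', result));
  worker.on('error', (err) => console.error(err));
} else {
  // Deliberately CPU-bound stand-in (imagine stats or media processing).
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData));
}
```
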
You can use something like meteor-down (https://github.com/meteorhacks/meteor-down) for load testing Meteor.
You might need to rewrite it a bit, but it will let you run your methods multiplied by a number of concurrent connections and observe how your CPU and concurrency hold up.
You would start with perhaps 20 connections and go up until you either crash your server or get timeouts because you've hit a connection limit.
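
A load script along these lines (a sketch based on the meteor-down README; double-check the repo for the exact current API, and 'yourMethod' is a placeholder for one of your real methods):

```js
const meteorDown = require('meteor-down');

// Each simulated client connects over DDP and runs this function.
meteorDown.init(function (Meteor) {
  Meteor.call('yourMethod', function (err, res) {
    // Ending the session lets a fresh connection take its place.
    Meteor.kill();
  });
});

meteorDown.run({
  concurrency: 20,              // start here and ramp up as suggested above
  url: 'http://localhost:3000',
});
```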

Thank you for your reply.
So what should I do? Right now my app is running on 3 main machines, each with 2 CPUs and 4 GB RAM, behind 1 load balancer. It still crashes. Is there a better solution?
