Native cluster not working?


#1

Hello!
I created a Meteor app with the --minimal option.
Then I tried to use the Node.js cluster module like this:

import { Meteor } from "meteor/meteor";
Meteor.startup(() => {
    let cluster = require('cluster');
    if (cluster.isMaster) {
        let worker = cluster.fork();
        worker.on('exit', function() {
          console.log("Worker id " + worker.id + " exited.");
        });
    }

    if (cluster.isWorker) {
        console.log("Hello. I am worker number " + cluster.worker.id);
    }
});

It’s not working, and I really don’t understand why. It gives me this error:

I20181208-16:39:48.813(1)? Hello. I am worker number 1
W20181208-16:39:48.841(1)? (STDERR) events.js:183
W20181208-16:39:48.844(1)? (STDERR)       throw er; // Unhandled 'error' event
W20181208-16:39:48.848(1)? (STDERR)       ^
W20181208-16:39:48.849(1)? (STDERR)
W20181208-16:39:48.851(1)? (STDERR) Error: bind EADDRINUSE 0.0.0.0:28202
W20181208-16:39:48.856(1)? (STDERR)     at Object._errnoException (util.js:992:11)
W20181208-16:39:48.858(1)? (STDERR)     at _exceptionWithHostPort (util.js:1014:20)
W20181208-16:39:48.859(1)? (STDERR)     at listenOnMasterHandle (net.js:1415:16)
W20181208-16:39:48.861(1)? (STDERR)     at shared (internal/cluster/child.js:115:3)
W20181208-16:39:48.863(1)? (STDERR)     at Worker.send (internal/cluster/child.js:86:7)
W20181208-16:39:48.865(1)? (STDERR)     at process.onInternalMessage (internal/cluster/utils.js:42:8)
W20181208-16:39:48.867(1)? (STDERR)     at emitTwo (events.js:131:20)
W20181208-16:39:48.869(1)? (STDERR)     at process.emit (events.js:214:7)
W20181208-16:39:48.870(1)? (STDERR)     at emit (internal/child_process.js:772:12)
W20181208-16:39:48.871(1)? (STDERR)     at _combinedTickCallback (internal/process/next_tick.js:141:11)
W20181208-16:39:48.875(1)? (STDERR)     at process._tickDomainCallback (internal/process/next_tick.js:218:9)
I20181208-16:39:48.877(1)? Worker id 1 exited.

NB: I also tried to use import cluster instead of require, but it doesn’t change anything.

If you have any idea, I would greatly appreciate your help on this!


#2

I’ve not used this module, but it looks to me like the port the process is trying to use (28202) is already in use. My guess is that Meteor is restarting and the startup block is being called multiple times, resulting in the port already being bound.


#3

I only have Meteor running, and before it starts, the port is free. In addition, the port Meteor picks is random; it changes every time it starts.


#4

Can you check if there are any processes being started on that port? I’m just wondering if the fork function is being called multiple times; it’s just a shot in the dark.


#5

Well, as I said, the port is totally random and I have no process running other than Meteor. There is only this code in my app; it’s a simple test … you can try it. I don’t understand, but yes, it does seem the fork is called twice. How?


#6

I’m not familiar with the native cluster module, but as I understand it, it literally spins up a copy of the running code, which will always spin up another Meteor instance listening on the same port (I suspect the port comes from an environment variable).

You could try passing { PORT: 3001 } or something like it to cluster.fork, but I don’t think that will work (except possibly in a production environment). What are you hoping to achieve here, though? This seems equivalent to just running multiple Meteor processes.


#7

Thanks, it works! I have a main process for users which calls another one dedicated to heavy tasks. I need a worker pool for the heavy-task server, and I found nothing that lets me use every CPU thread with functions built following the Meteor docs.

I tried the npm package workerpool, but I got an error on the very first line when I tried to specify a module, on the word “import”. I also tried PM2, but it’s a load balancer for many users and one server. With my setup, I have one connection from the main app to one server which has to compute some heavy tasks.

That’s why I need to make my own … even if it’s very strange that I found no one with the same problem who has already published something. If I missed something, please tell me :slight_smile: