No, Meteor runs on Node, and Node's JavaScript execution is inherently single-threaded (one event loop per process). The only way to use multiple cores is to cluster multiple instances.
The quote you mentioned just describes how you write server code: you write synchronous-looking code, but it runs asynchronously under the hood (read up on Fibers).
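To make that concrete: what Fibers give Meteor implicitly, plain Node gives explicitly with async/await. The code below reads top-to-bottom, but each `await` yields the single thread back to the event loop (`fetchUser` is a hypothetical async helper standing in for a real DB call, not a Meteor API):

```javascript
// Hypothetical async data access, standing in for a database query.
function fetchUser(id) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id, name: 'Ada' }), 10)
  );
}

async function handler() {
  // Looks like a blocking call, but the event loop keeps serving
  // other requests while this one waits on the "database".
  const user = await fetchUser(1);
  return `hello ${user.name}`;
}
```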
My 2 cents on job queues: I've scaled to a much larger infrastructure than just Meteor servers, with microservices for specific processes. The central communication layer for all of this is a messaging system (think RabbitMQ, Kafka, Google Pub/Sub, or Redis). Using that for job queues is more secure, stable, and flexible: multiple microservices can consume jobs concurrently, and you can write workers in multithreaded languages (Go, for example) to process jobs in parallel. Also consider serverless architectures for certain processing (AWS Lambda, Gestalt); they can be extremely cost-effective in some cases.
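The broker-backed pattern boils down to producers serializing jobs onto a shared queue and independent consumers pulling them off. A minimal in-process sketch, with a plain array standing in for RabbitMQ/Kafka/a Redis list (the job shapes here are made up for illustration):

```javascript
const queue = [];

// Producer side: serialize the job (brokers move bytes, not objects)
// and push it onto the shared queue.
function enqueue(job) {
  queue.push(JSON.stringify(job));
}

// Consumer side: each call stands in for one worker microservice
// pulling the next job off the broker. With a real broker, several
// workers (in any language) can do this concurrently.
function dequeue() {
  const raw = queue.shift();
  return raw === undefined ? null : JSON.parse(raw);
}

enqueue({ type: 'sendEmail', to: 'user@example.com' });
enqueue({ type: 'resizeImage', id: 42 });

const job = dequeue(); // first worker picks up the email job
```

Swapping the array for Redis `LPUSH`/`BRPOP` or a RabbitMQ queue gives you the same interface across processes and machines, which is what makes the parallel, polyglot workers possible.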
Of course, none of this involves a Meteor/npm package, and for smaller or early-stage projects it's overkill. I just wanted to address how to scale job processing: either cluster Meteor servers or build on a microservices/serverless architecture.