Issues migrating from mLab to Atlas

I have a couple of old (2018 is very old in JavaScript terms) Meteor applications that were running fine on Galaxy and mLab.

These two apps, a user front end and an admin app, share the same database on mLab. I've just updated them to Meteor 1.8.3 to make sure I could follow the compulsory migration from mLab to Atlas; Meteor 1.8.2 is the minimum requirement. I have even upgraded the user front end to the latest Meteor, 1.11, but I am still working on the admin app, as some packages are incompatible.

These apps are in production, and I have been testing the migration using Heroku and my own development machine. I have created a free cluster in Atlas with some sample data in the same MongoDB structure. In both environments and with both versions, 1.8.3 and 1.11 (in the case of the front end), I am having the following issues and cannot connect to Atlas:

  1. Connecting with the connection string at the terminal:
    $ MONGO_URL="mongodb+srv://" meteor

I get the following error: Error: invalid schema, expected mongodb

It seems that even Meteor 1.11 does not recognize the mongodb+srv nomenclature.
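For what it's worth, older MongoDB drivers simply did not know the mongodb+srv:// scheme, which would explain the "invalid schema" wording. A toy sketch of that check, just to illustrate the behavior (`checkScheme` and `supportsSrv` are made-up names, not driver API):

```javascript
// Minimal sketch of why older drivers reject the SRV string: they only
// understand the mongodb:// scheme. `supportsSrv` is a hypothetical flag
// standing in for the driver version check.
function checkScheme(mongoUrl, supportsSrv) {
  if (mongoUrl.startsWith('mongodb+srv://')) {
    return supportsSrv ? 'ok' : 'invalid schema, expected mongodb';
  }
  if (mongoUrl.startsWith('mongodb://')) {
    return 'ok';
  }
  return 'invalid schema, expected mongodb';
}

console.log(checkScheme('mongodb+srv://cluster0.example.net/db', false));
// → invalid schema, expected mongodb
console.log(checkScheme('mongodb://host1.example.net,host2.example.net/db', false));
// → ok
```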

  2. If changed to the old nomenclature at the terminal:
    $ MONGO_URL="mongodb://,," meteor

It gives me:
Error [ERR_TLS_CERT_ALTNAME_INVALID] [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: is not in the cert's altnames: DNS:*,

Of course, if I change to ssl=false, the connection is dropped:
MongoNetworkError: failed to connect to server [] on first connect [MongoNetworkError: connection 5 to closed
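For anyone reproducing this: the ssl flag is passed as a query-string option on the connection string itself. The hosts, credentials, and replica-set name below are placeholders, not real Atlas values:

```shell
# hypothetical hosts/credentials; substitute your own Atlas values
MONGO_URL="mongodb://user:pass@shard0.example.net:27017,shard1.example.net:27017/mydb?ssl=true&replicaSet=myReplSet&authSource=admin" meteor
```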

I have researched any special requirements regarding SSL and found this video, Connect meteor to Atlas 2020, which works fine doing exactly what I've been doing.

Can anyone help me?

  1. Why doesn't mongodb+srv work?
  2. Why can't I connect to Atlas? Will my production apps suffer the same issue?

Before anything else, I just wanted to make sure that you created a user (different from admin/default) in Atlas with read/write access, and that you whitelisted all IPs … at least for testing.

I can confirm that both URL versions work with Meteor and Atlas. I would first make sure that I can connect from a cloud session (SSL enabled) and, when that is OK, try my machine. Your machine by default doesn't use SSL.

Yes, I have of course created a user, different from admin/default, and have whitelisted all IPs.

In fact, I can reach the database using Studio3T with the user created and from the very same machine.

In either case, whether on my development machine or through the Heroku staging one, connections with either string give the same error.

The Meteor 1.11 version of my front-end application was actually created from scratch. I created a new basic app, added all the necessary packages (both Meteor and npm) and replaced all the code with mine. It runs OK with the local database and mLab, but gives the very same error messages as the 1.8.3 version upgraded in the same folder.

I have just created a new basic Meteor 1.11 application and made only one modification to the code, adding one collection from the Atlas cluster, and tested with the same command line above.

It magically worked! I could access this one collection from the Meteor shell.

So what could be going on?

  1. Could there be some sort of conflict with packages holding Meteor back from fully complying with Atlas?
  2. Could some of the other collections (GridFS?) be corrupt or incompatible?

Regarding the mongodb+srv notation: it did not produce any error messages, but the collection came back empty when tested in the shell. So there is another issue there.


The problem seems to be related to the GridFS collection.

I added the packages and the collections one by one, and when it came to adding a GridFS collection, it threw the very same crash message:

Error [ERR_TLS_CERT_ALTNAME_INVALID] [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: is not in the cert's altnames: DNS:*,

The question now is: how do I solve it? Is the GridFS db corrupted? Is there a way to fix it?

I am using the excellent vsivsi:file-collection package to access the GridFS collection.

Thanks, but it doesn't seem to be SSL related. It doesn't seem to be related to Heroku either, as the problem happens on the development machine as well, and it doesn't happen if the GridFS collection is not referenced.

I had issues too with mLab to Atlas - see this post for more info.

It seems to be related to both GridFS and SSL after all.

I am using the vsivsi:file-collection package and found that it was having issues with SSL connections due to the older MongoDB driver version bundled with Meteor at the time. The package has not been maintained since 2018, though.

In this post, the package owner (Vaughn Iverson) gave a possible solution that I could not figure out how to implement. I don't know how to put this code in the Meteor app:

@db = Meteor.wrapAsync(mongodb.MongoClient.connect)(process.env.MONGO_URL,{ sslValidate: false })

What does the @db mean?

I guess the long-term solution would be to change the GridFS package.

Which one would you recommend nowadays? Is there an npm package that allows publish/subscribe?


Why do you need GridFS? Atlas and any other host charges (also) by traffic and DB volume. Do you need to store documents (images, PDFs, etc.)? If yes, and privacy is no concern, you can upload to AWS S3 and serve via CloudFront.
If you want privacy, and AWS S3 or GridFS with an up-to-date package, you could go for

Yep! Maybe you’re right.

However… this would mean reorganizing all my data, downloading all pictures and re-uploading them in another format (S3), paying for another service, and changing a lot of my code.

I guess I would have to do it one day…

I wish there were a quicker GridFS way, so I could make this transition at a more adequate pace.

OK: GridFS on a local MongoDB in the same infrastructure where you are. Heroku is not so great as a PaaS; you would fit best on DigitalOcean or AWS. So keep 2 DBs: a local one for GridFS (but you pay for transfer out from your Meteor instance) and your Meteor data in Atlas.

Anyway, moving assets from GridFS to AWS S3 is not too complicated. If you can map over all your media somehow and build the URLs, you can just push to S3.

Check this function:

const AWS = require('aws-sdk')
const request = require('request')

const s3 = new AWS.S3() // assumes AWS credentials are configured in the environment

const put_from_url = (url, key, callback) => { // key is your path to where you send in S3
  // fetch the source file as a raw buffer (encoding: null)
  request({
    url: url,
    encoding: null
  }, (err, res, body) => {
    if (err) {
      console.log('SMTG Happened, ', err)
      callback(err)
    } else {
      s3.putObject({
        Bucket: 'xxxx',
        Key: key,
        ContentType: res.headers['content-type'],
        ContentLength: res.headers['content-length'],
        // Expires: 'Thu, 15 Dec 2050 04:08:00 GMT', // if necessary
        CacheControl: 'max-age=8460000',
        Body: body // buffer
      }, callback)
    }
  })
}
You can run it in a throttled function and upload one per second.
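That throttling can be as simple as awaiting a delay between uploads. A minimal sketch, where `uploadOne` is a hypothetical stand-in for the S3 upload call above:

```javascript
// Minimal throttle sketch: process items one at a time, pausing between them.
// `uploadOne` is a hypothetical stand-in for the actual S3 upload function.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function uploadAllThrottled(items, uploadOne, intervalMs = 1000) {
  const results = [];
  for (const item of items) {
    results.push(await uploadOne(item)); // one upload at a time
    await sleep(intervalMs); // wait before starting the next one
  }
  return results;
}

// demo with a fake uploader and a short interval
uploadAllThrottled(['a.jpg', 'b.jpg'], async (key) => 'uploaded:' + key, 10)
  .then((results) => console.log(results.join(',')));
// → uploaded:a.jpg,uploaded:b.jpg
```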
Or check this: