Galaxy deployment breaks accounts-base package with MongoDB 3.4+

After days of trying to identify whether Atlas or Meteor was the problem, we’ve narrowed it down with the help of MongoDB customer service.

To recap:

When we switched from Compose.io to Atlas, all of a sudden we couldn’t log into our app anymore. As this happened amidst other problems with the free M0 tier (no oplog support, MONGO_URL string problems, connection problems, etc.), it took us longer to discover.

Our frontend app is constantly restarting, every 1 minute 12 seconds, crashing 9 seconds after each start. No notification or alert from Galaxy, by the way (which I was expecting, as you can set all sorts of alerts on other SaaS platforms).

Here’s the log:

2017-06-14 00:54:23+01:00 Note: you are using a pure-JavaScript implementation of bcrypt.
2017-06-14 00:54:23+01:00 While this implementation will work correctly, it is known to be
2017-06-14 00:54:23+01:00 approximately three times slower than the native implementation.
2017-06-14 00:54:23+01:00 In order to use the native implementation instead, run
2017-06-14 00:54:23+01:00
2017-06-14 00:54:23+01:00 **meteor npm install --save bcrypt**
2017-06-14 00:54:23+01:00
2017-06-14 00:54:23+01:00 in the root directory of your application.
2017-06-14 00:54:24+01:00
2017-06-14 00:54:24+01:00 /app/bundle/programs/server/node_modules/fibers/future.js:313
2017-06-14 00:54:24+01:00 throw(ex);
2017-06-14 00:54:24+01:00 ^
2017-06-14 00:54:24+01:00 **MongoError: no SNI name sent, make sure using a MongoDB 3.4+ driver/shell.**
2017-06-14 00:54:24+01:00 at Object.Future.wait (/app/bundle/programs/server/node_modules/fibers/future.js:449:15)
2017-06-14 00:54:24+01:00 at new MongoConnection (packages/mongo/mongo_driver.js:219:27)
2017-06-14 00:54:24+01:00 at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
2017-06-14 00:54:24+01:00 at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
2017-06-14 00:54:24+01:00 at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19)
2017-06-14 00:54:24+01:00 at new Mongo.Collection (packages/mongo/collection.js:103:40)
2017-06-14 00:54:24+01:00 at AccountsServer.AccountsCommon (packages/accounts-base/accounts_common.js:23:18)
2017-06-14 00:54:24+01:00 at new AccountsServer (packages/accounts-base/accounts_server.js:18:5)
2017-06-14 00:54:24+01:00 at meteorInstall.node_modules.meteor.accounts-base.server_main.js (packages/accounts-base/server_main.js:9:12)
2017-06-14 00:54:24+01:00 at fileEvaluate (packages/modules-runtime.js:181:9)
2017-06-14 00:54:24+01:00 - - - - -
2017-06-14 00:54:24+01:00 at Function.MongoError.create (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/error.js:31:11)
2017-06-14 00:54:24+01:00 at /app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:483:72
2017-06-14 00:54:24+01:00 at authenticateStragglers (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:429:16)
2017-06-14 00:54:24+01:00 at [object Object].messageHandler (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:463:5)
2017-06-14 00:54:24+01:00 at TLSSocket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:309:22)
2017-06-14 00:54:24+01:00 at emitOne (events.js:77:13)
2017-06-14 00:54:24+01:00 at TLSSocket.emit (events.js:169:7)
2017-06-14 00:54:24+01:00 at readableAddChunk (_stream_readable.js:153:18)
2017-06-14 00:54:24+01:00 at TLSSocket.Readable.push (_stream_readable.js:111:10)
2017-06-14 00:54:24+01:00 at TLSWrap.onread (net.js:537:20)
2017-06-14 00:54:25+01:00 **Application exited with code: 1**
2017-06-14 00:54:29+01:00 **The container has crashed. A new container will be started to replace it.**

It might be related to this issue about bcrypt; see the lines highlighted in bold.

Atlas support says the Node driver needs to be updated:

Please note that Atlas Free Tier requires an SNI name extension to be supplied with the TLS protocol. In order to fix this issue, please make sure to use a driver (in your case Node.js >= 2.2.12) that is compatible with Atlas Free Tier.
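
For anyone wondering what the SNI part means in practice: during the TLS handshake the client announces the hostname it is connecting to (the Server Name Indication extension), and older MongoDB drivers simply never sent it, which is what the error above complains about. A minimal sketch with Node’s built-in tls module (the hostname is a made-up placeholder, not a real cluster) shows where that name comes from:

// Sketch only: Node sends the SNI extension when `servername` is set in tls.connect().
// The hostname below is a placeholder, not a real Atlas cluster.
const tls = require('tls');

const host = 'cluster0-shard-00-00.example.mongodb.net';

const socket = tls.connect({
  host,
  port: 27017,
  servername: host, // the SNI name Atlas Free Tier requires
}, () => {
  console.log('TLS handshake completed; SNI sent for', host);
  socket.end();
});

socket.on('error', (err) => console.error('connection failed:', err.message));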

It works perfectly, as before, when we run it directly from our IDE with the --production flag, so the problem is on the Galaxy end.
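
(For reference, the local reproduction is a normal run pointed at Atlas, along the lines of the command below; the connection string is a placeholder, not our real credentials.)

MONGO_URL="mongodb://user:pass@cluster0-shard-00-00.example.mongodb.net:27017/mydb?ssl=true" meteor run --production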

So far no response from MDG, as their customer service takes 1 business day to respond (even for paying customers).

What version of Meteor is this app using?

For the record, this definitely has nothing to do with bcrypt, though you would gain a slight performance benefit from taking the advice in that warning.

Sorry, forgot to mention this:

METEOR@1.4.2.6

on both the frontend and backend apps (though only the frontend is affected by it).

Relevant packages:

accounts-base@1.2.14
accounts-password@1.3.3
npm-bcrypt@0.9.2
npm-mongo@2.2.11_2

Regarding the likely solution: as you quoted from MongoDB Atlas support, connecting to MongoDB 3.4 (which Atlas runs) requires a newer version of the Node.js MongoDB driver (mongodb) that supports it.

Without knowing the exact version of Meteor you’re running, I can only make an assumption, but I’m fairly sure it’s an older release that bundles an older version of the Mongo driver.

If you consult the History.md for Meteor, you’ll see the mongodb dependency was updated to a MongoDB 3.4-compatible driver in 1.4.3.2. We update these dependencies on a regular basis, and that’s one of the many important reasons to upgrade Meteor applications to newer versions! :smile:
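
If you want to double-check which driver your built bundle actually ships before upgrading, one quick (and admittedly hacky) option is to read the driver’s package.json out of the bundle. The path below is only an assumption based on the npm-mongo paths in your stack trace, so adjust it to your bundle layout:

// Run with plain `node`; the require path mirrors the npm-mongo paths in the
// stack trace above and is an assumption about the bundle layout.
const pkg = require(
  '/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/package.json'
);
console.log('bundled mongodb driver:', pkg.version); // Atlas support asks for >= 2.2.12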

Per my above message (posted at the same moment!), you’ll definitely need to update from Meteor 1.4.2.6 to a newer version. At least 1.4.3.2 will be necessary for your app to work with a Mongo 3.4 server.

Sure, but you’re also aware that upgrading to the latest version puts production apps at high risk. We’ll do the patching and see if that solves it.

Thanks

meteor update --patch
=> Errors while initializing project:

While selecting package versions:
error: No version of standard-minifier-js satisfies all constraints: @1.2.1, @2.1.0
Constraints on package "standard-minifier-js":
* standard-minifier-js@1.2.1 <- top level
* standard-minifier-js@2.1.0 <- top level

That’s exactly the reason why we like to stay behind: usually something breaks or doesn’t work. As my employee is still soundly asleep, any recommendations on how to solve this?

Updates to dependencies (Node.js, MongoDB, OpenSSL, etc.) frequently include security fixes. You could make the same argument that not updating software to its latest versions puts production apps at risk. :slight_smile:

Meteor releases are well tested, though of course problems can occur. The best advice is to review the changes in the changelog and make your own assessment. In this particular case, your need to use a host that runs a newer MongoDB version is being impeded by the fact that you’re on an older version of Meteor. There are lots of reasons to upgrade, and not upgrading only works as long as everything else stays the same, which, for better or for worse, isn’t typical of general software development.

What version of standard-minifier-js do you have pinned in your .meteor/packages file?
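
If it is pinned to an exact version there, that pin can’t be reconciled with the @2.1.0 constraint in the error above; dropping the version suffix lets the constraint solver pick a compatible release. A hypothetical excerpt of what I mean:

# .meteor/packages (illustrative excerpt)
# A hard pin like the next line conflicts with the @2.1.0 constraint shown in the error;
# removing the "@1.2.1" suffix lets the solver choose a compatible version.
standard-minifier-js@1.2.1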

Ah, I just noticed you were doing meteor update --patch.

This won’t be a sufficient update to get you to the required Meteor 1.4.3.2, since --patch will only take you to a newer 1.4.2.x release (note the 2).

If you don’t want to go all the way to the latest release, Meteor 1.5 (which may involve some slight migration steps), try running:

meteor update --release 1.4.3.2

Done, and yes, 1.5 is too steep a step right now.

We’re up and running (at least it’s not crashing every minute), but there’s no login button. I don’t have more time to investigate why right now, so I’d say we’re halfway there.

Many thanks though, @abernix!