Mup version (mup --version): 1.2.11
Because old Docker images were using up all the space on my server, I cleaned the dangling images with this command:
docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi
and manually deleted the very old images (about 2 years old).
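(For anyone doing the same cleanup: newer Docker releases can prune dangling images in one step; this is a minimal sketch assuming Docker 1.13+ where the prune subcommands exist.)

# remove all dangling (<none>) images in one step
docker image prune -f

# show how much space images, containers, and volumes are using
docker system df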
These are my current images:
REPOSITORY                               TAG      IMAGE ID       CREATED         SIZE
jrcs/letsencrypt-nginx-proxy-companion   latest   5ca43d984da0   10 hours ago    86.8MB
jwilder/nginx-proxy                      latest   897ee6d88293   2 days ago      149MB
abernix/meteord                          base     9d1d8b1b94b2   3 months ago    509MB
jwilder/nginx-proxy                      <none>   e68a002cebec   7 months ago    147MB
mongo                                    3.4.1    0dffc7177b06   14 months ago   402MB
mongo                                    latest   0dffc7177b06   14 months ago   402MB
debian                                   latest   7b0a06c805e8   17 months ago   123MB
and these three are the ones I deleted manually:
REPOSITORY                        TAG      IMAGE ID       CREATED         SIZE
kadirahq/meteord                  latest   807754a01782   19 months ago   330.7 MB
meteorhacks/meteord               base     ac72fe65158b   24 months ago   330.7 MB
meteorhacks/mup-frontend-server   latest   57be34f378cc   2 years ago     196.5 MB
Then, after deploying, I got a 502 Bad Gateway and this error in the log:
MongoError: failed to connect to server [mongodb:27017] on first connect
at Object.Future.wait (/bundle/bundle/programs/server/node_modules/fibers/future.js:449:15)
at new MongoConnection (packages/mongo/mongo_driver.js:211:27)
at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19)
at new Mongo.Collection (packages/mongo/collection.js:99:40)
at AccountsServer.AccountsCommon (packages/accounts-base/accounts_common.js:23:18)
at new AccountsServer (packages/accounts-base/accounts_server.js:18:5)
at meteorInstall.node_modules.meteor.accounts-base.server_main.js (packages/accounts-base/server_main.js:9:12)
at fileEvaluate (packages/modules-runtime.js:197:9)
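(This trace just means the app container cannot reach the Mongo container, which mup names mongodb. A quick way to check its state, assuming the standard Docker CLI:)

# check whether the mongodb container is actually running
docker ps -a --filter name=mongodb

# inspect its recent logs for the real startup error
docker logs --tail 50 mongodb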
So I tried to set up Docker again, but then I got the Start Mongo: FAILED error, as follows:
Started TaskList: Setup Docker
[52.198.56.43] - Setup Docker
[52.198.56.43] - Setup Docker: SUCCESS
Started TaskList: Setup Meteor
[52.198.56.43] - Setup Environment
[52.198.56.43] - Setup Environment: SUCCESS
Started TaskList: Setup Mongo
[52.198.56.43] - Setup Environment
[52.198.56.43] - Setup Environment: SUCCESS
[52.198.56.43] - Copying mongodb.conf
[52.198.56.43] - Copying mongodb.conf: SUCCESS
Started TaskList: Start Mongo
[52.198.56.43] - Start Mongo
[52.198.56.43] x Start Mongo: FAILED
-----------------------------------STDERR-----------------------------------
aaa2bad21b0a61e7cf25309fceb5342796e350fbd4d7c952f: container is marked for removal and cannot be "update"
Error response from daemon: Container 75e33a04e9148b3aaa2bad21b0a61e7cf25309fceb5342796e350fbd4d7c952f is not running
Error response from daemon: driver "aufs" failed to remove root filesystem for 75e33a04e9148b3aaa2bad21b0a61e7cf25309fceb5342796e350fbd4d7c952f: could not remove diff path for id a998ec41a675869581dd0a11311f77adb7e4d6dee9bf2fbec930b1717de07b09: error preparing atomic delete: rename /var/lib/docker/aufs/diff/a998ec41a675869581dd0a11311f77adb7e4d6dee9bf2fbec930b1717de07b09 /var/lib/docker/aufs/diff/a998ec41a675869581dd0a11311f77adb7e4d6dee9bf2fbec930b1717de07b09-removing: device or resource busy
docker: Error response from daemon: Conflict. The container name "/mongodb" is already in use by container "75e33a04e9148b3aaa2bad21b0a61e7cf25309fceb5342796e350fbd4d7c952f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
-----------------------------------STDOUT-----------------------------------
3.4.1: Pulling from library/mongo
Digest: sha256:aff0c497cff4f116583b99b21775a8844a17bcf5c69f7f3f6028013bf0d6c00c
Status: Image is up to date for mongo:3.4.1
Running mongo:3.4.1
----------------------------------------------------------------------------
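From the STDERR above, the immediate conflict is that the old /mongodb container is stuck in a half-removed state while mup tries to start a new one under the same name. A sketch of how one might clear it by hand (assuming Mongo's data files live on the host rather than inside the container, so removing the container is safe, and that the host uses systemd):

# force-remove the half-deleted container holding the /mongodb name
docker rm -f mongodb

# if the aufs "device or resource busy" error persists, restarting the
# Docker daemon usually releases the busy mount
sudo systemctl restart docker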
Does anyone know how to solve this? Any help would be really appreciated.
Thanks a lot!!
---------------------------------------UPDATE-------------------------------------------------
OK, after many attempts,
the problem was solved by stopping all the containers and rebooting the server.
Not sure whether it counts as a proper fix, but it worked for me.
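Concretely, this is roughly what I ran (a minimal sketch; it assumes every container on the host can be stopped safely):

# stop every running container, then reboot to release the busy aufs mount
docker stop $(docker ps -q)
sudo reboot

# after the reboot, the stale container can be removed and mup re-run
docker rm -f mongodb
mup setup && mup deploy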