Upgrade to Meteor version 2.13 leads to all my AWS deployments failing

Please be aware of this; I might not be the only one with such problems. I upgraded my backend app two days ago to Meteor 2.13 (from 2.12). I made some other commits and then deployed; since then my deployments have been failing on AWS, and this is the memory metric since then:

As I had made other commits and the backend was running fine locally, I didn't think much about the Meteor upgrade.

Today I also updated my frontend app, and the same problem happened with my deployments:

Not as drastic but it’s still not working.

I've tried to fix it on the AWS side and even opened a ticket with them (but it's already the weekend here in Asia, so no answer yet).

I have reason to believe that it’s the new Meteor version that is causing it. I’m in the process of rolling back and will then deploy again.


Hey, how are you building your images/apps?

We are running some apps on zCloud without problems so far, using our open-source images.

Our images copy Node.js from the Meteor installation.


Thanks, Filipe, for your comment. I have sent you an email with the CI/CD script.


I took a quick look on my cellphone, but it seems you are not using Meteor's patched Node.js 14 version.

This is the first release where Meteor's fork of Node.js is actually being used, with changes made by the Meteor team.

By the way, the first changelog was missing some changes (including the Node.js patch), so maybe you didn't see it, but I alerted the Meteor core team and they updated it.

Anyway, I don't think this is causing your issues, because you are just running the same Node.js version as before. It could be some other change in Meteor code itself. I will investigate more and reply to your email as well, as we are testing this new version of Meteor before recommending the upgrade to zCloud customers.

Also, check out our open-source images; they can help you. I talked about them here.


Here is the official answer from AWS technical support:

Stopped Reason: Essential container in task exited
Failure Code: EssentialContainerExited

The task contains 2 containers, which exited with the following exit codes:

  1. Name: datadog-agent
    Exit Code:0
  2. Name: backend
    Exit Code: 3

ECS Service: production-frontend (Task Definition Family: production_frontend, Task Definition Family Version: 641):

Stopped Reason: Essential container in task exited
Failure Code: EssentialContainerExited

The task contains a single container, which exited with the following exit code:

  1. Name: frontend
    Exit Code: 3

When a container exits with exit code 3, it means "The system cannot find the path specified. Indicates that the specified path can not be found," and this issue is on the application side. These two ECS services are interconnected, and both services' containers failed with the same exit code (3) due to some application issue. I would therefore suggest checking the application-level logs for more insight. Also, both services are currently in a steady state with the respective changes in the task definition version.
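For anyone unfamiliar with the "essential container exited" failure: ECS simply reports whatever exit code the container's main process returned. A minimal sketch of that propagation in plain shell (no ECS required; the `exit 3` simulates the crashing app):

```shell
# Sketch: ECS reports whatever code the container's main process exits with.
# Here a child process exiting with code 3 (as in the failing tasks) is
# observed via $?, just as ECS observes the container's exit code.
sh -c 'exit 3'
echo "app exited with code $?"   # → app exited with code 3
```

So the exit code in the ECS console is the application's own exit status, which is why AWS points back at the application logs.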

On the memory utilization side for the backend ECS service: on and before 4 Aug 2023, memory utilization was around 50%, with some spikes due to ECS deployments for the failed tasks. Overall, utilization was normal during the period.

I opened the ticket on Friday evening; they took until Monday evening to respond on their Developer Support plan, which costs $30 or 3% of the total revenue per month.

All the while, no user could access my app until I rolled back the upgrade. But yeah, why should they care about a small-fry customer for 72 hours?

I discussed technical support availability with Filipe for his zCloud service, and it seems that his company does care. So I guess small businesses like mine really have to take a look at their infrastructure provider and decide which one suits them best. It seems AWS is only about raising prices and reducing the quality of their support. Can't say this strategy doesn't work; just take a look at their Q2 results: https://www.cnbc.com/2023/08/03/aws-q2-earnings-report-2023.html


Yes, we do care.

It’s one of our pillars: amazing support

Anyway, getting back to the issue, were you able to isolate the problem?

I was thinking over the weekend: are you also deploying code changes or just the Meteor upgrade?

If you have both it would be nice to isolate them.


I don't have time for infrastructure problems, nor to investigate. Based on what they say, it might be the Docker image I'm using (which has worked fine in the past).

I'm not sure which Docker image everyone else is using.

I could see in the log files that it was showing the right Node version (meaning the one referenced by the Meteor release) during startup, before it crashed.
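As a general tip (a sketch, not from the original deployment): printing the Node version at the top of the container's entrypoint guarantees it always appears in the startup logs, which makes mismatches like this easier to spot:

```shell
# Sketch: log the runtime Node version at startup so it shows up in the logs.
# Works whether or not `node` is on PATH; falls back to a marker if it isn't.
echo "Starting with Node $(node --version 2>/dev/null || echo '<not found>')"
```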

Might be related: Upgrade to Meteor 2.13 leads to failure on node modules installation

Hey @a4xrbj1, I'm not sure your deployment is using the right Node version, as we provided a new Docker image with the security updates applied. Are you using this new one?

Node.js 14.21.4

As you may have seen in this forum post by our CEO Fred Maia and in the previous release blog post, we are making security updates to Node.js. The PR that added the security updates is here, and here is a link to a Dockerfile that uses the Node build with the security updates.

If you have any concerns or doubts about this matter, please ping us with your question.


Hi Fred,

No, I hadn't seen the link to the specific Docker image, and was therefore using the "old" one that had worked fine so far. I'm pretty busy at the moment, so I won't have time to test it, but I think the new Docker image will solve the problem that AWS indicated to me ("can't find path").




I hope it solves this. Please let us know when you have checked it.

We should have communicated this change more clearly. I will also add it to the migration steps section of the changelog, as it affects projects using Docker.

Mine started failing too; then I noticed that Node 14.21.4 doesn't exist (?)

So when my scripts extracted the Meteor Node version and set the Node version to 14.21.4, the install would fail and the node modules were never installed.

So I added a fallback: I now check whether the Node version Meteor claims to use actually exists (in Node's releases); otherwise I fall back to the most recent patch release of that minor version. For example:

If anyone is interested, I use nvm when setting my node version in AWS:

NODE_VER=14.21.4   # extracted from the Meteor Node version

# Derive major.minor for the fallback; nvm resolves "14.21" to the latest 14.21.x
FALLBACK_VERSION="${NODE_VER%.*}"

source ~/.bashrc
nvm install "${NODE_VER}" || nvm install "${FALLBACK_VERSION}"
if [ "$(nvm current)" = "v${NODE_VER}" ]; then
  nvm alias default "${NODE_VER}"
else
  nvm alias default "${FALLBACK_VERSION}"
fi

I still don’t understand. Can this version be installed in all environments the exact same way as 2.12?

Once you build your deployable bundle, you're relying on your own installation of Node (unless you use Galaxy). That means you should match the Node version you install on your instance, in Docker, etc. to the Node version that Meteor is pinned to.

Currently, for 2.13, the Node version is set to 14.21.4. However, 14.21.4 doesn’t (from what I can see) exist.

So, no, you can't install Node 14.21.4 to line up with your deployable Meteor bundle. You have to use something else; I would recommend 14.21.3.
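The "use something else" step can be automated. A sketch of the selection logic, with the available-version list hard-coded so the logic is testable (in a real script it would come from `nvm ls-remote 14`):

```shell
# Sketch: pick an installable version when the exact one doesn't exist.
# AVAILABLE stands in for `nvm ls-remote 14` output (hard-coded here for clarity).
AVAILABLE="v14.21.1 v14.21.2 v14.21.3"
WANT="14.21.4"
case " $AVAILABLE " in
  *" v$WANT "*) PICK="$WANT" ;;        # exact version exists, use it
  *)            PICK="${WANT%.*}" ;;   # fall back to major.minor; nvm resolves the latest patch
esac
echo "$PICK"   # → 14.21
```

Passing "14.21" to `nvm install` then resolves to 14.21.3, the latest real patch of that line.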

Back-reading a few posts above will help

Hello, how are you?

I noticed that you are using NVM to install Node. The latest official version of Node 14 is 14.21.3, which means that version 14.21.4 is not available.

In this situation, I recommend updating your script to download a version of Node that is still being actively supported and maintained by the Meteor Core Team.

You can see an example of how to do this in this script and customize it for your use case.

To learn more about our extended support version of Node, visit the link below:

Best Regards,

Philippe Oliveira


Gotcha, @philippeoliveira , thanks for that.