Meteor Roadmap Update, August 27th, 2024

Good news, everyone! We just updated our roadmap doc, and we would love your feedback.

You can read it in full here:

Some of the highlights of what changed:

  • Meteor 3.0: Moved items to “Finished items.”
  • Performance Improvements: Addressing performance issues in Mongo/reactivity.
  • Change Streams: Switching to Mongo’s official reactivity API.
  • Vite & esbuild: Enhancing development speed and experience.

The new roadmap should help us make the code footprint of the core leaner and more efficient.

Right now, these are our top three priorities:

  1. Performance
  2. Change Streams
  3. TypeScript

Please write your suggestions below or open a new discussion on GitHub.

Please let us know if you see something you can work on. We would also appreciate it if you could help us test these new features!

12 Likes

I would like to see the ability to save collections locally and keep them in sync with the server, like the old GroundDB.

This is particularly useful for mobile apps where users expect quick startup times.
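As a rough illustration of the idea (a hypothetical sketch, not GroundDB's actual API — the `saveSnapshot`/`loadSnapshot` names and the storage key format are invented here), a minimal offline cache could persist a collection snapshot and replay it on the next startup:

```javascript
// Hypothetical sketch of a local collection cache; GroundDB's real
// implementation also tracked method calls and incremental syncs.
function saveSnapshot(name, docs, storage = globalThis.localStorage) {
  // Persist the current documents so the next startup can render instantly.
  storage.setItem(`offline:${name}`, JSON.stringify(docs));
}

function loadSnapshot(name, storage = globalThis.localStorage) {
  // Replay cached documents before the server connection is ready.
  const raw = storage.getItem(`offline:${name}`);
  return raw ? JSON.parse(raw) : [];
}
```

On reconnect, the server's publication would then reconcile the cached documents with the live data.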

3 Likes

I’ve seen plans to possibly integrate redis-oplog into core. What do people think about potentially integrating Jetstream KV + NATS instead?

Ideally we could start gradually by putting the oplog on JetStream KV rails.
Unlike Redis, JetStream KV is tightly integrated into NATS and naturally serves as an event stream for KV updates, and therefore for subscribed collections.

One of the major pitfalls of Meteor, when weighing it against other potential solutions, is performance, scaling, and separation of concerns, compared to more enterprise-ready and well-established (even if cumbersome and overcomplicated) approaches to real-time/pub-sub applications based on Kafka, RabbitMQ, etc.

I feel like Meteor would open up to many scenarios where versatility, modularity, and future-proof planning are necessary for potential growth, after the arguably unbeatable magic of Meteor gets you through rapid development. In addition, it would be much easier to connect Meteor to pre-existing queues and data sources within clusters full of providers backed by “Rabbits and Springs”.

Thanks for the update @leonardoventurini.

Can you provide more details on these two items:

  • Performance improvements for Meteor 3.0

After removing fibers, we became heavily reliant on async resources and, consequently, on Async Hooks/AsyncLocalStorage, which has a performance cost; we need to optimize that.

  • Bringing community packages to the core

Some packages are widely used and should be part of the core; this involves identifying them and moving them in.

It would be helpful to know the performance impact of the new async additions based on your benchmarks.

Also, regarding community packages, which ones are you planning to include in the core?

About Redis Oplog

I added my questions about this here:

Have you tried GitHub - jamauro/offline: An easy way to give your Meteor app offline capabilities and make it feel instant?

3 Likes

No, but thanks for bringing it to my attention. I see it’s just over 1 month old, so it wasn’t around when I last looked!

1 Like

Are there plans to drop reify? If we use Vite for client bundle, we won’t have nested imports support anymore, right?

If that is the direction going forward, the community might need more formal communication, as this has a big impact on shared code and loading order.

2 Likes

I think the Vite change is key to making Meteor survive long term. I’m very happy we are going that route; otherwise I was seriously considering moving away from Meteor — the current build times are horrible, there’s no tree shaking, etc. Great move, even if we lose nested imports, which don’t follow JavaScript standards anyway.

I love the new roadmap. Looking great!

1 Like

I too think that using Vite on the client side would be a really good step for Meteor, though for other reasons. Faster build times and tree shaking are of course very nice. But what matters even more to me is that almost all widely used front-end libraries already integrate well with Vite, so it would hopefully significantly decrease the effort required to support different front-end libraries in Meteor. I would like Meteor’s team to have to deal less with standard problems like this and have more time to work on what is actually core to Meteor itself.

I don’t use nested imports. And having features that deviate from JS standards is a slippery slope that comes with a cost anyway, so for me personally, losing them would not be a problem.

I am currently bringing a project to Meteor 3 with GitHub - JorgenVatle/meteor-vite: ⚡ Replace Meteor's bundler with Vite for blazing fast build-times and the experience has been very nice this far.

4 Likes

Are there plans to drop reify? If we use Vite for client bundle, we won’t have nested imports support anymore, right?

I would also like to know the impact on dynamic imports, many of our apps use it as the main technique to reduce bundle size

1 Like

We are open to dropping certain features if the upside is more significant than the downside. We still need to research and discuss it further.

My hunch is that we need to move away from features that are not spec-compliant, which might include nested imports, and favor dynamic imports instead; the key is doing it gracefully.

But there are also things we can try in order to preserve it, like plugins. When the time comes, rest assured that this will be validated and communicated widely.

Would you be willing to help us research it? I believe Vite can benefit us a lot, and many things, like tree shaking, would come for free.

2 Likes

Dynamic imports shouldn’t be affected, e.g.

const { something } = await import('./somewhere')

Actually, dynamic imports would be improved from my perspective because they’d be served via HTTP vs DDP. See Nacho’s note here.

I assume nested imports likely would be affected, e.g.

if (Meteor.isServer) {
  import { something } from './somewhere'
}

Though I think I recall Nacho mentioning that code in Meteor packages may still be able to use nested imports.

Maybe there’s still a way to keep nested imports for normal app code too, but if not, I still think the benefits of Vite outweigh this downside.

1 Like

While building our performance setup, we initially noticed that the numbers were slightly better in Meteor 3 than in Meteor 2.

For methods, it showed a ~28% speed increase, ~10% lower CPU usage, and ~16% less RAM consumption on standard server execution, likely due to the Node upgrade itself.

However, in reactive flows, we noticed that after 4 concurrent connections (in our extreme testing scenario; this may be specific to our testing setup), the Meteor 3 app stopped responding entirely, while Meteor 2 would still work fine. We also received reports from companies facing the same problem in production, especially when using Publish Composite.

Clarification on our testing scenario: 4 concurrent connections per second, over a minute, running a process that triggers reactivity incrementally for connected users. This is a very extreme scenario and won’t affect most applications.

After writing custom async resource profiling logic, I noticed that we are producing many thousands of async resources. The first thing that came to mind was the context from ALS/bindEnvironment, since it needs to be allocated and then deallocated, which should put significant pressure on the GC and even on the event loop. But that was mainly conjecture; I now believe it is still something we can optimize, but not the main problem.

Now the good news:

Yesterday, after implementing custom logic with an event loop monitor, I could narrow the focus to resources created only while the event loop is lagging. It turned out to point to a big portion of the observer/multiplexer logic, and also to zlib, which permessage-deflate uses for SockJS message compression. Disabling compression seems to fix the issue we noticed and significantly reduces CPU/memory usage.

As permessage-deflate is old, it possibly didn’t age well as Node.js evolved. We will release a beta that disables it completely, and then research whether we need to add it back. I believe the benefits outweigh any potential bandwidth cost, and we can find a better approach in the future.

We will publish the numbers in a few days once we finish validating the changes, but it appears Meteor 3 might be way faster than Meteor 2 after all. And that’s without optimizing the observer/multiplexer logic, where I believe there is a lot of room for improvement.


The packages that will be brought to the core, to my knowledge, are: Roles, Apple OAuth, Migrations, and possibly Collection2 or some features within it. If you have suggestions, let us know.

9 Likes

I tested the initial betas and found Meteor 3 faster than Meteor 2, likely due to Node.js improvements.

However, my tests used Express endpoints, not publications or code that would trigger observer/multiplexer logic.

If you want to run large-scale tests like this, let me know. We have this set up already, as we’ve done it for many clients, including Method calls and other Meteor-specific features.

1 Like

Our plan at quave is to be as package-independent as possible.

If our plans align with core plans, we’re open to using core packages for obvious reasons.

  • Roles: We don’t use it, but many clients do. Migrating to core would be good.
  • Apple OAuth: Already a quave package and the most up-to-date.
  • Migrations: We use our own version, especially with Meteor 3.
  • Collection 2: We use our own version too.

We’re willing to collaborate and add these as core packages. Send details and deadlines if you want to work together.

SyncedCron would also be great to add to the core.

These packages are essential for any classic Meteor project.

This was my list of packages to add, plus redis-oplog, when I was running Meteor (the company).

One question: What’s the reason for this? Is the goal to include all popular packages, or is there another motive? More transparency on the “why” would be helpful.

2 Likes

@leonardoventurini

Is the plan to bring all Auth to Meteor core?

RocketChat has many auth packages.

WeKan uses an older version of the ldapjs npm package. ldapjs is no longer maintained.

WeKan has a fork of OAuth2/OIDC in the wekan-accounts-oidc and wekan-oidc directories for Auth0, ADFS 4.0, Azure AD B2C, Google, RocketChat, GitLab, NextCloud, and Zitadel. There is also a package for Sandstorm login.

A list of the various login methods is under the “Login Auth” topic in the right-hand menu of the wiki:

https://github.com/wekan/wekan/wekan/wiki

There are also plans to add SAML and CAS, but I think any existing code is still Meteor 2 only. The question is how to port it to Meteor 3.

If Meteor had most login methods in core, it could be similar to Ruby on Rails, which has OmniAuth packages for various auth methods:

Replacing the embedded babel+reify that is part of isobuild with something else might be good in the long run but “here be dragons”.

reify generates wrappers and extra code for every code file in a meteor application and enables some additional features:

  • nested imports (a proposed addition to javascript that never went anywhere)
  • dynamic imports
  • top-level await
  • async-tagged code blocks that work well with fibers (meteor < 3)

If you’ve never looked into the generated code, here’s an example of what reify does (meteor 3.0.1).
Note all the module calls that are part of the reify/Meteor runtime: module.link, module.wrapAsync, etc.

original code:

import axios from "axios";

const getAndrei = async () => {
  const response = await axios.get('https://api.github.com/users/alisnic')
  return response.data
};

const Andrei = await getAndrei();

export default Andrei;

Generated code inside app.js:

!module.wrapAsync(async function (module, __reifyWaitForDeps__, __reify_async_result__) {
  "use strict";
  try {
    let axios;
    module.link("axios", {
      default(v) {
        axios = v;
      }
    }, 0);
    if (__reifyWaitForDeps__()) (await __reifyWaitForDeps__())();
    const getAndrei = async () => {
      const response = await axios.get('https://api.github.com/users/alisnic');
      return response.data;
    };
    const Andrei = await getAndrei();
    module.exportDefault(Andrei);
    __reify_async_result__();
  } catch (_reifyError) {
    return __reify_async_result__(_reifyError);
  }
  __reify_async_result__()
}, {
  self: this,
  async: false
});
2 Likes

All? No, that is unrealistic, not to mention a hell to maintain. You are reading too much into it. I can foresee strategic expansion like Apple OAuth; LDAP and SAML could make sense in the future.

There is a difference between core and community-maintained auth packages. Core should be things that almost everyone is going to use or that will be required in certain contexts (like Google and Apple OAuth), plus the infrastructure to build additional auth packages if you need them (the OAuth core packages).

We want to provide everything the user needs to have a complete experience, while at the same time moving away from maintaining things we don’t need to maintain.

I also believe some centralization for such packages would make sure they are well maintained and documented.


Our tests show the same thing, including for methods. Reactive flows, however, show some degradation, which ends up affecting reactivity-dense/large-scale apps.

Right now we recommend using SERVER_WEBSOCKET_COMPRESSION=false if your app fits that category, as it gives a lot of breathing room. We have opted not to release a beta that fully removes it, as some users might still benefit from compression.
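The shape of that toggle, as a hypothetical sketch (the function name and fallback options are invented here; only the SERVER_WEBSOCKET_COMPRESSION=false flag itself comes from the recommendation above):

```javascript
// Hypothetical sketch: derive websocket compression settings from the env
// flag. Returning null means permessage-deflate is skipped entirely, which
// avoids per-message zlib work on the server.
function websocketCompressionOptions(env = process.env) {
  if (env.SERVER_WEBSOCKET_COMPRESSION === 'false') {
    return null; // no compression
  }
  // Invented default: only compress frames above a size threshold.
  return { threshold: 1024 };
}
```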

The problem, I believe, is the sheer number of calls the Mongo logic triggers: it can create many thousands of tasks and async resources in just a few seconds. Combine that with ALS/Async Hooks, Publish Composite, or polling, and you get an explosive combination.

Both Meteor 2 and Meteor 3 are prone to oplog flooding; however, Meteor 3 seems to be more easily affected, and it is also reported to show increased CPU at normal levels of reactivity.

In essence, the observer/Mongo logic is a strong candidate for further optimization, but there might be tradeoffs to make regarding bindEnvironment or anything else that relies on ALS.

It is very hard to “prove” that ALS is the culprit because we can’t easily turn it on and off; we can only experiment with it in one place or another. In one experiment, removing it from AsynchronousQueue tasks, I got a 20 MB reduction in RAM use but no apparent CPU change, and that is a very simple use case. I assume those 20 MB also freed the GC from running. Multiply that by 10–20x…

This needs further research, and I would appreciate help with it.

4 Likes