As everybody knows, we’ve been working hard to improve Meteor’s performance. Recently, @nachocodoner implemented SWC and has also been advancing our RSPack integration.
On my side, I’m working on an upcoming change that will significantly reduce memory usage by replacing the oplog with a Change Streams–based implementation, along with a new EventEmitter-based DDP transport (WIP) that promises to replace polling. I’m also setting up full OpenTelemetry instrumentation to measure and validate these improvements (more news coming soon!).
Beyond that, we’ve had some interesting discussions around replacing EJSON with CBOR, and a PR about a BSON-to-JavaScript transformation approach in the oplog (maybe in polling as well; I haven’t checked).
A few additional points:
There are many opportunities to optimize Meteor’s JavaScript for V8. If you run TOOL_NODE_FLAGS="--trace-deopt" meteor, you’ll see a long list of deoptimized functions we could improve.
I started cleaning up unnecessary Promise overhead (article / PR).
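To illustrate the kind of Promise overhead meant here (this is a generic sketch, not code from the actual PR): wrapping an already-async call in an extra async/await layer allocates an additional promise and adds microtask ticks, while simply forwarding the existing promise does not.

```javascript
// Generic illustration of unnecessary Promise overhead (not the Meteor PR itself).

function fetchValue() {
  // Stand-in for any function that already returns a promise.
  return Promise.resolve(42);
}

// Wasteful: allocates a new promise and awaits before resolving.
async function wrapped() {
  return await fetchValue();
}

// Leaner: just forward the promise the callee already created.
function direct() {
  return fetchValue();
}

async function main() {
  console.log(await wrapped()); // 42
  console.log(await direct());  // 42
}

main();
```

Both variants resolve to the same value; the difference only shows up as extra allocations and microtask hops on hot paths, which is exactly where it matters for a framework.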
So I’d like to open this topic by inviting everyone to share ideas on how we can keep improving Meteor’s performance and reducing resource usage.
[Proto]
One advantage of using Protobuf in Meteor is that we’re full-stack, so developers don’t need to worry about Protobuf-specific knowledge. They only need to work with JS/TS types, and we can handle everything behind the scenes.
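A hypothetical sketch of that idea (none of these APIs exist in Meteor today): the developer keeps writing plain JS objects, while the framework derives a field schema internally and uses it to encode the same object compactly. The toy tag/value encoder below is just a stand-in for real Protobuf wire encoding.

```javascript
// Hypothetical sketch — not an existing Meteor API.
// Developers work with plain objects; the framework maps them to a
// Protobuf-style field schema behind the scenes.

// What the developer writes: ordinary JS.
const createTask = (task) => ({ ...task, createdAt: Date.now() });

// What the framework could derive internally: numbered fields, as in Protobuf.
const taskSchema = [
  { tag: 1, name: 'title' },
  { tag: 2, name: 'done' },
];

// Toy wire format: [tag, encoded value] pairs — a stand-in for real
// varint / length-delimited Protobuf encoding.
function encode(schema, obj) {
  return schema.map(({ tag, name }) => [tag, JSON.stringify(obj[name])]);
}

function decode(schema, wire) {
  const byTag = new Map(schema.map((f) => [f.tag, f.name]));
  return Object.fromEntries(wire.map(([tag, v]) => [byTag.get(tag), JSON.parse(v)]));
}

const task = createTask({ title: 'ship it', done: false });
const roundTripped = decode(taskSchema, encode(taskSchema, task));
console.log(roundTripped.title); // "ship it"
```

The point of the sketch: because Meteor controls both ends of the wire, the schema never has to surface in userland code.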
I looked into this with ChatGPT some time ago, and the conclusion was that it’s a big undertaking, so I stopped right there. uWebSockets is very, very performant, but it’s a complete reimplementation with a different API, so it’s not really compatible with Node’s ws or SockJS and would require deep refactoring in Meteor’s code. But I don’t really know much about Meteor’s internals; maybe it’s not as bad as ChatGPT says. I’m just putting this idea out as potentially interesting.
There were also some discussions about maybe moving to Bun (and Bun, I think, uses uWebSockets underneath), so if the refactor is big anyway, maybe that could be the move. Of course, this is just brainstorming for now.
Protobuf needs to be deserialized, and I’m not sure that would be more performant than plain JSON. I would really research and test this before investing in an implementation. Maybe there are some new browser APIs that could do the deserialization natively, but if it’s done in JS, it probably won’t give any big wins and will introduce complexity.
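That "measure before investing" point is easy to act on. A rough micro-benchmark sketch below compares JSON.parse against a hand-rolled fixed-layout binary decode via DataView (a crude stand-in for a Protobuf-style decoder written in JS); the shapes, counts, and field layout are made up for illustration, and results will vary a lot by payload and runtime.

```javascript
// Rough micro-benchmark sketch: JSON.parse vs. a JS binary decode.
// The payload shape and iteration count are arbitrary illustration choices.

const N = 100_000;

// JSON payload: one small document, parsed N times.
const json = JSON.stringify({ id: 123, score: 4.5 });

// Equivalent binary payload: int32 id + float64 score in a fixed layout.
const buf = new ArrayBuffer(12);
const view = new DataView(buf);
view.setInt32(0, 123);
view.setFloat64(4, 4.5);

let checksum = 0;

let t = process.hrtime.bigint();
for (let i = 0; i < N; i++) checksum += JSON.parse(json).id;
const jsonNs = process.hrtime.bigint() - t;

t = process.hrtime.bigint();
for (let i = 0; i < N; i++) {
  const doc = { id: view.getInt32(0), score: view.getFloat64(4) };
  checksum -= doc.id; // consume the result so V8 can't drop the loop
}
const binNs = process.hrtime.bigint() - t;

console.log(`JSON.parse: ${jsonNs / 1000n}µs, binary decode: ${binNs / 1000n}µs`);
console.log('checksum:', checksum); // 0 if both loops decoded the same ids
```

Real formats add varint and string decoding on top, so a toy like this mostly shows how fast JSON.parse already is; that's the bar any JS-side deserializer has to clear.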
For those who want to work on it, understanding the DDP package could be a good starting point; here and here you can find a guide.
@ignl what do you think about doing a benchmark comparing ws, SockJS, and uWS, and writing about the differences between them? It would be good content for the dev community and for anyone who wants to do a PoC like this, and it would be required knowledge for anyone who wants to work on it.
I’ve been working on the benchmarking side of things and built a framework on top of the existing meteor/performance repo that could help with some of the questions raised here (measuring the impact of change streams, etc.).
A bench.js CLI that runs Artillery + Playwright scenarios against any Meteor checkout, collects CPU/RAM/GC metrics, compares two branches, and detects regressions
GC tracking via Node.js perf_hooks (zero overhead when disabled)
A Blaze dashboard on Galaxy to visualize runs, compare branches, and track trends over time
GitHub Actions workflows for automated PR and nightly benchmarks
There’s already data in the dashboard for release-3.2 through release-3.5 and devel. For example, devel vs release-3.5 shows nearly identical performance on the light scenario, with devel having slightly better GC behavior (−31% max pause, −34% major GC time)
This is still very much a draft/experimental setup — benchmarks are running on shared GitHub Actions runners (no dedicated VM), so the numbers have some variance. But the framework itself is functional end-to-end: run, compare, push to dashboard, CI automation. Happy to iterate on it if the team finds it useful.
This was exactly the longer-term vision I expected for meteor/performance: making it more dynamic and allowing easy checks between branches, different app examples and setups, multiple metrics, and so on.
Still, the app there is really basic and does not cover many real-world scenario behaviors (I intend to continue here; I will try with Claude, I invoke you!). But we can scale the performance app over time, and with that, the benchmark suite UI will be a good asset for comparing changes and ensuring we don’t go backwards.
This dashboard also opens the possibility of testing the bundler scenario with meteor profile and checking across versions that we don’t go backwards in those terms.
I completely support the direction of your changes.
benchmarks are running on shared GitHub Actions runners (no dedicated VM), so the numbers have some variance.
Don’t worry, we can improve that later with dedicated machines to get more stable numbers; this is the right direction to go.
Nice! This is something I was trying to do with OpenTelemetry, but you’re following a better path than I was. I really loved the live dashboard; I can definitely see it in our release pipeline one day.
I have a few questions:
Is the server running in the same environment as Artillery? I’m asking because a few users reported a CPU decrease in production from 3.4 to 3.5 (check here).
Do you have plans to test only the backend, like this Artillery setup does?