Exploring an ESM server bundle format for Meteor (meteor build --format=esm, Node + Bun)

Hi folks,

Quick disclaimer before anything else: this whole experiment was 100% vibe coded :sweat_smile:
So please read this as an exploratory fork / prototype branch, not as a polished proposal or production-ready implementation.

We’ve been exploring whether Meteor could generate a more standard server bundle format instead of always going through the current main.js -> runtime.js -> boot.js path.

What started as a Bun experiment turned into something more interesting:

not “can Bun emulate old Node internals well enough?”
but
“can Meteor produce a built server bundle that modern runtimes can execute more directly?”

So we built an experimental fork branch that adds:

meteor build --format=esm --directory ../output

Branch:
https://github.com/dupontbertrand/meteor/tree/spike/esm-bundle-format

All the .md files created during the brainstorming / prompting / architecture exploration are also in the fork here:
https://github.com/dupontbertrand/meteor/tree/spike/esm-bundle-format/meteor-sandbox/migration-bun

This generates an experimental ESM server bundle with an index.mjs entrypoint, instead of the legacy server entrypoint chain.
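As a sketch of the core idea (assuming nothing about the fork's actual loader code): packages are loaded sequentially through standard import() calls. Here, data: URLs stand in for the per-package files a real bundle would contain.

```javascript
// Minimal sketch only: a real index.mjs would import per-package ESM
// files from the bundle; data: URLs stand in for them here.
const pkg = (src) => 'data:text/javascript,' + encodeURIComponent(src);

// Stand-in "load order" of two fake packages.
const loadOrder = [
  pkg('export const name = "meteor";'),
  pkg('export const name = "webapp";'),
];

// The core move: sequential, standard import() in load order,
// with no vm.runInThisContext and no hand-rolled module loader.
async function bootPackages(order) {
  const loaded = [];
  for (const url of order) {
    const mod = await import(url);
    loaded.push(mod.name);
  }
  return loaded;
}

bootPackages(loadOrder).then((loaded) => {
  console.log(loaded.join(' -> ')); // meteor -> webapp
});
```

Because every package file is a normal ES module, both Node and Bun can execute the same bundle with their built-in loaders.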

What was validated

We first built a manual spike, then integrated it into the bundler, then closed the loop with validation.

Current status:

  • legacy meteor build output remains intact
  • --format=esm reproduces the spike behavior
  • Node runs the generated ESM bundle
  • Bun runs the generated ESM bundle
  • HTTP works
  • Mongo works
  • DDP works
  • accounts-password works
  • reconnect works
  • short soak test passes

We also added consolidation tests, and the current result is:

13/13 passed on both Node and Bun

Most important finding

The main result is not “Bun is faster”.

The main result is:

the server bundle format can be moved toward a more standard loader path without rewriting all of Meteor.

And for Bun specifically, the real runtime-specific seam turned out to be:

  • server host
  • DDP transport

not the package bundle itself.

What this is not

This is not a proposal to merge Bun support into core tomorrow.

It is also not a claim that the current experiment is production-ready.

This is an experimental fork proving that:

  • Meteor can generate an alternative server bundle format
  • Node and Bun can both execute it
  • the long-term seam is probably bundle format + host/transport abstraction, not “make Bun emulate every historical Node behavior”

Why this seems interesting

Even if Bun support never ships, this still seems useful because it suggests that Meteor’s current server runtime shape is not the only viable one anymore.

In other words:

this might be less about “Meteor on Bun”
and more about
“Meteor with a more standard server bundle format”

Next step

At this point the experimental branch and validation loop are done.

The next logical step would probably be deciding whether this is worth:

  • keeping as a fork experiment,
  • turning into a more formal RFC-style discussion,
  • or exploring further around the bundle format / host seam.

Update: Full validation results + Bun runtime benchmarks

Since the initial post, this spike has gone through rigorous validation and expanded into a Bun-native runtime path. Here’s everything we found.


What changed since v1

The ESM loader has been hardened:

  • Real per-package Assets — getTextAsync, getBinaryAsync, absoluteFilePath read actual files from the bundle (not stubs). Per-package asset context, matching legacy boot.js behavior.
  • Native source maps — via process.setSourceMapsEnabled(true) (Node 20+), replacing the legacy source-map-support monkey-patch.
  • Input validation — --format=invalid rejected with clear error message.
  • Documented limitations — METEOR_INSPECT_BRK and METEOR_PARENT_PID not supported (use standard Node flags).

Compatibility validation

Tested on a meteor create --full app with accounts-password + email added. 74 packages in load order, 14 distinct Npm.require patterns (including subpaths like nodemailer/lib/mail-composer, mongodb/package.json, native addons).

Test                        Legacy              ESM                 Match
────────────────────────    ──────────────────   ──────────────────  ─────
HTTP 200                    pass                pass                yes
HTML boilerplate size       1713 bytes          1713 bytes          yes
Client JS bundle hash       identical           identical           yes
Static JS/CSS served        200                 200                 yes
DDP connect                 pass                pass                yes
DDP method call             pass                pass                yes
DDP subscription            pass                pass                yes
Assets.getTextAsync         returns content     returns content     yes
Mongo insertAsync           pass                pass                yes
Mongo find + fetchAsync     pass                pass                yes
Publication                 2 docs received     2 docs received     yes
Soak 1min 20 clients        22.8% timeout       23.1% timeout       yes

The soak test error rate is identical — the timeouts are from the test harness (5s timeout on Mongo ops under load), not from Meteor. Both formats behave the same under sustained load.


Performance: Node legacy vs Node ESM (30 runs, trimmed mean)

App: meteor create --full + accounts-password + email.
Machine: ThinkPad P52, Linux 6.8.0, Node 22.22.0.

Metric              Legacy          ESM             Delta
──────────────────  ──────────────  ──────────────  ──────────────────
Cold start          1,017 ms        1,041 ms        neutral
HTTP boilerplate    860 req/sec     725 req/sec     neutral (high var)
DDP mean latency    0.38 ms         0.44 ms         neutral (high var)
DDP throughput      2,640/sec       2,332/sec       neutral (high var)
RSS memory          252 MB          229 MB          -9% (stable)

Honest assessment: On Node alone, the ESM format is performance-neutral — no significant gains, no significant regressions. HTTP and DDP numbers oscillate between runs (machine load, GC timing). The only stable result across 10/20/30 run campaigns is memory: ESM uses ~7-9% less RSS consistently.

The value of the ESM format on Node is architectural, not performance:

  • Boot chain: 1 file (9 KB) replacing 8 files (52 KB)
  • No vm.runInThisContext, no Reify runtime, no Module.prototype patching
  • Standard import() instead of a hand-rolled module loader

Where it gets interesting: Bun runtime

The ESM format is the foundation for a Bun-native host (branch 3). This is where the real gains appear.

Architecture:

Bun.serve(:PORT)
  ├── Static files → Bun.file() (zero-copy sendfile)
  ├── Boilerplate → WebAppInternals.getBoilerplate() direct
  ├── WebSocket → BunSocket adapter → StreamServer
  └── Other routes → Express via Unix socket (transitional)
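A compressed sketch of that dispatch, using Bun's real Bun.serve / Bun.file / server.upgrade APIs but with made-up port, paths, and socket address; the route() helper is plain JS, and the host wiring is guarded so the file also loads under Node.

```javascript
// Routing decision only: plain JS, mirrors the tree above.
function route(pathname) {
  if (pathname === '/websocket') return 'ws';             // DDP upgrade
  if (/\.(js|css|map)$/.test(pathname)) return 'static';  // client assets
  if (pathname === '/') return 'boilerplate';
  return 'express';                                       // transitional fallback
}

// Host wiring, meaningful only under Bun (guarded for Node).
// Port, file paths, and the Unix socket path are illustrative.
if (typeof Bun !== 'undefined') {
  Bun.serve({
    port: 3000,
    fetch(req, server) {
      const { pathname } = new URL(req.url);
      switch (route(pathname)) {
        case 'ws':       // hand the connection to Bun's native WebSocket
          return server.upgrade(req) ? undefined
                                     : new Response('upgrade failed', { status: 400 });
        case 'static':   // zero-copy file serving
          return new Response(Bun.file('bundle/programs/web.browser' + pathname));
        case 'boilerplate':
          return new Response('<!DOCTYPE html>...', { headers: { 'content-type': 'text/html' } });
        default:         // everything else goes to Express over a Unix socket
          return fetch('http://localhost' + pathname, { unix: '/tmp/express.sock' });
      }
    },
    websocket: {
      message(ws, data) { /* forward the frame to the DDP stream server */ },
    },
  });
}
```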

All 4 templates tested (--bare, --minimal, --blaze, --full): HTTP, DDP, MongoDB — everything works on Bun, with no Node process involved.

Bun benchmarks (vs Node legacy)

Metric                  Node legacy     Bun ESM         Delta
──────────────────────  ──────────────  ──────────────  ──────────
HTTP boilerplate        884 req/sec     2,146 req/sec   +143%
HTTP static JS 800KB    732 req/sec     3,419 req/sec   +367%
HTTP static CSS 1KB     1,556 req/sec   17,304 req/sec  +1012%
DDP roundtrip mean      0.49 ms         0.13 ms         3.8x faster
DDP roundtrip P95       0.82 ms         0.19 ms         4.3x faster
DDP sequential          2,062/sec       7,735/sec       3.75x
RSS memory              305 MB          191 MB          -37%
Cold start              1,005 ms        691 ms          -31%

Realistic workload (multi-client, mixed ops)

Scenario            Clients x Ops   Node legacy     Bun ESM         Delta
──────────────────  ─────────────   ──────────────  ──────────────  ──────
Small team          10 x 20         3,180/sec       6,285/sec       +98%
Typical SaaS        50 x 10         3,843/sec       14,832/sec      +286%
Busy dashboard      100 x 5         4,754/sec       12,893/sec      +171%
Traffic spike       200 x 2         9,420/sec       23,714/sec      +152%

Stability

5-minute soak test, 20 clients, mixed operations (methods + subscriptions + pings):

  • 108,922 ops, 0 errors, 0 reconnects
  • Throughput constant: 362-364 ops/sec throughout
  • RSS stable at 179 MB
  • 20/20 clients active at end

What this means

The ESM format itself is a small, low-risk, opt-in change (6 files, ~500 lines, flag-gated). On Node it’s performance-neutral with slightly less memory usage.

But it’s also the prerequisite for a Bun runtime path that delivers 2-4x improvements on the metrics that matter most for Meteor apps: DDP latency, WebSocket throughput, memory footprint, and static asset serving.

The Bun host is 201 lines of code on top of the ESM format. The gains come from Bun’s native WebSocket (vs SockJS), Bun.file() zero-copy serving (vs Express send module), and direct boilerplate generation (vs Express middleware stack).


Code

Three branches, each building on the previous one:

  1. feature/esm-bundle-format — meteor build --format=esm, ESM loader, tests, Assets support. Node only.
  2. feature/esm-bun-compat — bootPackages/runMain split so the same ESM bundle boots on both Node and Bun.
  3. feature/bun-only-host — Bun.serve() host, Bun.file() static serving, BunSocket DDP, benchmarks, soak test.
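The bootPackages/runMain split in branch 2 can be pictured roughly like this (illustrative shape only, with hypothetical loader callbacks, not the branch's actual code):

```javascript
// Illustrative split: package boot is runtime-agnostic, and each host
// decides when to run the app's mainModule.
let booted = false;

async function bootPackages(loadOrder) {
  for (const load of loadOrder) await load(); // per-package init, in order
  booted = true;
}

async function runMain(main) {
  if (!booted) throw new Error('call bootPackages() first');
  return main(); // the app's server mainModule
}

// A Node host calls both back to back; a Bun host can boot the
// packages, start Bun.serve(), and only then run the app's main.
```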

Detailed benchmark results: bun-only/bench/RESULTS.md

Happy to answer questions or share more details about any of the findings.


Update 2 — meteor run --runtime=bun and DDP transport benchmarks

The previous update focused on the ESM bundle format and production-mode Bun hosting. This one covers the dev-mode experience: running a standard Meteor app with meteor run --runtime=bun and comparing DDP transport performance head-to-head.

What’s new

The feature/bun-only-host branch now supports meteor run --runtime=bun. A --full app (Blaze + accounts-password + Mongo) runs in the browser with native WebSocket DDP — no SockJS, no polyfills.

Core changes (6 files, net -63 lines):

  • ddp-server/stream_server.js — SockJS removed entirely, replaced by the ws library in noServer mode listening on the http upgrade event. Same socket contract (EventEmitter + send/close/headers/remoteAddress), just no SockJS framing overhead.
  • ddp-server/package.js — sockjs + permessage-deflate2 + unused lodash.once out, ws: 8.18.0 in.
  • tools/runners/run-app.js — --runtime=bun spawns Bun with index.mjs, guards IPC calls (Bun doesn’t support Node’s IPC channel).
  • tools/cli/commands.js — --runtime option on meteor run.
  • packages/rspack/rspack_server.js — disables WebSocket proxying in http-proxy-middleware on Bun (Bun’s http.createServer doesn’t expose req.connection.server).
  • esm-loader.mjs — sets DISABLE_SOCKJS runtime config when running on Bun, so the client uses native WebSocket.

DDP transport benchmarks

Methodology: same bench suite as PR #14231 (bench-connect, bench-rtt, bench-throughput-v3, bench-pubsub). Machine: ThinkPad P52, Linux 6.8, Node 22.22.0 / Bun 1.2.4, MongoDB 7.0.14.

Run 1 — Production bundles (meteor build + standalone server)

                        Node/SockJS     Bun/native WS   Delta
────────────────────    ──────────────  ──────────────   ──────────
Connect (1 client)      3.0 ms          1.8 ms           -40%
Connect (100 burst)     54.9 ms         20.6 ms          -62%
RTT echo p50            0.333 ms        0.088 ms         3.8x faster
RTT echo p99            0.825 ms        0.298 ms         2.8x faster
RTT 1KB p50             0.502 ms        0.236 ms         2.1x faster
RTT 1KB p99             1.181 ms        1.615 ms         +37% (tail)
Throughput (50 cli)     8,333/sec       23,438/sec       2.8x
Pub/sub (50 subs)       28.5 ms         49.3 ms          +73% (slower)

Run 2 — Dev mode (meteor run vs meteor run --runtime=bun)

                        Node/SockJS     Bun/native WS   Delta
────────────────────    ──────────────  ──────────────   ──────────
Connect (1 client)      4.5 ms          3.0 ms           -33%
Connect (100 burst)     71.5 ms         42.0 ms          -41%
RTT echo p50            0.354 ms        0.142 ms         2.5x faster
RTT echo p99            0.855 ms        1.250 ms         +46% (tail)
RTT 1KB p50             0.569 ms        0.283 ms         2.0x faster
RTT 1KB p99             1.064 ms        1.539 ms         +45% (tail)
Throughput (50 cli)     9,688/sec       29,226/sec       3.0x
Pub/sub (50 subs)       3.5 ms          3.7 ms           neutral

Stable findings across runs

Metric                   Consistent?     Range
──────────────────────   ─────────────   ──────────────────────────────
Connect latency          yes, faster     -33% to -62%
RTT p50                  yes, faster     2.0x to 3.8x
Throughput (50 clients)  yes, faster     ~3x (23K-29K vs 8K-10K/sec)
p99 tail latency         yes, higher     +37% to +46%
Pub/sub fan-out          inconsistent    slower in prod, neutral in dev

For context, here’s where Bun native WS sits relative to the transports benchmarked in PR #14231:

Transport       RTT echo p50    Throughput (50 cli)
─────────       ────────────    ───────────────────
uws             0.232 ms        14,300/sec
faye            0.267 ms        10,911/sec
sockjs          0.336 ms        8,156/sec
ws              0.391 ms        7,367/sec
Bun native      0.088-0.142 ms  23,438-29,226/sec   ← this branch

On throughput, Bun native WS is +64% to +104% faster than uws — the fastest transport tested in PR #14231.

Trade-offs to investigate

  • p99 tail latency is consistently ~40% higher on Bun. Likely related to Bun’s per-socket message scheduling under burst. Not a showstopper for typical workloads but worth understanding.
  • Pub/sub fan-out varied between runs: +73% slower in the prod run, neutral in dev. Needs more controlled testing.

What’s next

  • Run these benchmarks on the bench dashboard infrastructure for more reproducible results (controlled hardware, automated runs)
  • Investigate p99 tail and pub/sub variance
  • Test edge cases: OAuth, dynamic-import, large publications
  • Profile Bun’s event loop under sustained pub/sub load

PRs

  • #14311 — --format=esm option for native ESM server bundles (open, ready for review)
  • #14312 — bootPackages()/runMain() split for dual-runtime support (draft)
  • #14313 — meteor run --runtime=bun with native WebSocket DDP (draft, for benchmark access)