How can Meteor position itself for the LLM age?

Hey everyone,

Over the past few weeks I’ve been noticing scattered conversations across GitHub, the forums, and the broader dev ecosystem about Meteor and AI. Each thread touches on a different angle, but together they start to paint a bigger picture: **How can Meteor position itself for the LLM age?**

This post is an attempt to gather everything in one place, spark a broader discussion, and hopefully turn some of these loose threads into a real direction for the framework.


What we already have

llms.txt: AI-ready documentation

The Meteor docs ship with an llms.txt file following the llmstxt.org convention, added by @grubba. You can download the full docs as a single text file (curl https://docs.meteor.com/llms-full.txt) and use it with any LLM tool: LM Studio, Claude, whatever you prefer. A small but important step that makes Meteor a first-class citizen in AI-assisted development workflows.

Cross-platform AI agent context for the Meteor source (PR #14116)

Merged in February 2026, this PR by @nachocodoner adds structured AGENTS.md / CLAUDE.md documentation to the Meteor repo itself, so AI coding assistants can understand the codebase when contributing to Meteor core. The approach is token-efficient: a small root file loaded on every request, plus a set of skill files under .github/skills/ that agents load on demand depending on the task (testing, build system, conventions, etc.). A great pattern that could also be adopted by Meteor app projects.


The conversations happening right now

Meteor needs server-to-client streaming, especially for AI (Forums)

In the Meteor 3.5-beta thread about Change Streams, @msavin raised an important piece of feedback: Meteor is still missing a clean, first-class way to stream data from server to client. He explicitly calls this out as critical for AI use cases, since every major AI service now streams its results token by token. You can work around it today using WebApp, but there’s no neat solution where Meteor.user(), permissions, and subscriptions all play nicely together. This is a real gap.
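To make the gap concrete, here is roughly what the WebApp workaround looks like today. This is a minimal sketch, assuming Meteor 3’s Express-based WebApp.handlers; `sseFrame` and `streamTokens` are illustrative names I made up, not an existing Meteor API:

```javascript
// Minimal server-sent-events sketch. `sseFrame` and `streamTokens` are
// hypothetical helper names, not part of any Meteor API.

// Encode one SSE frame.
function sseFrame(data) {
  return `data: ${JSON.stringify(data)}\n\n`;
}

// Write a sequence of tokens to an Express-style response as SSE.
function streamTokens(res, tokens) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  for (const token of tokens) {
    res.write(sseFrame({ token }));
  }
  res.write('data: [DONE]\n\n'); // conventional end-of-stream marker
  res.end();
}

// In a Meteor 3 app you would mount this on the built-in Express instance,
// which is exactly where the gap shows up: nothing in this handler knows
// about Meteor.user(), permissions, or subscriptions.
//
//   import { WebApp } from 'meteor/webapp';
//   WebApp.handlers.get('/ai/stream', (req, res) =>
//     streamTokens(res, ['Hello', ' ', 'world']));
```

It works, but all the account/permission plumbing that Methods and publications give you for free has to be rebuilt by hand.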

Meteor could be much more, and WebMCP is part of that (GitHub Discussions #13818)

In a broader discussion about Meteor’s future, the idea came up that making any Meteor server MCP-capable by default could be a genuine differentiator in the LLM era. One interesting angle from @dchafiz: Chrome’s recent introduction of WebMCP, an in-browser MCP implementation, could allow Meteor Methods to become natively discoverable by AI agents without any extra setup. If Meteor shipped WebMCP support out of the box, with certain methods auto-discoverable, it could be a compelling story for building agentic web apps.

Building an MCP for Meteor (Forums)

@dchafiz is already building meteor-mcp, an MCP server designed specifically for Meteor app development, providing complete Meteor.js v3 API docs, code examples, and architectural guides to AI coding assistants like Claude, Cursor, and Windsurf. It’s a community-driven effort aimed at making Meteor the go-to choice for vibe-coders. Worth watching and contributing to.

The idea: make any Meteor server an MCP server by default

A specific proposal I found in the discussions: what if meteor run gave you an MCP server out of the box? Certain Methods could be auto-discoverable by AI agents, turning every Meteor app into something that LLMs can interact with natively. There’s healthy debate about whether MCP is the right protocol right now versus well-documented REST/DDP, but the underlying idea, that Meteor could be LLM-native by default, feels worth exploring seriously.

Agent instructions by default on project creation (Forums)

Motia, another framework shared by @paulishca, now generates an AGENTS.md file automatically when you scaffold a new project, making every new project AI-agent-ready out of the box. The question for us: should meteor create do the same? Shipping a sensible default AGENTS.md with every new Meteor project, with correct Meteor 3 patterns, async/await usage, and links to the llms.txt, would directly address the version confusion problem and be a low-effort, high-signal move.


What other frameworks are doing around AI

It’s useful to look at what the rest of the ecosystem is building to understand where the gaps are and where Meteor could leapfrog.

Vercel AI SDK

Vercel has built a TypeScript toolkit specifically for integrating LLMs into apps, standardizing text generation, structured output, tool calling, and streaming across providers like OpenAI, Anthropic, and Google. The SDK includes AI SDK UI, a set of framework-agnostic hooks for building chat interfaces and generative UIs with real-time streaming out of the box. It works with Next.js, SvelteKit, Nuxt, Svelte, and others. Notably absent from the getting-started guides: Meteor. This is a gap we could fill with a community integration.

LLM-first Web Framework (Minko Gechev)

A thought-provoking post from an Angular core team member exploring what a framework designed from the ground up for LLMs would look like. The key insight: LLMs perform best with opinionated, minimal APIs; a single way to do things, less surface area to “learn”. He also highlights two core problems that affect all frameworks today: API version mismatch (LLMs generating code for outdated versions) and lack of training data for newer patterns. Meteor has both of these challenges. His proposed solution, including full framework context in the LLM context window, is essentially what our llms.txt and AGENTS.md work is already doing.

Vinext: Agent Skills for framework migration (Cloudflare)

Cloudflare built Vinext, a full reimplementation of the Next.js API surface on Vite, built by one engineer with AI in a week. What’s most relevant for us is the migration pattern they introduced: Vinext ships with an Agent Skill that any AI coding tool can install with npx skills add cloudflare/vinext. The skill understands the compatibility surface, handles dependency changes and config generation, and flags what needs manual attention, all inside your existing AI coding assistant (Claude Code, Cursor, Copilot, etc.). This pattern could be exactly what Meteor needs for the v2 → v3 migration path. Imagine npx skills add meteor/migrate-to-v3. The tooling exists today.

Thesys React SDK: Generative UI

Thesys is building a React SDK that turns LLM responses into live, interactive user interfaces in real time: no copy-paste, no dev server restart, no manual wiring. They call this “Generative UI”: an interface that assembles itself dynamically based on LLM output, rendered on the spot. It’s an early but important signal that the frontend layer itself is becoming AI-driven, and frameworks need to support streaming, reactive state, and real-time UI updates natively to play in this space. Meteor’s reactivity model is arguably a better fit for this than Next.js, if we build the right bridges.


Thoughts?

Would love to hear thoughts, especially on the streaming gap, the migration skill idea, and the WebMCP angle. Who’s working on what, and where should we focus first?

7 Likes

I would be really interested and curious to study this further. I have multiple cases which require or could use some AI connectors.

1 Like

All of these sound really good. I would maybe hesitate to adopt any “opinionated” features like Generative UI tools into the package, these things are trendy, very imperfect, change very fast, maybe IDE is the right place for it, etc. And then maybe lots can be lifted via NPM packages.

In terms of core stuff, MongoDB Vector Search is interesting too, I’ve seen some cool uses of it, I think it works just fine with Meteor as-is but maybe something special could be built around it.

3 Likes

Shared by @nachocodoner via DM: Rspack is building agent skills on many fronts https://x.com/rspack_dev/status/2027268579973058874

The vector search in MongoDB does not replace a vector DB. I would not rely on it for AI vector operations.

Hmm, what would you recommend?

1 Like

For vectors I would use Qdrant (or Chroma) with a small embedding LLM. Workflows for embedding are easy to build both locally and in the cloud. It is important to use a model specialized in vector embeddings; in general they have a small footprint in both size and compute.


For graphs I would most probably go with Neo4j (community edition); under high traffic I would back it with Memgraph (like putting Redis in front of MongoDB). Nebula is also on my list.

I did this comparison in the past, around the time I started the project I currently work on. This is what I got for Qdrant vs MongoDB on vectors:
https://apps.abacus.ai/chatllm/share/conversation/b1d6519ec:1090269468

What I plan to do: keep non-vector data in MongoDB as usual and only vectorize the data I need vectorized.
Example in e-commerce: all e-commerce data is in MongoDB, and showing it in (filtered) searches comes from MongoDB. If you search for a t-shirt, you get a t-shirt, not office shirts. But for product suggestions I need the vectors, because with a t-shirt I would suggest some casual pants with a color match to the t-shirts you see on screen or to a selected t-shirt.

There is also another workflow where you have ephemeral data, such as a job with applicants. The job is filled and you no longer need the matching data. You vectorize all the CVs in the applications plus the job description, and once the recruitment is complete you can just delete the whole dataset. This could go nicely into an in-memory vector DB. Data about the job, the best candidates, the CVs, candidate info, etc. may still remain in MongoDB as required by the platform concept, but you can probably delete the matching-phase data. I wouldn’t bloat my MongoDB with these vectors; it sounds a bit like a microservice kind of thing.
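The “vectorize only what needs matching” split is easy to prototype before committing to a vector DB. Here is a toy sketch of the matching phase, assuming you already have embeddings from a small embedding model; in production this scoring would happen inside Qdrant or Chroma rather than in app code:

```javascript
// Toy nearest-neighbour match over in-memory embeddings. In production the
// vectors would live in Qdrant or Chroma; this only illustrates the workflow.

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate items (e.g. CV embeddings) against a query embedding
// (e.g. a job description) and return the top K ids by similarity.
function topMatches(queryVec, items, k = 3) {
  return items
    .map(({ id, vec }) => ({ id, score: cosine(queryVec, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Once recruitment is done, you simply drop `items`; the canonical data
// stays in MongoDB and only the ephemeral vectors are thrown away.
```

The point of the sketch is the lifecycle, not the math: the embeddings are disposable, the source documents are not.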

2 Likes

Personally, on all my projects I’ve made a nice “documentation” folder with the Meteor LLM docs and a Blaze doc that I “created” (by the way, we should think about making an official LLM doc for Blaze, like the Meteor one). Then I just have to ask Claude to refer to that documentation before each implementation if it needs to. I also included the docs for the libraries I use in my projects—Chart.js, Gridstack, etc. On top of that, I keep the documentation updated as I go based on my conversations with Claude: common issues it runs into, for example:

1.8 Standalone {{#if}}attribute{{/if}} silently breaks Blaze compilation

When a Blaze block helper ({{#if}}, {{#unless}}) outputs a standalone HTML attribute (like selected, checked, disabled), the blaze-html-templates compiler fails silently. The template is never registered, which leads to Cannot find module '../.././imports/ui/.../template.html' and Template.xxx is not defined.

It’s not optimal yet, but little by little it’s getting better :man_shrugging:

2 Likes

Meteor’s llms.txt is massive (700+ sections covering Blaze tutorials, Cordova, Svelte, Vue, every OAuth provider, changelogs, etc.). For most apps you only need a fraction of it.

What worked well for me was creating a curated .md rules file in my project’s .windsurf/rules/ folder with just the APIs my app actually uses: Collections/async variants, Methods, Pub/Sub, react-meteor-data hooks (useTracker, useSubscribe, useFind), Accounts, check/Match, DDPRateLimiter, and Meteor.settings. Ended up around 200 lines instead of the full doc.

The key things worth highlighting for Meteor 3.x specifically:

  • The *Async requirement for all server-side collection operations
  • callAsync on the client instead of call
  • The useSubscribe/useFind pattern from react-meteor-data

The full llms-full.txt is great as a reference you can point the AI to on demand (like Windsurf’s read_url_content), but loading it into context on every request adds significant token overhead for content that’s usually irrelevant to what you’re working on.

1 Like

A skill md would cover most things. The Hono skill, especially with a CLI, works really well. A good CLI is often much better than an MCP server.

1 Like

Great roundup @italojs — a lot of these threads clicked together for me too.

I’ve been working on something that directly addresses several of the points raised here, especially around making Meteor servers MCP-capable. It’s called meteor-wormhole, and it does exactly what a few people in this thread have been asking for: it turns your existing Meteor app into an MCP server, automatically.

What it does

meteor-wormhole is a Meteor 3 package that exposes your Meteor.methods as MCP tools — no extra server, no manual wiring. You add the package, call Wormhole.init(), and AI agents can immediately discover and call your methods via the standard MCP protocol.


```javascript
import { Wormhole } from 'meteor/wreiske:meteor-wormhole';

Wormhole.init({ mode: 'all', path: '/mcp' });
```

That’s it. Every method you define with Meteor.methods is now a tool that Claude, GPT, or any MCP-compatible agent can find and invoke.

How it connects to this discussion

On the “MCP server by default” proposal — This is the core idea behind wormhole. It hooks into Meteor.methods at registration time and automatically exposes them. Internal Meteor methods (login, logout, DDP internals, etc.) are excluded by default so it’s safe out of the box. You can also run it in opt-in mode if you want fine-grained control over what gets exposed.

On the streaming gap — The MCP bridge uses streamable HTTP transport (not WebSocket), with proper session management. It embeds directly into Meteor’s WebApp layer, so it respects your existing deployment setup. This gives AI agents a clean, stateless way to interact with your app.

On the WebMCP angle — Since wormhole exposes a standard MCP endpoint at /mcp, it would be compatible with Chrome’s WebMCP once that lands in browsers. Your Meteor methods would become discoverable by in-browser AI agents without any additional work.

On security — You can optionally require an API key (Bearer token) for the MCP endpoint, and in opt-in mode you declare exactly which methods are available with descriptions and JSON Schema input validation. Schemas are validated via Zod under the hood so AI agents get proper type contracts.

What it looks like from the AI side

An agent connects to http://localhost:3000/mcp, calls listTools(), and sees your methods with descriptions and parameter schemas. It calls callTool('createTodo', { title: 'Buy milk' }) and gets back the result as structured content. Your existing method code doesn’t change at all.
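For anyone curious what’s actually on the wire: MCP’s streamable HTTP transport is JSON-RPC 2.0 over POST. A sketch of the two requests an agent would send to an endpoint like /mcp (the request shapes follow the MCP spec; `createTodo` is the hypothetical method from the example above, and `mcpRequest` is just an illustrative helper):

```javascript
// Build the JSON-RPC 2.0 envelopes an MCP client sends over streamable HTTP.
// `createTodo` is a hypothetical Meteor method; `mcpRequest` is not a real API.

function mcpRequest(id, method, params) {
  return { jsonrpc: '2.0', id, method, ...(params ? { params } : {}) };
}

// 1) Discover the exposed tools.
const listTools = mcpRequest(1, 'tools/list');

// 2) Invoke one tool with arguments.
const callTool = mcpRequest(2, 'tools/call', {
  name: 'createTodo',
  arguments: { title: 'Buy milk' },
});

// Each envelope is POSTed as JSON to the MCP endpoint (e.g.
// http://localhost:3000/mcp); the server may answer with a plain JSON body
// or an SSE stream, which is what makes the transport streamable.
```

In practice an agent uses an MCP client library rather than raw fetches, but seeing the envelopes makes it clear how little ceremony is involved.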

Where it’s at

The package works today on Meteor 3.4+. It uses the official @modelcontextprotocol/sdk, supports both all-in and opt-in exposure modes, and has a full test suite. The repo is at github.com/wreiske/meteor-wormhole.

Also check the deployed site here: https://wormhole.meteorapp.com/

I think this sits nicely alongside the other work happening — @dchafiz’s meteor-mcp for development-time AI assistance and @nachocodoner’s AGENTS.md for codebase understanding. Wormhole is the runtime piece: it makes your running Meteor app something AI agents can actually interact with.

1 Like

From my experience, agent-assisted development works best when you can loop quickly between developing and validating.

Validating consists of, at the very minimum, type checking and automated testing.

I feel like Meteor is lacking in both areas.

TypeScript support is poor:

  1. The official Meteor API types and the generated types are often outdated.
  2. The officially recommended tooling for creating strong type inference between server and client, in the context of Meteor methods and pub/sub, is lacking. The moment you have to cast something or declare an ambient type for the whole app, you have fundamentally failed at type inference.

If an LLM agent cannot validate its output by running type checking, it’s more error prone.

The automated testing I’m less confident about, as I’ve only done unit and e2e testing without meteormocha, mainly with Jest and Playwright (where I’m actually running the local Meteor app, whose startup takes more than 10 seconds, which isn’t great).

I feel like there’s a lot to improve here for the community, like standard tooling, tutorials, examples.
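On the slow e2e loop specifically: Playwright’s standard webServer option can at least hide the Meteor boot time behind reuseExistingServer, so the 10+ second startup is paid once per dev session instead of once per test run. A sketch using standard Playwright config options (the port and timeout values are assumptions):

```javascript
// playwright.config.js: let Playwright start (or reuse) the Meteor app.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: { baseURL: 'http://localhost:3000' },
  webServer: {
    command: 'meteor run --port 3000',
    url: 'http://localhost:3000',
    // Reuse an already-running dev server locally; CI starts fresh.
    reuseExistingServer: !process.env.CI,
    // Meteor's first boot is slow; give it generous time.
    timeout: 120_000,
  },
});
```

It doesn’t fix Meteor’s startup time, but it keeps the agent’s edit/test loop tight, which is what matters for validation.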

4 Likes

Hey @wreiske maybe we could join forces in this mcp effort!

I made a small package a few days ago to test something that has been annoying me for a while in Meteor: the lack of discoverability of methods/publications. When joining a project you have to walk around the codebase to learn which methods you can call and in which context you can call them.

What if we had something that helped those engineers get onboarded, and also helped us document our code and validate it?

I introduce you to meteor-discovery.

At the moment it is in a very experimental stage, but here is what it does:

It exposes a single endpoint at /discovery that returns a JSON object containing the names of all the Meteor methods and their corresponding parameter names.

To add this discovery page you must call createDiscoveryPage in your startup code, as described in the package docs.

With that in place you can start documenting your methods; they will appear on the /discovery page as you tag them.

To document methods:


```javascript
Meteor.methods({
  "links.insert": function () {
    this.addDescription("Inserts a new link with the given title and url");
    const { title, url } = this.validate({
      title: String,
      url: Match.Maybe(String)
    });
    // ... rest of the method implementation
  }
});
```

This will pop up on the discovery page as:

```json
{
  "methods": [
    {
      "methodName": "links.insert",
      "description": "Inserts a new link with the given title and url",
      "schema": {
        "title": "String",
        "url": "Maybe(String)"
      }
    }
  ]
}
```
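Since /discovery returns the schema as strings like "String" and "Maybe(String)", a consumer could also use it for lightweight argument checking before calling a method. A hypothetical sketch; the `matchesSchema` helper and the interpretation of the schema strings are my assumptions, not part of meteor-discovery:

```javascript
// Hypothetical consumer of the /discovery JSON: check call arguments against
// the string schema before invoking the method. Not part of meteor-discovery.

const checkers = {
  String: (v) => typeof v === 'string',
  Number: (v) => typeof v === 'number',
  Boolean: (v) => typeof v === 'boolean',
};

function matchesSchema(args, schema) {
  return Object.entries(schema).every(([key, type]) => {
    const maybe = /^Maybe\((.+)\)$/.exec(type);
    if (maybe) {
      // Maybe(X): the field may be absent; if present it must match X.
      return args[key] === undefined || checkers[maybe[1]](args[key]);
    }
    return checkers[type] ? checkers[type](args[key]) : false;
  });
}

// e.g. against the links.insert schema shown above:
const linksInsertSchema = { title: 'String', url: 'Maybe(String)' };
```

Something like this could be the bridge between onboarding docs and the MCP tool schemas discussed earlier in the thread.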