Cursor IDE rules

So I’m starting to test Cursor with Meteor. I’m curious how others are doing with this or other AI tools. I’m looking to see if it can help me speed up building features and go beyond just simple in-line code suggestions.

I think we can collaborate here a bit to share and develop rules for the AI that will make it work better with Meteor. Maybe later down the line we could upgrade Mantra to provide guidance to the AI on how to work in more complex Meteor projects.

Thoughts? Suggestions for a better AI IDE?


The M3 docs AI has been very helpful. Perhaps there’s a way to integrate it into Cursor.

I asked if there was an API for that when it came out, but there isn’t. Unless that’s changed, there’s no way to integrate it with anything.

Was it trained on a set of data provided by Meteor? If so, then maybe we could use that data set to train an AI that integrates with Cursor.

That’s not how this works. You don’t train the LLM on that specialized knowledge. Instead, you give the LLM a way to simply look up the needed information in the docs. That way you can easily switch out the LLM, and also update the documentation without having to go through the training all over again. Besides, it’s never a good idea to fully rely on the knowledge the LLM got through training.
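A minimal sketch of that look-it-up approach, assuming nothing beyond plain Node.js. The doc chunks and scoring below are purely illustrative; a real setup would use embeddings rather than word overlap:

```javascript
// Illustrative chunks; a real tool would load these from the Meteor docs.
const docChunks = [
  { title: 'Methods', text: 'Meteor methods are remote functions callable from the client.' },
  { title: 'Publications', text: 'Publications push reactive data from server collections to clients.' },
];

// Lowercase word set for crude overlap scoring.
function tokenize(s) {
  return new Set(s.toLowerCase().match(/[a-z]+/g) || []);
}

// Pick the chunk sharing the most words with the question.
function bestChunk(question, chunks) {
  const q = tokenize(question);
  let best = chunks[0];
  let bestScore = -1;
  for (const chunk of chunks) {
    const words = tokenize(chunk.title + ' ' + chunk.text);
    let score = 0;
    for (const w of q) if (words.has(w)) score += 1;
    if (score > bestScore) { bestScore = score; best = chunk; }
  }
  return best;
}

// Paste the retrieved chunk into the prompt instead of retraining anything.
function buildPrompt(question, chunks) {
  const chunk = bestChunk(question, chunks);
  return `Documentation (${chunk.title}):\n${chunk.text}\n\nQuestion: ${question}`;
}

console.log(bestChunk('How do publications send data?', docChunks).title); // logs "Publications"
```

Swapping the model then only means changing which API the final prompt goes to; the docs stay a plain data file you can update any time.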


To get back to Jan’s original post: I have tried TabNine, GitHub Copilot and continue.dev so far, and continue.dev is the one I am using atm (using Codestral for tab completion and Claude 3.5 Sonnet for the chat).

The ‘simple’ in-line code suggestions are by far the most important part of AI coding assistance. When it works well, it feels like your computer can read your mind and put down several lines of code exactly like you would have done yourself. At other times it still feels like it can read my mind but is being an asshole on purpose.

One major pain point: LLMs can’t really ‘edit’ your code. They can write code top to bottom, but they can’t just look at your code and change it in only one or two places. The coding assistants have functionality for inserts and edits, but it is still quite error-prone.

Overall: you have to learn (usually the hard way) what kinds of errors your coding assistant likes to make, and then watch out for them. And in some cases (when it’s being a stubborn bastard) it’s best to just turn the assistance off for a couple of minutes and do the edits yourself. You have to learn to let it do the stuff it’s good at without letting it screw things up for you.


My experience so far is that if you are outside Tailwind/shadcn UI or any other popular ecosystem, it fails miserably and will add stuff as if you were in those ecosystems (even though you are not and you tell it so explicitly).

So right now my experience is that if you want it to do something beyond simple things, it is usable only with the most popular tools. Once you have something of your own, or use Meteor, it fails.
Maybe I can fine-tune things in the next few days. Maybe having some form of the Meteor documentation that can be added to the context might help, but I suspect this observation extrapolates to a wider point about the current and future state of AI.

So far, autocomplete is just unreliable. It annoys as often as it helps; in a practical sense, it would probably make sense to only enable it when you begin writing a file or function, but not in any other scenario.

Multi-shot conversation is often helpful, but extremely unreliable, even across release cycles of the most popular frameworks ever. For instance, it’s utterly miserable at navigating between the Pages Router and App Router docs of Next.js, despite the latter having been the standard for two years.

My process for actually getting code from an LLM normally involves scraping the docs of the framework at hand into an embedded knowledge base (because if you don’t embed them, it gets confused and doesn’t use the full docs), then providing a few pre-existing files as examples.

The best thing you can do for AI tooling is to provide documentation in a markdown format that is easy to grab. Ideally there would be a separate way to parse the docs specifically, to focus the LLM’s attention on particular parts.
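As a sketch of what “easy to grab” could mean in practice: split a markdown doc into per-heading chunks, so a tool can hand the LLM only the relevant section. The headings and content below are made up for illustration:

```javascript
// Split a markdown string into one chunk per "## " heading.
function chunkMarkdown(md) {
  const chunks = [];
  let current = { heading: '(intro)', lines: [] };
  for (const line of md.split('\n')) {
    const m = line.match(/^##\s+(.*)/);
    if (m) {
      // Close out the previous section (skip an empty intro).
      if (current.lines.length) chunks.push(current);
      current = { heading: m[1], lines: [] };
    } else {
      current.lines.push(line);
    }
  }
  if (current.lines.length) chunks.push(current);
  return chunks.map((c) => ({ heading: c.heading, text: c.lines.join('\n').trim() }));
}

const docs = '## Methods\nRemote functions.\n## Publications\nReactive data.';
console.log(chunkMarkdown(docs).map((c) => c.heading)); // logs [ 'Methods', 'Publications' ]
```

Each chunk can then be indexed or embedded on its own, so the assistant pulls in one focused section instead of the whole manual.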

The highest skill to master is ditching attempts to prompt the LLM before you overcommit to making it work. I guess in this regard, with blazing-fast compile times, you could feed the build artifact back to the LLM so it can evaluate its own output.

One interesting thing about using LLMs through the API is that most users aren’t aware that API outputs aren’t wrapped in the provider’s conversational prompt, so out of the box they are worse at understanding user requests. Anyway, the thing I noticed about Meteor in particular is that what I REALLY need help with is the organization of my data/collection requests, and what I want is for it to implement the concept in my head. I noticed that whenever the LLM is instructed to only evaluate (censor) the prompt with regard to the technical implementation of functionality, it produces much better results for what I need from Meteor. I can then later prompt for a lavish React component and add Meteor hooks as I see fit.
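That restrict-the-prompt idea could be sketched as a two-stage prompt: first ask only for the data layer, then separately for the UI with the first answer as fixed context. The prompt wording below is purely illustrative, not a fixed recipe:

```javascript
// Stage 1: ask only for the data/collection design, explicitly forbidding UI.
function buildDataLayerPrompt(featureDescription) {
  return [
    'You are reviewing a Meteor application.',
    'Respond ONLY with the collection schema, publications, and method',
    'signatures needed for the feature below. Do not write any UI code.',
    '',
    `Feature: ${featureDescription}`,
  ].join('\n');
}

// Stage 2: ask for the component, treating the stage-1 answer as settled.
function buildUiPrompt(featureDescription, dataLayerAnswer) {
  return [
    'Write a React component for the feature below.',
    'Assume this data layer already exists; do not redesign it:',
    dataLayerAnswer,
    '',
    `Feature: ${featureDescription}`,
  ].join('\n');
}
```

Keeping the stages separate means the model can’t drift into UI details while the data layer is still being decided.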


Aside from the AI features, how is the Cursor IDE for use with Meteor?

It’s just Visual Studio Code otherwise. Cursor provides no benefit over plain VS Code besides seamless AI plugin integration. Technically, there are open-source VS Code plugins on par with Cursor right now.


Related: I am currently trying to use Devin.AI to speed up the process of the Meteor v2 → v3 upgrade.

Status: we have a large repo… so this has been kind of painful. I may do a blog post on the experience… but I’m about to head off on vacation :slight_smile: One major challenge with a large repo is that the AI chokes… so I’m in the process of getting it to split up any large files into smaller ones, so I can do smaller “sessions”.

Interesting article on training LLMs to be better at working with a code base:


In my case I used WebStorm with its integrated AI tool, and I was able to refactor my Meteor 2 app (especially to find the Mongo methods, change them to their Async-suffixed versions, and add the missing async/await statements). It worked very well: it codes directly in your files. With the “Generate code” option you select the code that you want to refactor, and it replaces it with the new code. I think nowadays there’s a community version of WebStorm, but the AI tool is still paid. I used the trial version (7 days) to do my work xD.
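For anyone unfamiliar with the refactor being described, this is roughly the Meteor 2 → 3 change. The snippet uses a stand-in object rather than a real `Mongo.Collection` so it runs anywhere; `findOneAsync` is the actual Meteor 3 method name:

```javascript
// Stand-in for a Mongo.Collection, just to make the pattern runnable here.
const fakeCollection = {
  docs: [{ _id: '1', name: 'a' }],
  // Meteor 3 replaces the synchronous findOne with findOneAsync on the server.
  async findOneAsync(id) {
    return this.docs.find((d) => d._id === id);
  },
};

async function getTask(collection, taskId) {
  // Meteor 2 style:  const doc = Tasks.findOne(taskId);   (sync)
  // Meteor 3 style:  add the Async suffix and await the result.
  return collection.findOneAsync(taskId);
}

getTask(fakeCollection, '1').then((doc) => console.log(doc.name)); // logs "a"
```

The mechanical part of the upgrade is mostly this: suffix the Mongo calls with `Async`, then propagate `async`/`await` up through every caller, which is exactly the tedious chore an AI refactoring tool helps with.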


I’ve changed my programming today and I will try to finish my upgrade of Mantra and get AI to work with it:

I’ve been testing AI integrations with IDEs.

I usually use Webstorm, but its AI assistant is still lacking compared to Cursor’s.

Although I’m not a fan of VS Code, I’ve been using Cursor daily for the past three months and only open Webstorm for tasks where Cursor falls short.

The composer feature is fantastic and provides an excellent starting point for all tasks.

Typically, I provide a clear description and code examples (tagging existing files or folders), and it handles 80% or more of the work flawlessly. I then review and tweak anything it couldn’t complete correctly.

Overall, working with Cursor and Meteor 3 has been a great experience.


Recently started playing with Windsurf, will try to produce something for that.

Hey everyone,

This is my .cursorrules file. I’m not super happy with the results, but Cursor is really helping and improving my DX.

# Project: Meteor.js version 2.8 and Blaze Application

## Coding Standards
- Use JavaScript for coding.
- Use ES6 syntax.

## Style Guidelines
- Follow Airbnb React/JSX Style Guide
- Use 2 spaces for indentation

## Best Practices
- The code should follow clean code principles and be easy to understand and maintain.
- Implement responsive designs
- Prefer async/await for asynchronous operations

## Testing
- Write testable code
- Write unit tests for all new components and utilities
- Aim for at least 80% test coverage

## Documentation
- Use JSDoc comments for all functions and components
- Keep README.md up-to-date with project setup and contribution guidelines 
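As a suggestion (purely hypothetical, adjust to your stack), a few Meteor-specific lines one might append to a file like this, to counter the tendency to drift into the popular ecosystems mentioned earlier in the thread:

## Meteor Specifics
- This is a Meteor 2.8 app using Blaze templates
- Use Meteor methods and publications/subscriptions for data access, not REST endpoints
- Do not suggest Tailwind, shadcn/ui, or Next.js patterns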

I wonder what others are using. Maybe we can find an optimum rule set for all.


A lot of good comments here on this ‘AI’ roadblock/choke-point in coding.

I am going for an approach which is less “IDE Trip” and more putting ‘AI’ in its right place ( not necessarily in the editor ) … was never a huge fan of collaborative coding either, pair programming, etc.

On the way there…

Give Zed a try ( and perhaps Aider at least to be aware of them )

Zed in particular is changing my approach in many ways, because it is a lot more than an IDE only, but it is geared toward people who miss vim … like me. But that should not make you think ‘CLI’ or ‘minimalist’ per se. It is just the right tool for the time and place. Like with vim where here we are in a CLI all of a sudden ( in the grand scheme of history ) … now what? What even is this “digital context” in my day?

I am as likely to pull Zed out because it is such a great terminal too, which I cannot think of saying about any IDE. I always wanted to be able to move tabs/windows around which might be code, terminal, now LLM chats… all first-class without being locked into a console drawer, etc. It just behaves how you wish someone would have done it finally for goodness sake. And, yes… interfaces with all the LLMs.

Awesome community. The Zed discord is full of :ninja: and you will get a great glimpse at the entire field.

Written in Rust by the creators of Atom ( from which Electron was extracted )

Also the Zed approach to prompt libraries is going somewhere good. Reminds me of why fabric was initially appealing until it was too limited and too meh to keep. Especially with the telemetry vibe ( likely unfounded, but the feeling is there ) … it is just an ocean of excellent prompts, and inspiration to write great prompts. Once that thinking was added to the Zed prompt library approach, I find myself using Zed with transcripts and trying to coax out different takes on the outline, summaries, repurposing of talks, etc.

Put it this way… I got my wife using Zed for writing Markdown. She is serious about flow and focus in writing… loves Zed… it is more of a generally awesome tool, less of a cheap trick… or black box.

The fact that everyone is going in one direction en masse ( we call this ‘herd’ dynamics ) is not a good sign. Zed is more of an exploration of WTF are we even doing right now? I can definitely see them pulling another Electron extraction out of this… but this time, not as a hack… as a mature “ah ha” moment to help with the WTF factor. As the individual sits at a ‘computer’ … what is even going on between keyboard and chair?

Their model chat integration is coming along, with a different approach, and it works great with Ollama … as well as the data-center LLMs. It is built around collaborative tools, but like I said, I will never use those. Most people in F/OSS seem like they would love that, though.

Biggest issue overall in ‘AI’ ( especially local ) is probably context length, and then as was pointed out, how to properly guide and focus LLMs no matter what IDE. But that comes back to the beginning… we are just getting started with including ‘AI’ in code, and that seems like it revealed that we don’t quite know what we are doing yet, or are not very conscious about it, overall.

Perhaps there are some individuals with great approaches to code, but the field overall has no direction. Everyone being in a handful of editors which alone possess a usable workflow means we are stuck.


Zed is on a weekly update cycle or so right now, with stable releases coming very fast. I have seen a lot of community engagement and ideas go into code rapidly, and they are forward-looking. Every time something is brought up, their core team has probably already roadmapped it and is trying to massage functionality into place, versus just Frankensteining something together like the current situation in the ‘IDE’ experience.

No black box. No niches. No being forced in a certain herd direction. Actually a better all-around tool.


Also a Zed user and agree it’s very nice. I haven’t used its built-in AI features though.

Would be interested to hear more about your approach. I will spin up ChatGPT or Claude on bite-sized problems, code samples, and questions about different systems / technologies.

I haven’t gone whole hog with vibe coding. I suppose with big enough context windows, maybe it really is the future. Though, how do you make a non-deterministic system avoid hallucinating / bullshitting? We’ll see.

It’s interesting that with all the hype that AI gets, it still bullshits on simple things (even outside coding). It reminds me of the Gell-Mann amnesia effect.

That’s :100: and completes the picture for me now, in explaining LLMs to regular people.


I am in the process of extracting my approach from my post-Meteor implementation of a multi-client/server/multi-agent design. Trying to dogfood as much as possible and build the tool with the tool, somewhat, except for the first steps where there is no tool; as an exercise ( in patience ) to make sure I really understand each piece.

Once I come through that I expect to outline the prompts in the Zed library leading up to the switch out of an IDE, how the workflow goes post-IDE, etc… and listing the exact Ollama models used, etc… all reproducible and then comparable with different models… perhaps even always re-evaluating LLMs as new ones surface. Ideally I want to get to where there is a competition for the best work, then ranking.

If you are interested @jam, without speeding things up on my end since this is trying to be an extraction and not become its own project until after my actual objective done with the system finished first ( :sweat_smile: ) … I would definitely open up the approach to you for testing it out. Seems to meet a real/desperate need out here.

In the Zed community especially it would fit perfectly, since their approach is probably not going to compete with the Cursor, Continue, Cline, RooCode, and other ‘communities’ ( mostly just Visual Studio Code extensions ), at least not enough to pull over the vibe coders ( which might be for the best ) … I am going 100% IDE-agnostic instead, letting coders be sane/free. Starting with me!


For the record, I started out just using Open WebUI with local LLMs, and then naturally moved over to Zed for the majority of cases, using its chat interface with the same Ollama server. By the time this system is extracted, it will still be all three for different things, since sometimes it is great to have each one for different reasons… but the bulk of the “real work” does not fit into Zed or Open WebUI yet, hence the fork in the road for me. When I talk about this in various IDE ecosystems, most coders are just like :interrobang: and all facing the same :person_facepalming: