Cursor IDE rules

So I’m starting to test Cursor with Meteor. I’m curious how others are doing with this or other AI tools. I’m looking to see if it can help me speed up building features and go beyond just simple in-line code suggestions.

I think we can collaborate here a bit to share and develop rules for the AI that will make it work better with Meteor. Maybe later down the line we could upgrade Mantra to provide guidance to the AI on how to build out more complex Meteor projects.
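As a concrete starting point, a project-level `.cursorrules` file could encode Meteor-specific conventions. The rules below are purely illustrative suggestions of mine, not an official or tested ruleset:

```
# .cursorrules (illustrative example, adjust to your project)
- This is a Meteor 3 project. On the server, use the async Mongo collection
  methods (findOneAsync, insertAsync, updateAsync, removeAsync) with await.
- Use Meteor.methods and Meteor.publish/subscribe for client-server data flow;
  do not invent REST endpoints.
- Do not assume Tailwind or shadcn/ui; follow the UI patterns already in this repo.
```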

Thoughts? Suggestions for a better AI IDE?

1 Like

The M3 docs AI has been very helpful. Perhaps there’s a way to integrate it into Cursor.

I asked if there was an API for that when it came out, but there isn’t. Unless that’s changed, there’s no way to integrate it with anything.

Was it trained on a set of data provided by Meteor? If so, maybe we could use that data set to train an AI that integrates with Cursor.

That’s not how this works. You don’t train the LLM on that specialized knowledge. Instead, you give the LLM a way to simply look up the needed information in the docs. That way you can easily switch out the LLM and also update the documentation without having to go through training all over again. Besides, it’s never a good idea to fully rely on the knowledge the LLM got through training.
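The shape of that "look it up in the docs" idea can be sketched in a few lines. This is a minimal, hypothetical example (real setups use embeddings; the doc snippets here are invented for illustration): retrieve the most relevant snippet by keyword overlap and prepend it to the prompt.

```javascript
// Tiny stand-in for a docs knowledge base; real entries would be scraped docs.
const DOCS = [
  { title: 'Methods', text: 'Meteor.methods registers server functions callable from the client.' },
  { title: 'Publications', text: 'Meteor.publish sends reactive data sets to subscribed clients.' },
];

// Lowercase word tokens, ignoring punctuation.
function tokenize(s) {
  return s.toLowerCase().match(/[a-z]+/g) || [];
}

// Pick the doc snippet with the most words in common with the question.
function retrieve(question) {
  const q = new Set(tokenize(question));
  let best = DOCS[0];
  let bestScore = -1;
  for (const doc of DOCS) {
    const score = tokenize(doc.text).filter((w) => q.has(w)).length;
    if (score > bestScore) {
      best = doc;
      bestScore = score;
    }
  }
  return best;
}

// Prepend the retrieved snippet as context, instead of relying on training data.
function buildPrompt(question) {
  const doc = retrieve(question);
  return `Context (${doc.title}): ${doc.text}\n\nQuestion: ${question}`;
}
```

Because the knowledge lives in `DOCS` rather than in the model weights, swapping the LLM or updating the docs requires no retraining.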

1 Like

To get back to Jan’s original post: I have tried TabNine, GitHub Copilot and continue.dev so far, and continue.dev is the one I am using atm (using Codestral for tab completion and Claude 3.5 Sonnet for the chat).

The ‘simple’ in-line code suggestions are by far the most important part of AI coding assistance. When it works well, it feels like your computer can read your mind and puts down several lines of code exactly like you would have written them yourself. At other times it still feels like it can read your mind but is being an asshole on purpose.

One major pain point: LLMs can’t really ‘edit’ your code. They can write code top to bottom, but they can’t just look at your code and change it in only one or two places. The coding assistants have functionality to do inserts or edits, but it is still quite error-prone.

Overall: you have to learn (usually the hard way) what kinds of errors your coding assistant likes to make and then watch out for them. And in some cases (when it’s being a stubborn bastard) it’s best to just turn the assistance off for a couple of minutes and do the edits yourself. You have to learn how to let it do the stuff it’s good at without letting it screw things up for you.

1 Like

My experience so far is that if you are outside Tailwind/shadcn/ui or any other popular ecosystem, it fails miserably and will add stuff as if you were in those ecosystems (even though you are not and you tell it so explicitly).

So right now my experience is that if you want it to do something beyond simple tasks, it is usable only with the most popular tools. Once you have something of your own, or use Meteor, it fails.
Maybe I can fine-tune things in the next few days. Maybe having some form of the Meteor documentation that can be added to the context might help, but I have a feeling the observation above could be extrapolated to a wider point about the current state and future of AI.

So far, autocomplete is just unreliable. It annoys as often as it helps; in a practical sense, it would probably make sense to enable it only when you begin writing a file or function, and in no other scenario.

Multi-shot conversation is often helpful, but extremely unreliable, even across release cycles of the most popular frameworks ever. For instance, it’s utterly miserable at navigating between the Pages Router and App Router docs of Next.js, despite the latter having been the standard for two years.

My process for actually getting code from an LLM normally involves scraping the docs of the framework at hand into an embedded knowledge base (because if you don’t embed, it gets confused and doesn’t use the full docs), then providing a few pre-existing files as examples.

The best thing you can do for AI tooling is to provide documentation in Markdown format that is easy to grab. Ideally there would be a separate way to parse the docs specifically, to focus the LLM’s attention on particular parts.
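One simple way to make Markdown docs "grabbable" by sections could look like this. A rough sketch, not a robust Markdown parser: it splits a doc into heading-scoped chunks so a retrieval step (or a human) can feed the LLM one relevant section instead of the whole file.

```javascript
// Split a Markdown string into { heading, text } chunks, one per heading.
// Assumes ATX-style headings (#, ##, ...); fenced code blocks containing
// '#' lines would need extra handling in a real implementation.
function chunkMarkdown(md) {
  const chunks = [];
  let current = { heading: '(intro)', lines: [] };
  for (const line of md.split('\n')) {
    if (/^#{1,6}\s/.test(line)) {
      // Start a new chunk; keep the previous one if it had any body text.
      if (current.lines.length) chunks.push(current);
      current = { heading: line.replace(/^#+\s*/, ''), lines: [] };
    } else {
      current.lines.push(line);
    }
  }
  if (current.lines.length) chunks.push(current);
  return chunks.map((c) => ({ heading: c.heading, text: c.lines.join('\n').trim() }));
}
```

Each chunk can then be embedded or pasted into the context individually, which is exactly the "focus the LLM's attention on specific parts" idea above.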

The highest skill to master is knowing when to ditch your attempts at prompting the LLM before you overcommit to making it work. In this regard, with a blazing-fast compile time you could feed the build artifact back to the LLM so it can evaluate its own output.

One interesting thing about using LLMs through the API is that most users aren’t aware that API outputs aren’t wrapped in the provider’s conversational prompt, so out of the box they are worse at understanding user requests. Anyway, the thing I noticed about Meteor in particular is that what I REALLY need help with is the organization of my data/collection requests, and what I want is for it to implement a concept in my head. I noticed that whenever the LLM is instructed to evaluate the prompt only with regard to the technical implementation of functionality, it produces much better results for what I need from Meteor. I can then prompt for a lavish React component later and add Meteor hooks as I see fit.

1 Like

Aside from the AI features, how is the Cursor IDE for use with Meteor?

It’s just Visual Studio Code otherwise. Cursor provides no benefits over plain VS Code besides seamless AI plugin integration. Technically, there are open-source VS Code plugins on par with Cursor right now.

1 Like

Related: I am currently trying to use Devin.AI to speed up the process of the Meteor v2 → v3 upgrade.

Status: we have a large repo, so this has been kind of painful. I may write a blog post about the experience, but I’m about to head off on vacation :) One major challenge with a large repo is that the AI chokes, so I’m in the process of getting it to split up any large files into smaller ones so I can do smaller “sessions”.
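A crude version of that splitting step could be sketched as follows. This assumes you just want pieces under a line budget, cut at blank lines where possible; it knows nothing about syntax, so a real tool would split at function or module boundaries instead:

```javascript
// Split a long source string into pieces of at most maxLines lines,
// preferring to cut at a blank line so functions aren't sliced mid-body
// (best effort only; falls back to a hard cut when no blank line is near).
function splitForSessions(source, maxLines = 200) {
  const lines = source.split('\n');
  const pieces = [];
  let start = 0;
  while (start < lines.length) {
    const end = Math.min(start + maxLines, lines.length);
    let cut = end;
    // Back up to the nearest blank line inside the window.
    while (cut > start + 1 && cut < lines.length && lines[cut - 1].trim() !== '') cut--;
    if (cut === start + 1) cut = end; // no blank line found: hard cut
    pieces.push(lines.slice(start, cut).join('\n'));
    start = cut;
  }
  return pieces;
}
```

Each piece can then be handed to the AI in its own session, which is one way to work around the "large repo makes the AI choke" problem.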

Interesting article on training LLMs to be better at working with a code base:

1 Like

In my case I used WebStorm with its integrated AI tool to refactor my Meteor 2 app (especially to find the Mongo methods, change them to the Async-suffixed variants, and add the missing async/await statements), and it worked very well. It codes directly in your files: with the “Generate code” option you select the code you want to refactor, and it replaces it with the new code. I think there’s now a community version of WebStorm, but the AI tool is still paid. I used the trial version (7 days) to do my work xD.
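For the purely mechanical part of that rename, even a naive text transform gets you partway. The sketch below is illustrative only, not a safe migration tool: a real migration should use an AST-based codemod or an IDE refactoring like the one described above, since regexes can’t add the required await or tell collections apart from other objects.

```javascript
// Meteor 2 -> 3 sketch: rename common Mongo collection calls to their
// *Async variants. Regex-based and deliberately naive: it rewrites any
// `.method(` occurrence and does NOT insert the needed `await`.
const RENAMES = [
  ['findOne', 'findOneAsync'],
  ['insert', 'insertAsync'],
  ['update', 'updateAsync'],
  ['upsert', 'upsertAsync'],
  ['remove', 'removeAsync'],
];

function renameMongoCalls(source) {
  let out = source;
  for (const [from, to] of RENAMES) {
    // Match only method-style calls like `.findOne(`; the literal `(`
    // after the name keeps already-renamed `.findOneAsync(` calls intact.
    out = out.replace(new RegExp(`\\.${from}\\(`, 'g'), `.${to}(`);
  }
  return out;
}
```

Running it over `Tasks.findOne(taskId)` yields `Tasks.findOneAsync(taskId)`; the missing `await` (and any false positives on non-collection objects) is exactly the part the AI-assisted or manual review still has to handle.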

2 Likes

I’ve changed my plans for today and will try to finish my upgrade of Mantra and get AI to work with it:

I’ve been testing AI integrations with IDEs.

I usually use WebStorm, but its AI assistant is still lacking compared to Cursor’s.

Although I’m not a fan of VS Code, I’ve been using Cursor daily for the past three months and only open WebStorm for tasks where Cursor falls short.

The composer feature is fantastic and provides an excellent starting point for all tasks.

Typically, I provide a clear description and code examples (tagging existing files or folders), and it handles 80% or more of the work flawlessly. I then review and tweak anything it couldn’t complete correctly.

Overall, working with Cursor and Meteor 3 has been a great experience.

4 Likes