Running llama on Galaxy?

I might be tempted to try this out. Are there likely to be any pitfalls with regard to running it on Galaxy?

This package comes with pre-built binaries for macOS, Linux and Windows.

If binaries are not available for your platform, it’ll fall back to downloading the latest version of llama.cpp and building it from source with CMake. To disable this behavior, set the environment variable NODE_LLAMA_CPP_SKIP_DOWNLOAD to true.
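
Note that the env var has to be in place before the module loads. A minimal sketch; the variable name comes from the docs quoted above, while the dynamic-import pattern is just one way to guarantee the ordering:

```ts
// Force node-llama-cpp to use pre-built binaries only: never fall back
// to building llama.cpp from source (a Galaxy container is unlikely to
// have a working CMake toolchain anyway).
process.env.NODE_LLAMA_CPP_SKIP_DOWNLOAD = "true";

// Use a dynamic import so the assignment above runs first; a top-level
// `import` statement would be hoisted ahead of it.
const llamaCpp = await import("node-llama-cpp");
```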

You won’t have Metal or CUDA support on Galaxy, and you will most likely not have enough RAM either. I doubt it will run at all, and if it does, it will be really, really slow.
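
If you do try it anyway, it may be worth failing fast when the container obviously can’t fit the model. A rough sketch using Node’s built-in os module; the 8 GiB threshold is an illustrative placeholder, not a measured requirement:

```ts
import os from "node:os";

// Rough pre-flight check before loading any model. A 7B model at 4-bit
// quantization needs roughly 4-5 GB for the weights alone, plus context
// memory; the threshold below is an assumed placeholder.
const MIN_BYTES = 8 * 1024 ** 3; // 8 GiB, assumed

if (os.totalmem() < MIN_BYTES) {
  throw new Error(
    `Only ${(os.totalmem() / 1024 ** 3).toFixed(1)} GiB of RAM; ` +
      "this container is unlikely to fit the model."
  );
}
```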


Today, Galaxy supports running only Meteor.js apps.

Yes, but this is an npm module that could be run from within a Meteor app. At the same time, I think @janmp’s reply is very likely correct.
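
To make the “npm module inside a Meteor app” point concrete, here is a rough server-side sketch wrapping it in a Meteor method. The node-llama-cpp calls shown (getLlama, loadModel, LlamaChatSession) follow its v3-style API and may differ in other versions; the model path is a placeholder, and none of this has been tested on Galaxy:

```ts
import { Meteor } from "meteor/meteor";
// Server only. The exact node-llama-cpp API varies by major version;
// this assumes the v3-style getLlama() entry point.
import { getLlama, LlamaChatSession } from "node-llama-cpp";

let sessionPromise: Promise<LlamaChatSession> | undefined;

// Lazily load the model once per server process; loading is expensive.
async function getSession(): Promise<LlamaChatSession> {
  sessionPromise ??= (async () => {
    const llama = await getLlama();
    const model = await llama.loadModel({
      modelPath: "/path/to/model.gguf", // hypothetical placeholder
    });
    const context = await model.createContext();
    return new LlamaChatSession({ contextSequence: context.getSequence() });
  })();
  return sessionPromise;
}

Meteor.methods({
  async "llama.prompt"(text: string) {
    const session = await getSession();
    return session.prompt(text);
  },
});
```

Even so, the RAM and speed concerns above would still apply.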


Here’s another npm package for running LLMs: