Meteor as an Image fileserver - How to disable auto-restart on public?

I’m trying to get a setup running where one Meteor instance can send binary image data to another Meteor instance, which writes that data to an image file in /public/uploads and then serves that image for subsequent requests.

The issue I’ve run into is that Meteor restarts itself every time it detects that the /public/ directory has changed. Can I safely disable this behavior without also breaking its ability to serve the new image files?


Sounds like you’re running the app in dev mode.

When you bundle and deploy your app, meteor won’t host files added after bundling from the public directory. (And won’t reload due to changes there either)

I’d suggest:

  1. Hosting the files from NGINX (Or web server of your choice)
  2. And/Or, storing them in an S3 Bucket

Here’s an example config that does both: https://gist.github.com/nathan-muir/b33149515c00b620c6e7 - it proxies requests through to the Meteor instance running on the same server.


Another alternative is to store the images in the database https://atmospherejs.com/?q=collectionfs


Oh, wow, thanks for telling me. You just saved me a bunch of time when I got around to testing production deployment!

S3 won’t work for me, unfortunately, but a local nginx sounds like a great idea. Or I could run this under IISNode and use IIS if I’m on Windows.

Yep, just replace the /user-content/ location block with something like

  location /my-hosted-images/ {
    alias /path/to/my-hosted-images/;  # alias, not root: root would re-append /my-hosted-images/ to the path
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public";
  }

and you’ll be good-to-go.

Hey Nathan! What, in your opinion, would be the best method of storing images for a project that has gigabytes of them? Images are constantly being uploaded by the users.
I suppose it would not be a good idea to store them in the DB, given their size…

@avalanche1 Any cloud storage system would work well (from Amazon, Rackspace or Google).

There are huge advantages for not storing data locally on your server. And, to get the same architecture benefits without a cloud service, you’d probably end up building a complicated storage API… so why not use one that already exists.

The only downside is the complexity of understanding the APIs / designing your app around URL signing / “fetching” resources instead of just grabbing them locally.


I personally have found Cloudinary to be a very nice and easy way to allow image uploads. They have a decent free tier as well, so it’s cost-effective when you’re just getting things up and running.


Wow, that’s huge! Thanks, mate)

Personally, I store images outside the Meteor project (or directly in the DB) and create a server route to access them from the client.
Advantages: no need to know how nginx works, and it’s quick to set up when you’re a developer.
Disadvantages: each call on those routes consumes Node.js resources, so it occupies the event loop, which is really unnecessary. There’s no need to use Node.js here…

I did this because I’m not good at system operations, but two years ago the community was not so big :wink: and I needed to get going quickly

@nathan_muir What are the advantages of not storing data locally? The APIs that cloud services provide aren’t required when storing files locally, because the fs module already does much more than those APIs can. I’m not able to see the advantages of storing files in the cloud. I’m building an app in which I store files locally, and your post is making me re-evaluate that decision. It’d be great if you could elaborate on cloud storage providers vs. storing files locally.

When you’re small, your main concern is backups in the following scenarios:

  • External disaster
    • server destroyed
    • hard-drive failed etc.
  • Bug / Security
    • data maliciously / accidentally wiped
    • backups poisoned or corrupted.

These problems are solved well enough, for most filesystems. (eg, off-site backup cron-jobs)

When you get a bit bigger you have to worry about:

  • Server Uptime / Availability
    • eg, what happens if the machine with that user’s pictures goes down?
  • How to share data between lots of servers on separate machines?
    • eg, the web server hosts ‘users/images/’, but a separate processing server needs to make thumbnails.

If you build your app around fs commands, you’ll need to start adding these things in later, and developing your own solutions. OR, just accept failure, and notify your users of data-loss or downtime, and restrict how your application scales.

So, it’s worth planning for that from the start.


Addendum:

If you’re running your service in the cloud, the idea is that no individual machine should really matter. Think of your servers as “cattle” instead of “pets”.

This causes a few problems for standard storage solutions - if you store your files on the local disk, losing the machine matters. You’ve just made a “pet”.

Put them on a specialised storage “service”… and you’ve now got “cattle”.

So it goes - Your machines should contain nothing but the latest copy of your source / configuration.

Aside - this is solved for databases (the other type of on-disk storage), by “streaming replication” or similar features.

Thanks, that makes a lot of sense.