When you’re small, your main storage concern is backups, for scenarios like:
- External disaster
  - server destroyed
  - hard drive failed, etc.
- Bug / Security
  - data maliciously or accidentally wiped
  - backups poisoned or corrupted
These problems are solved well enough for most filesystems - e.g. an off-site backup cron job, like the sketch below.
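For illustration, a minimal version of that kind of job in Python; the data directory and off-site host are made up, so swap in your own:

```python
#!/usr/bin/env python3
"""Minimal off-site backup sketch. DATA_DIR and BACKUP_HOST are
hypothetical placeholders."""
import datetime
import subprocess
import tarfile

DATA_DIR = "/var/app/users"         # hypothetical app data directory
BACKUP_HOST = "backup.example.com"  # hypothetical off-site host

def backup() -> None:
    stamp = datetime.date.today().isoformat()
    archive = f"/tmp/users-{stamp}.tar.gz"
    # Bundle the data directory into a dated archive...
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(DATA_DIR, arcname="users")
    # ...then ship it off-site; scp is the simplest possible transport.
    subprocess.run(["scp", archive, f"{BACKUP_HOST}:/backups/"], check=True)

if __name__ == "__main__":
    backup()  # e.g. run nightly from cron: 0 3 * * * /usr/local/bin/backup.py
```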
When you get a bit bigger you have to worry about:
- Server uptime / availability
  - e.g. what happens if the machine holding that user’s pictures goes down?
- Sharing data between servers on separate machines
  - e.g. the web server hosts ‘users/images/’, but a separate processing server needs to make thumbnails (sketched below).
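To make that second problem concrete, here’s a sketch; the paths and the split across machines are hypothetical:

```python
# Web server process (machine A): saves each upload to its own disk.
from pathlib import Path

UPLOAD_DIR = Path("/var/app/users/images")  # hypothetical local path

def save_upload(user_id: str, filename: str, data: bytes) -> Path:
    dest = UPLOAD_DIR / user_id / filename
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(data)
    return dest  # this path only means anything on machine A

# Thumbnail worker (machine B): the same path simply doesn't exist here,
# so open() raises FileNotFoundError. You end up bolting on NFS mounts,
# rsync jobs, or a hand-rolled sync layer to paper over the gap.
```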
If you build your app around fs commands, you’ll need to start bolting these things on later and developing your own solutions. Or, just accept failure: notify your users of data loss or downtime, and restrict how your application scales.
If you’re running your service in the cloud, the idea is that no individual machine should really matter. Think of your servers as “cattle” instead of “pets”.
This causes a few problems for standard storage solutions - if you store your files on the local disk, losing the machine matters. You’ve just made a “pet”.
Put them on a specialised storage “service”… and you’ve now got “cattle”.
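As a sketch of the difference, here’s the same upload written against an S3-style object store via boto3; the bucket name is made up, and any S3-compatible service would work the same way:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-user-images"  # hypothetical bucket name

def save_upload(user_id: str, filename: str, data: bytes) -> str:
    # The object lives in the storage service, not on this machine.
    key = f"users/{user_id}/images/{filename}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    return key

def load_upload(key: str) -> bytes:
    # Any machine with credentials can fetch it - including the
    # thumbnail worker, and the replacement web server you spin up
    # after the original one dies.
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
```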
So it goes - your machines should contain nothing but the latest copy of your source / configuration.
Aside - this is solved for databases (the other kind of on-disk storage) by “streaming replication” or similar features.