Large files (1-5 GB) with CollectionFS, Ubuntu 14.04, Nginx

Hello everybody,
has anybody experienced RAM issues when uploading and downloading large files at the same time? I am using CollectionFS to upload several files (1-5 GB), and as soon as any of them has finished uploading, an API on a different server (Kaltura) starts downloading it from the CollectionFS download link. I am using the default 2 MB chunk size and NGINX as a proxy in front of the Meteor app. I can see timeout errors on the NGINX upstream, and the Ubuntu server's RAM is maxed out. When I increased the timeout settings, the requests just hung. Thanks for any suggestions.
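For reference, here is a minimal sketch of the proxy-side settings that usually matter for large uploads through NGINX. The server name, app port, path, and timeout values are placeholders for illustration, not taken from the post:

```nginx
# Hypothetical nginx excerpt -- names, paths, and values are placeholders.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;        # assumed Meteor app port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # keep websockets working
        proxy_set_header Connection "upgrade";

        client_max_body_size 5g;        # allow request bodies up to 5 GB
        proxy_request_buffering off;    # stream uploads instead of buffering them
        proxy_read_timeout 300s;        # give slow uploads/downloads more time
        proxy_send_timeout 300s;
    }
}
```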
Z.

I’ve never used CollectionFS with files that big, but I would use Slingshot instead (with S3, though that’s just a preference), because with Slingshot the user uploads directly to the cloud without going through the Meteor process. You still have access control, permissions and so on, but without the overhead of sending the file to Meteor only to forward it to another storage service.
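To illustrate the idea, a minimal Slingshot-style sketch; the directive name, bucket, size limit, and key layout are made up for the example, so check the edgee:slingshot docs for the exact options:

```javascript
// Server: a hypothetical directive authorizing direct-to-S3 uploads.
Slingshot.fileRestrictions("videoUploads", {
  allowedFileTypes: ["video/mp4", "video/quicktime"],
  maxSize: 5 * 1024 * 1024 * 1024          // 5 GB, example limit
});

Slingshot.createDirective("videoUploads", Slingshot.S3Storage, {
  bucket: "my-video-bucket",               // placeholder bucket name
  acl: "private",
  authorize: function () {
    return !!this.userId;                  // only logged-in users may upload
  },
  key: function (file) {
    return this.userId + "/" + file.name;  // example key layout
  }
});

// Client: upload straight to S3, bypassing the Meteor server.
var uploader = new Slingshot.Upload("videoUploads");
uploader.send(file, function (error, downloadUrl) {
  if (!error) {
    console.log("Uploaded to", downloadUrl); // hand this URL to Kaltura
  }
});
```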

Hi Fermuch,
thanks for the idea, a direct upload should definitely help. We tried a client-side chunked upload with jQuery to get the files to Kaltura directly, but couldn’t get it working in Meteor. I will look into Slingshot or other packages to see whether that is possible with Kaltura.
Thanks,
Z.

Eventually we resolved the issue by using CollectionFS only for uploads; downloads are handled separately by NGINX, which serves the stored files as static content. This setup works nicely even for large files.
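For anyone hitting the same problem, a minimal sketch of what such a split can look like on the NGINX side; the storage path, location prefix, and port are assumptions, since the post doesn’t give them:

```nginx
# Hypothetical: uploads still go through Meteor/CollectionFS,
# but downloads are served directly from the filesystem store.
server {
    listen 80;
    server_name example.com;

    # Serve uploaded files as static content, bypassing Meteor entirely.
    location /files/ {
        alias /var/meteor/cfs-files/;   # assumed path of the filesystem store
        sendfile on;                    # let the kernel stream the file
        tcp_nopush on;
    }

    # Everything else (including uploads) goes to the Meteor app.
    location / {
        proxy_pass http://127.0.0.1:3000;   # assumed Meteor port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 5g;
    }
}
```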