Meteor secure file upload/download with S3

Hi, I’m working on a Meteor app, and would like to securely upload/download files with authentication backed by Amazon S3.

Slingshot seems to be the standard way to do authorized uploads. Ok.

I’m looking for a way to secure the downloads. A secure hash in the URL would be technically secure, but users may not be happy with their data on a ‘public’ link that never expires.

One idea is to serve the files over HTTP routes, but Meteor's WebApp and Picker don't seem to support authentication.

This server-side package could be used to make download URLs like this:

https://example.com/file/12356.pdf?_userId=oPwEZMPDWmdoSCQSi&_token=m72JpzQeQSeNs9cTx

This could stream the file or redirect to a pre-signed AWS link for the currently signed-in user.
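
For illustration, here's a rough sketch of what such an authenticated route could look like with WebApp.connectHandlers: it checks the token against Meteor's hashed resume tokens (Accounts._hashLoginToken is an internal API) and then redirects to a short-lived pre-signed link. The /file prefix and the signUrl() helper are placeholders, not part of any package:

import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';
import { Accounts } from 'meteor/accounts-base';

WebApp.connectHandlers.use('/file', Meteor.bindEnvironment((req, res) => {
  const url = new URL(req.url, 'http://placeholder');
  const userId = url.searchParams.get('_userId');
  const token = url.searchParams.get('_token');

  // Validate the token the same way resume login does (internal API, may change)
  const user = userId && token && Meteor.users.findOne({
    _id: userId,
    'services.resume.loginTokens.hashedToken': Accounts._hashLoginToken(token),
  });

  if (!user) {
    res.writeHead(403);
    return res.end('Forbidden');
  }

  // signUrl() is a placeholder: look up the S3 key for this path/user
  // and return a short-lived pre-signed URL, then redirect to it
  res.writeHead(302, { Location: signUrl(url.pathname, user._id) });
  res.end();
}));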

Another idea is to stream the file over DDP via a Meteor method, then tell the browser to download it. This makes previewing the file harder.
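
A rough sketch of that method-based approach (server method plus client call), with placeholder bucket, key, and settings names; note it loads the whole file into memory on the server before sending it over DDP:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import S3 from 'aws-sdk/clients/s3';

// Server: fetch the object and return its bytes over DDP (EJSON binary).
// Region/credentials/bucket handling here are placeholders.
const s3 = new S3({ region: Meteor.settings.AWSRegion });
const getObjectSync = Meteor.wrapAsync(s3.getObject, s3);

Meteor.methods({
  'files.fetch'(key) {
    check(key, String);
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    const { Body } = getObjectSync({ Bucket: 'my-bucket', Key: key });
    return Body; // a Buffer; arrives on the client as a Uint8Array
  }
});

// Client: turn the bytes into a Blob and tell the browser to download it
Meteor.call('files.fetch', 'some/key.pdf', (error, data) => {
  if (error) return console.error(error);
  const blob = new Blob([data], { type: 'application/pdf' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'key.pdf';
  link.click();
  URL.revokeObjectURL(link.href);
});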

Another idea is to send presigned AWS links over DDP via a Meteor method.

Another idea would be to add pre-signed links by transforming the document with Cursor.observe() in a publish function (e.g. Meteor.publishComposite). It looked like this could handle async work. It would be slow, but it's only to publish one document.
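
For what it's worth, a rough sketch of that kind of transform using a plain low-level publication with Cursor.observe(); the Files collection, its s3Key field, the 'files' client collection name, and the signUrl() helper are all placeholders:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';

Meteor.publish('fileWithSignedUrl', function (fileId) {
  check(fileId, String);
  if (!this.userId) {
    return this.ready();
  }

  // Files, s3Key and signUrl() are hypothetical placeholders
  const handle = Files.find({ _id: fileId, userId: this.userId }).observe({
    added: ({ _id, ...fields }) => {
      this.added('files', _id, { ...fields, signedUrl: signUrl(fields.s3Key) });
    },
    changed: ({ _id, ...fields }) => {
      this.changed('files', _id, { ...fields, signedUrl: signUrl(fields.s3Key) });
    },
    removed: ({ _id }) => {
      this.removed('files', _id);
    },
  });

  this.onStop(() => handle.stop());
  this.ready();
});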

Hmmm… each of these seems like a long way to go, so I wrote this.

How did you do it? Is there something I missed? What’s the simplest thing that could possibly work?

Thanks in advance!

Mike

I’d like to know how to download files from S3 myself. I looked into it a bit but found no good sources.

1 Like

Have you tried Meteor Files yet? It has everything you'll ever need, from secure uploads and streaming to easy downloads. You can test its functionalities here.

5 Likes

Wow, thank you @martineboh, great resource here!

1 Like

@martineboh I looked at Meteor Files, and it was heavy for my needs (DDP/WebRTC uploads). It looked like Slingshot could do the same job with less code.

Also, I wanted to store the file metadata in my other collections instead of the FilesCollection. Here's an interesting discussion about Meteor Files' next version.

But thanks for the recommendation! Maybe I missed something re: securing downloads. That's the part I'm trying to find the best strategy for.

2 Likes

I'm currently using Slingshot for uploads; did you find a way to do downloads with it too?

1 Like

I did it using signed urls. Pretty simple:

import { Meteor } from 'meteor/meteor';
import { AWS } from 'meteor/peerlibrary:aws-sdk';

AWS.config.update({
  region: Meteor.settings.AWSRegion,
  accessKeyId: Meteor.settings.AWSAccessKeyId,
  secretAccessKey: Meteor.settings.AWSSecretAccessKey,
});

// Create the S3 client after the global config has been set
const s3 = new AWS.S3();

const url = s3.getSignedUrlSync('getObject', {
  Bucket: `${bucket}`,
  Key: `${key}`,
  Expires: 90 // seconds
});
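
To make sure only the signed-in (and correct) user can get a link, one option is to hand the URL out from a Meteor method after an ownership check. A rough sketch building on the s3 object above, where the Files collection, its userId/s3Key fields, and Meteor.settings.AWSBucket are placeholders:

import { check } from 'meteor/check';

Meteor.methods({
  'files.downloadUrl'(fileId) {
    check(fileId, String);
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }

    // Placeholder ownership check: only sign keys the current user owns
    const file = Files.findOne({ _id: fileId, userId: this.userId });
    if (!file) {
      throw new Meteor.Error('not-found');
    }

    return s3.getSignedUrlSync('getObject', {
      Bucket: Meteor.settings.AWSBucket,
      Key: file.s3Key,
      Expires: 90 // seconds
    });
  }
});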
5 Likes

Thanks, I’ll check it out.

So with this, users will only be able to view S3 urls if they’re logged into the app (and the correct user)?

Personally, I recommend Cloudinary; it has a Meteor plugin and is super easy and SUPER useful for image and media hosting.

Don't use Meteor Files on Galaxy; they don't give you disk space (/tmp is limited to 500 MB and resets on every update).

I meant more along the lines of securing the files on S3. I already have S3 set up.

The easiest thing to do is just lock down uploads, then set it so anybody can view the URLs (set them to "public").

I’m wondering how you make them private, so that somebody has to be logged in to view the image at the given URL. I’m guessing there is some kind of token system.

An example use case (not mine) would be a medical platform where you are uploading sensitive PDFs. You only want people who are logged in to see them, but you want to host them on S3 like any other media. You could even take that a step further and only let people see PDFs on S3 that match their specific doctor's office, or only let them see their own medical records, etc.

Is this easy/possible with Slingshot? I've never actually looked into it.

2 Likes

Just letting you know that you can store images in the database, and have a route serve that data AS a file.

So like mysite.com/file/hello-goodbye.png

It then fetches that data stored in MongoDB and serves it with the appropriate MIME type.
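
For example, a minimal sketch of such a route using WebApp.connectHandlers, where the Images collection and its name/contentType/data fields are placeholders:

import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection holding { name, contentType, data } documents,
// where `data` is the raw file content (e.g. a Uint8Array / Buffer)
const Images = new Mongo.Collection('images');

WebApp.connectHandlers.use('/file', Meteor.bindEnvironment((req, res) => {
  // e.g. /file/hello-goodbye.png -> "hello-goodbye.png"
  const name = decodeURIComponent(req.url.split('?')[0].replace(/^\//, ''));
  const image = Images.findOne({ name });

  if (!image) {
    res.writeHead(404);
    return res.end();
  }

  res.writeHead(200, { 'Content-Type': image.contentType });
  res.end(Buffer.from(image.data));
}));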

Cheers!

2 Likes

The signed URL approach can be used to achieve this. If you read up on signed URLs, you should be able to make it work.

Hi martineboh,

I am trying to use Meteor Files. I want to store my files in an S3 bucket.

The problem I am facing is this: I am able to store the video in a local directory; later it seems to upload the file to S3 (which is not working), but since it does not get any error it unlinks the video, so the video is deleted from the directory. This way I am losing the data. Following is my code:

Server/videoupload.js

import { Meteor } from 'meteor/meteor';
import { _ } from 'meteor/underscore';
import { Random } from 'meteor/random';
import { FilesCollection } from 'meteor/ostrio:files';
import stream from 'stream';

import S3 from 'aws-sdk/clients/s3'; // http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
// See fs-extra and graceful-fs NPM packages
// For better i/o performance
import fs from 'fs';

process.env.S3 = '{"s3": {"key": "xxx", "secret": "xxx", "bucket": "xxx", "region": "xxx"}}';

if (process.env.S3) {
  Meteor.settings.s3 = JSON.parse(process.env.S3).s3;
}
// console.log(Meteor.settings.s3);
const s3Conf = Meteor.settings.s3 || {};
const bound = Meteor.bindEnvironment((callback) => {
  return callback();
});

// Check settings existence in Meteor.settings
// This is the best practice for app security
// Declared with `let` so it can be assigned inside the `if` block below
// and still be exported from module scope
export let BizVideo;

if (s3Conf && s3Conf.key && s3Conf.secret && s3Conf.bucket && s3Conf.region) {
// Create a new S3 object
const s3 = new S3({
  secretAccessKey: s3Conf.secret,
  accessKeyId: s3Conf.key,
  region: s3Conf.region,
  // sslEnabled: true, // optional
  httpOptions: {
    timeout: 60000,
    agent: false
  }
});

// console.log('s3: ', s3);
// Declare the Meteor file collection on the Server
BizVideo = new FilesCollection({
    debug: true, // Change to `true` for debugging
    storagePath: 'bizVideos',
    collectionName: 'bussVideo',
    // Disallow Client to execute remove, use the Meteor.method
    allowClientCode: false,
    chunkSize: 1024 * 1024,

    // Start moving files to AWS:S3
    // after fully received by the Meteor server
    onAfterUpload(fileRef) {
        // Run through each of the uploaded file
        // console.log("fileRef2: ", fileRef);
        _.each(fileRef.versions, (vRef, version) => {
            // We use Random.id() instead of real file's _id
            // to secure files from reverse engineering on the AWS client
            const filePath = 'files/' + (Random.id()) + '-' + version + '.' + fileRef.extension;
            console.log("filePath: ", filePath);

            // Create the AWS:S3 object.
            // Feel free to change the storage class from, see the documentation,
            // `STANDARD_IA` is the best deal for low access files.
            // Key is the file name we are creating on AWS:S3, so it will be like files/XXXXXXXXXXXXXXXXX-original.XXXX
            // Body is the file stream we are sending to AWS
            s3.putObject({
                // ServerSideEncryption: 'AES256', // Optional
                StorageClass : 'STANDARD_IA',
                Bucket       : s3Conf.bucket,         //s3Conf.bucket,
                Key          : filePath,
                Body         : fs.createReadStream(vRef.path),
                ContentType  : vRef.type,
            }, (error) => {
                // console.log("error: ", error);
                bound(() => {
                    if (error) {
                        console.error(error);
                    } else {
                        // Update FilesCollection with link to the file at AWS
                        const upd = { $set: {} };
                        upd['$set']['versions.' + version + '.meta.pipePath'] = filePath;
                        console.log("upd: ", upd);

                        this.collection.update({
                            _id: fileRef._id
                        }, upd, (updError) => {
                            if (updError) {
                                // console.log("updError: ", updError);
                                console.error(updError);
                            } else {
                                // Unlink original files from FS after successful upload to AWS:S3
                                console.log("unlink: ", fileRef._id);
                                this.unlink(this.collection.findOne(fileRef._id), version);
                            }
                        });
                    }
                });
            });
        });
    },


    // Intercept access to the file
    // And redirect request to AWS:S3
    interceptDownload(http, fileRef, version) {
        console.log('interceptDownload');
        let path;

        if (fileRef && fileRef.versions && fileRef.versions[version] && fileRef.versions[version].meta && fileRef.versions[version].meta.pipePath) {
            path = fileRef.versions[version].meta.pipePath;
        }

        if (path) {
            console.log('path ',path);
            // If file is successfully moved to AWS:S3
            // We will pipe request to AWS:S3
            // So, original link will stay always secure

            // To force ?play and ?download parameters
            // and to keep original file name, content-type,
            // content-disposition, chunked "streaming" and cache-control
            // we're using low-level .serve() method
            const opts = {
                Bucket: s3Conf.bucket,
                Key: path
            };

            if (http.request.headers.range) {
                const vRef  = fileRef.versions[version];
                let range   = _.clone(http.request.headers.range);
                const array = range.split(/bytes=([0-9]*)-([0-9]*)/);
                const start = parseInt(array[1]);
                let end = parseInt(array[2]);
                if (isNaN(end)) {
                    // Request data from AWS:S3 by small chunks
                    end = (start + this.chunkSize) - 1;
                    if (end >= vRef.size) {
                        end = vRef.size - 1;
                    }
                }
                opts.Range = `bytes=${start}-${end}`;
                http.request.headers.range = `bytes=${start}-${end}`;
            }

            const fileColl = this;
            s3.getObject(opts, function(error) {
                if (error) {
                    console.error(error);
                    if (!http.response.finished) {
                        http.response.end();
                    }
                } else {
                    // Successfully fetched the object from AWS:S3; serve it below
                    if (http.request.headers.range && this.httpResponse.headers['content-range']) {
                        // Set proper range header in according to what is returned from AWS:S3
                        http.request.headers.range = this.httpResponse.headers['content-range'].split('/')[0].replace('bytes ', 'bytes=');
                    }

                    const dataStream = new stream.PassThrough();
                    fileColl.serve(http, fileRef, fileRef.versions[version], version, dataStream);
                    dataStream.end(this.data.Body);
                }
            });

            return true;
        }
        // While file is not yet uploaded to AWS:S3
        // It will be served file from FS
        return false;
    }
});

// Intercept FilesCollection's remove method to remove file from AWS:S3
const _origRemove = BizVideo.remove;
BizVideo.remove = function(search) {
    const cursor = this.collection.find(search);
    cursor.forEach((fileRef) => {
        _.each(fileRef.versions, (vRef) => {
            if (vRef && vRef.meta && vRef.meta.pipePath) {
                // Remove the object from AWS:S3 first, then we will call the original FilesCollection remove
                s3.deleteObject({
                    Bucket: s3Conf.bucket,
                    Key: vRef.meta.pipePath,
                }, (error) => {
                    bound(() => {
                        if (error) {
                            console.error(error);
                        }
                    });
                });
            }
        });
    });

    //remove original file from database
    _origRemove.call(this, search);
};

} else {
  throw new Meteor.Error(401, 'Missing Meteor file settings');
}

client/videoupload.js

import { Meteor } from 'meteor/meteor';
import { FilesCollection } from 'meteor/ostrio:files';

export const BizVideo = new FilesCollection({
  collectionName: 'bussVideo',
  allowClientCode: false,
  chunkSize: 1024 * 1024
});

The file where I have implemented it:

import { Meteor } from 'meteor/meteor';
import { Template } from 'meteor/templating';
import { ReactiveVar } from 'meteor/reactive-var';
import { Bert } from 'meteor/themeteorchef:bert';
import { BizVideo } from '/imports/videoUploadClient/videoUpload.js';
// FlowRouter and the Business collection are assumed to be available
// from wherever they are defined/imported elsewhere in the app

var uploader = new ReactiveVar();

Template.vendorImagesVideos.onCreated(function() {
  this.currentUpload = new ReactiveVar(false);
  this.subscribe('getBizVideo');
});

Template.vendorImagesVideos.helpers({
  currentUpload: function() {
    return Template.instance().currentUpload.get();
  },

  files: function() {
    var businessLink = FlowRouter.getParam('businessLink');
    var bussData = Business.findOne({ "businessLink": businessLink });
    if (bussData) {
      var data = BizVideo.find({ "_id": bussData.businessVideo }).fetch();
      return data;
    }
  },
});

Template.vendorImagesVideos.events({
  'change #fileInput'(e, template) {
    if (e.currentTarget.files && e.currentTarget.files[0]) {
      var businessLink = FlowRouter.getParam('businessLink');
      var bussData = Business.findOne({ "businessLink": businessLink });
      if (bussData.businessVideo) {
        Bert.alert('Only one video can be uploaded', 'danger', 'growl-top-right');
      } else {
        // We upload only one file, in case
        // multiple files were selected
        const upload = BizVideo.insert({
          file: e.currentTarget.files[0],
          streams: 'dynamic',
          chunkSize: 'dynamic'
        }, false);

        upload.on('start', function () {
          template.currentUpload.set(this);
        });

        upload.on('end', function (error, fileObj) {
          if (error) {
            alert('Error during upload: ' + error);
          } else {
            alert('File "' + fileObj._id + '" successfully uploaded');
            Meteor.call("updateVendorBulkVideo", businessLink, fileObj._id,
              function(error, result) {
                if (error) {
                  console.log('Error Message: ' + error);
                } else {
                  // process.exit();
                }
              });
          }
          template.currentUpload.set(false);
        });

        upload.start();
      }
    }
  },
});

Server/main.js

import { Meteor } from 'meteor/meteor';
import { BizVideo } from '/imports/videoUploadserver/videoUpload.js';

Meteor.publish('getBizVideo', function() {
  return BizVideo.find().cursor;
});

I am able to see the video as long as it is in the directory.
Please let me know where I am going wrong. Thanks in advance.