edgee:slingshot and security

I’ve built a Meteor web application and included slingshot to handle certain document uploads. The information that will be uploaded is rather sensitive. How do I encrypt/decrypt the uploaded files and also how do I stop people from accessing the files from S3 except through the application?

Any help appreciated!

1 Like

this is interesting… I was looking for something similar

Encryption: You can encrypt the files on the client, before upload, using a JavaScript encryption library. You would have to do this before handing the file to Slingshot, so Slingshot just receives the encrypted file and proceeds with that. S3 then knows nothing about the file's contents at all.

So to be clear: your Meteor app can store the key used to encrypt the file, for example in a database or a dedicated key store. S3 only ever receives an encrypted file which it cannot decrypt; you handle all encryption and decryption yourself.

S3 also has the ability to encrypt files in a bucket, so if you trust AWS with your data, they can handle the encryption on their servers. That's a simpler way. S3 also supports client-side encryption, but I have not tested that yet: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
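For the server-side option, you can enable default encryption on the bucket so every new object is encrypted at rest without changing the upload code at all. The bucket-encryption configuration (as sent to the `PutBucketEncryption` API, e.g. via `aws s3api put-bucket-encryption`) looks roughly like this:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```

`AES256` selects SSE-S3 (AWS-managed keys); use `aws:kms` instead if you want keys managed through KMS. Note that with server-side encryption, AWS holds the keys, so this protects against disk-level compromise rather than against AWS itself.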

To download: Secure the bucket so only signed URLs are allowed. Then:

We use https://github.com/peerlibrary/meteor-aws-sdk to generate a signed URL. For an example, see the getSignedUrl part of:

http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html

You can do this in a Meteor method; that way the client ends up with a link that allows access to the file for a limited amount of time.

@lucfranken thanks for the direction. I’ve come across this in my research as well. Probably going to have to use a mix of those.

Yes, sure, you can mix them. If you use Slingshot you get parts for free which you would otherwise have to implement yourself.

It looks like the default settings of Slingshot give public access to your bucket. Be sure to test that that's not possible.

@lucfranken there is a method to use temporary keys via the SDK to upload through Slingshot.

1 Like

Hey there.

I’ve gotten the SDK to work and am getting successful responses from AWS but still getting an error on upload -

XMLHttpRequest cannot load https://mybucket.s3-eu-west-1.amazonaws.com/. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 403.

Do you still need a CORS or bucket policy when using temporary credentials?

Yes you need both! …

Great, got it working. Thanks for all your assistance!

For anyone looking for the answer — I've set up CORS, but limited it to the domain:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://localhost:3000</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

This ensures that browsers can only upload from this domain (note that CORS only restricts browsers, not arbitrary HTTP clients — the signed request is what actually gates uploads). To keep the files private, you have to set the ACL in the Slingshot directive to private:

/* Init STS */

const sts = new AWS.STS(); // Using the AWS SDK to retrieve temporary credentials.

Slingshot.createDirective('documentUploads', Slingshot.S3Storage.TempCredentials, {
	bucket: Meteor.settings.AWSBucket,
	acl: 'private',
	temporaryCredentials: Meteor.wrapAsync(function (expire, callback) {
		// AWS dictates that the minimum duration must be 900 seconds:
		const duration = Math.max(Math.round(expire / 1000), 900);

		sts.getSessionToken({
			DurationSeconds: duration
		}, function (error, result) {
			callback(error, result && result.Credentials);
		});
	}),
	authorize: function () {
		if (!this.userId) {
			throw new Meteor.Error(403, 'Unauthorised. Please login.');
		}

		return true;
	},
	key: function (file) {
		const user = Meteor.users.findOne(this.userId);
		return user.username + '/' + new Date().getTime() + '_' + file.name;
	}
});

This ensures that nobody can access the bucket unless they have valid (temporary) credentials. To download the uploaded files, you have to get a signed URL from AWS:

// Client / Blaze template events
'click .docDownload': function (event, instance) {
	// Prevent the default link navigation
	event.preventDefault();

	// Open the window synchronously, inside the click handler, so the
	// browser's popup blocker doesn't swallow it; its location is set later.
	let win = window.open('');

	// Grab the key from the clicked link
	const $this = $(event.target);
	const url = $this.attr('href'); // Holds the full unsigned URL

	// Ask the server to exchange it for a time-limited signed URL
	Meteor.call('getSignedUrl', url, function (err, result) {
		if (!err) {
			win.location = result;
			win.focus();
		}
	});
}
3 Likes

@ashrafs thanks for posting what you used to get it working. Out of curiosity, what did you do for the bucket policy so that app users can read and download the files?

Good ol' bucket policies…

I’ve been on it for unrelated reasons for the last two days. Snippets here may help: https://stackoverflow.com/questions/52846769/solved-cannot-getobject-from-s3-bucket-policy-not-working

To summarise:

  • Keep the bucket private rather than public
  • Create an IAM user that has read permissions for the bucket
  • Use those credentials along with the AWS SDK and create a method. The method can handle the logic to determine whether the user is authorised and, if so, create and return a signed URL that allows the user to view/download the object. This has the Slingshot-like benefit of allowing a direct connection between client and S3 without clogging up your servers.
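Not the poster's exact policy, but a minimal IAM policy along those lines might look like this (the bucket name is a placeholder); attach it to the IAM user whose credentials generate the signed URLs, while the bucket itself stays private:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadForSignedUrls",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

Because the signed URL is generated with this user's credentials, it inherits exactly these permissions — anyone holding the URL can `GetObject` on that one key until the URL expires, and nothing more.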