S3 file upload - is Slingshot still usable? Alternatives?

I suggest you create a method on the server that uses the Minio client to upload files to S3; this way you can keep your keys private.

The workflow would be as follows:

  • Create a client function to handle the file selection: read the selected file as base64, then pass it to the server via Meteor.call

  • Create a server method and use the Minio client to handle the upload (see the sketch after this list).
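
For illustration, here is a rough sketch of that server method. It assumes the minio npm package and settings keys I made up (Meteor.settings.private.s3.*, a 'files.upload' method name); adapt the names to your project:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import * as Minio from 'minio';

// Hypothetical settings path; use whatever your project already defines.
const s3Settings = Meteor.settings.private.s3;

const minioClient = new Minio.Client({
  endPoint: 's3.amazonaws.com',
  useSSL: true,
  accessKey: s3Settings.awsAccessKey,
  secretKey: s3Settings.awsSecretKey
});

Meteor.methods({
  async 'files.upload'(fileName, base64Data) {
    check(fileName, String);
    check(base64Data, String);
    // The client sends the file content as a base64 string via Meteor.call.
    const buffer = Buffer.from(base64Data, 'base64');
    // putObject writes the buffer to the bucket; the keys never leave the server.
    await minioClient.putObject(s3Settings.bucket, fileName, buffer);
    return `https://${s3Settings.bucket}.s3.amazonaws.com/${fileName}`;
  }
});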

1 Like

Sign uploads on the server and upload from the client: https://github.com/activitree/s3up-meta

1 Like

Thanks for the clarification. If we are passing the base64 to the server, doesn't it defeat the purpose of a direct upload from the client?

You can definitely upload from the client, but your AWS keys will be exposed to the public. If you're fine with that, then go ahead.

When you publish your project, your code will be minified and your keys will be hard to spot.

I'm working on a project designed for < 50 users, so that was my approach.

Alternatively, maybe what you want is to call a server method that generates a short-lived pre-signed URL, which the client can then use to upload directly (see the sketch below).
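
As a rough sketch of that approach (method and settings names are my own, using the aws-sdk v2 getSignedUrl helper):

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import AWS from 'aws-sdk';

// Hypothetical settings structure; keep the secret key in private settings.
const { awsAccessKey, awsSecretKey, awsRegion, bucket } = Meteor.settings.private.s3;

const s3 = new AWS.S3({
  accessKeyId: awsAccessKey,
  secretAccessKey: awsSecretKey,
  region: awsRegion,
  signatureVersion: 'v4'
});

Meteor.methods({
  'files.getUploadUrl'(key, contentType) {
    check(key, String);
    check(contentType, String);
    // Short-lived URL: the client PUTs the file straight to S3 with it,
    // and the AWS keys never leave the server.
    return s3.getSignedUrl('putObject', {
      Bucket: bucket,
      Key: key,
      ContentType: contentType,
      Expires: 60 // seconds
    });
  }
});

On the client you would call the method, then upload with something like fetch(url, { method: 'PUT', body: file, headers: { 'Content-Type': file.type } }).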

1 Like

Can't afford to expose the AWS key, especially for file upload, and I would really advise against doing so whatever the project size is.

Things can get very costly :slight_smile:

The other solutions are great as they use the AWS key to get the signed URL on the server side and give it to the client, who can then upload. So I'll go for one of the other proposed solutions I guess. Thanks a lot though!

2 Likes

Your package could do the trick as well I guess; I think you mentioned it on Stack Overflow too. It's just always a bit worrying to use a package with so little "fame" on a fairly big project; it feels less safe, even though that may be a mistake. I'll have a look at it too.

Also, I think you should just publish it on npm.

Hi @rjdavid,

I'll try implementing it in the next few days, though the documentation is quite light (the API is there but not many examples) and I'm not sure which method to use where. I may need your help in the near future, but I'll describe my progress here and, if it succeeds, write a little "how to: Meteor / EvaporateJS". If you have a mini code sample of which method you use where (a method to get the signed URL on the server, after setting up the config on the server as well? passing it back to the client and then uploading the file on the client side using the signed URL?), it would surely ease the process :smiley:

Sorry, but I do not have a mini code sample. Our code is part of a more complex component handling all our uploads. But the process should be simple: when you load the component for file selection, you also prepare the config for EvaporateJS. During this time, you send the signing parameters to the server, the server signs them, and you include the signing function in the parameters you pass to Evaporate.

Did a quick check and I was able to extract this:

// Client-side signing callback passed to Evaporate as customAuthMethod.
// Evaporate calls it with the string to sign; we forward that to a Meteor
// method on the server and resolve with the returned signature.
getSignature = (signParams, signHeaders, stringToSign, signatureDateTime) => {
  return new Promise((resolve, reject) => {
    const inputs = {
      stringToSign,
      signatureDateTime
    };

    methods['aws.get.signed.url'].call(inputs, (error, result) => {
      if (error) {
        reject(error);
      } else if (result) {
        resolve(result);
      }
    });
  });
};

When creating the Evaporate instance, there is a customAuthMethod option to which you pass this function.
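
For context, this is roughly how that wiring might look on the client, assuming Evaporate v2 option names and the aws-sdk crypto helpers (the public settings keys are placeholders of mine):

import Evaporate from 'evaporate';
import AWS from 'aws-sdk'; // only used here for its md5/sha256 helpers

const evaporatePromise = Evaporate.create({
  aws_key: Meteor.settings.public.s3.awsAccessKey, // public access key id, never the secret
  bucket: Meteor.settings.public.s3.bucket,
  awsRegion: Meteor.settings.public.s3.awsRegion,
  awsSignatureVersion: '4',
  computeContentMd5: true,
  cryptoMd5Method: data => AWS.util.crypto.md5(data, 'base64'),
  cryptoHexEncodedHash256: data => AWS.util.crypto.sha256(data, 'hex'),
  customAuthMethod: getSignature // the function above; the secret stays on the server
});

// Later, when the user picks a file:
evaporatePromise.then(evaporate =>
  evaporate.add({
    name: `p/${Meteor.userId()}/${file.name}`, // object key in the bucket
    file // the File object from the input
  })
);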

Here is the sample server code to create the signature

// AWS Signature Version 4: derive the signing key from the secret key,
// date, region and service, then sign the string Evaporate sent us.
const hmac = (key, string) => {
  const chmac = crypto.createHmac('sha256', key);
  chmac.end(string);
  return chmac.read();
};

const { stringToSign, signatureDateTime } = params;

const secretKey = Meteor.settings.private.s3.awsSecretKey;
const dateStamp = signatureDateTime.substr(0, 8); // YYYYMMDD
const regionName = Meteor.settings.public.s3.awsRegion;
const serviceName = 's3';

// Key derivation chain defined by SigV4.
const kDate = hmac(`AWS4${secretKey}`, dateStamp);
const kRegion = hmac(kDate, regionName);
const kService = hmac(kRegion, serviceName);
const kSigning = hmac(kService, 'aws4_request');

const signature = hmac(
  kSigning,
  decodeURIComponent(stringToSign)
).toString('hex');

return signature;
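
For completeness, a sketch of how that snippet might be exposed as the 'aws.get.signed.url' method the client calls above. I am guessing mdg:validated-method here, since the client uses methods['aws.get.signed.url'].call(...), but a plain Meteor method works the same way:

import { Meteor } from 'meteor/meteor';
import { ValidatedMethod } from 'meteor/mdg:validated-method';
import SimpleSchema from 'simpl-schema';
import crypto from 'crypto';

export const getAwsSignature = new ValidatedMethod({
  name: 'aws.get.signed.url',
  validate: new SimpleSchema({
    stringToSign: String,
    signatureDateTime: String
  }).validator(),
  run(params) {
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    // ...the signing code from the post above goes here,
    // ending with `return signature;`
  }
});
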
2 Likes

Sounds great, thanks a lot for your effort. So, if my understanding is correct (sorry if this is a silly question, but the process is new to me):

  1. you get a signed URL from your server (using a Meteor method) when loading the component (your getSignature function)
  2. you upload to S3 from the client using EvaporateJS and the signed URL you got back from the server.

Question:
Why can't we use the aws-sdk to create the signed URL on the server side? Is your hmac function playing a similar role?

I'm unaware of a similar function in the SDK. Let me know if there is one so that we can use it instead.

Not sure if it fits the purpose, though.

The only "fame" one would need is in the fairly new version of the AWS library. Normally we, people working on big projects, read the code before it goes into the project.
This is the package with more "fame"; I think it has recently been updated after a long time, while the concept is the same: sign on the server, send to the client.
https://github.com/Lepozepo/s3-uploader

1 Like

I didn't mean it as an offence, and I have started to look at your code. I am weighing the possibilities. But as you mention "we, people working on big projects", I think it's quite normal to check the age of the library, number of users, stability, maintenance, activity in the issues, etc. Any library has to start one day, I agree, but I wouldn't build a project with only new libraries either.

Hey @paulishca

I finally started using your library on a new project; it's doing a great job, thanks a lot. I left a few questions / remarks in the issues on GitLab; also, I'm happy to post a simple example implementation if you think it's relevant (or an example folder?).

Are you considering publishing the package?

Hi Ivo, the library is great; I use it intensively. I'll have a look at the repo to see your comments. As I remember, I have some customizations in it and some things left unfinished, and this is the main reason why I use it as a private NPM package while the code is public. I will take a look at it.

1 Like

@paulishca

May I ask how you manage security on the AWS S3 side? The permissions side has always been a bit of a mystery to me :slight_smile:

I am really not comfortable with this. I don't know if we should have "Block public access (bucket settings)" turned on, but if it's on I can no longer upload, and I'm getting security warnings from AWS. What I'd like:

  • to allow uploads from users
  • to have the media publicly readable

It works now, but I feel like nothing is secured. I have the following settings in the Permissions tab:

Block all public access: off

Bucket policy

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

CORS:

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "PUT",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]

ACLs are at their default, with the bucket owner having object List/Write and bucket ACL Read/Write.

@ivo if you have private documents, you will probably need to use something like ostrio:files. If you just need a CDN, possibly with signed URLs, you drop files into S3 and serve them via CloudFront.
When you link your CloudFront distribution to S3, you have an option (in CloudFront) to only allow access to the files via CloudFront. When you check that, the bucket policy gets updated and all direct links to the S3 files stop working.

CORS:

[
    {
        "AllowedHeaders": [
            "Authorization",
            "Content-Type",
            "x-requested-with",
            "Origin",
            "Content-Length",
            "Access-Control-Allow-Origin"
        ],
        "AllowedMethods": [
            "PUT",
            "HEAD",
            "POST",
            "GET"
        ],
        "AllowedOrigins": [
            "https://www.someurl.com",
            "https://apps.someurl.com",
            "http://192.168.1.72:3000",
            "http://192.168.1.72:3004"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]


Bucket Policy (the first statement, by allowing CloudFront, effectively disallows everything else):

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xxxxxxxxxxxxxx"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your_bucket/*"
        },
        {
            "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1559725794648",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your_bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": "xxxxxxxxx"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::your_bucket"
                }
            }
        },
        {
            "Sid": "Allow put requests from my local station",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your_bucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "http:192.168.1.72:3000/*"
                }
            }
        }
    ]
}

1 Like

Thanks a lot for this feedback.

I didn't connect CloudFront. I manage caching directly from S3. What's the added value of adding CloudFront other than caching? Distribution of files?

Do you have a tutorial somewhere on connecting CloudFront to S3, or is it pretty straightforward?

Edit: and do you have to change how your files are accessed if you add CloudFront? I am getting the uploadUrl from the S3 upload; will I need another URL?

CloudFront is a lot faster in all selected regions, and at high volume it is much cheaper. Also, make sure you save progressive images instead of interlaced ones where possible; they tend to "come forward" rather than load from top to bottom.

If you have a lot of files (tens of thousands), it is good to organize them in the bucket for faster access; AWS has to look them up too. Don't just do bucket/images/xxx.jpg and dump everything in there; it is better to use, for instance, the user's id to create folders.
Check this example: you have the bucket name, 'p' for posts, the userId, and within that folder all the images from posts made by that user (see the key sketch below).
(Screenshot: bucket listing with keys like p/<userId>/<uuid>.jpeg)
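
To illustrate, building such a key on upload could look like this (a sketch; the helper name is mine, and crypto.randomUUID needs a modern browser or Node 16+):

// Hypothetical helper: one folder per user under the 'p' (posts) prefix,
// with a random UUID as the file name.
const buildPostImageKey = (userId, extension = 'jpeg') =>
  `p/${userId}/${crypto.randomUUID()}.${extension}`;

// e.g. buildPostImageKey('bgNRLN3XxetRtyskE')
// -> 'p/bgNRLN3XxetRtyskE/05a90d2a-d691-4a89-8312-c4a12f413045.jpeg'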

You will have a URL for your CloudFront distribution: https://xxxxx.something…com. You associate this URL with something like files.yourapp.com or assets.yourapp.com, and this becomes the CloudFront root for your S3 bucket.

S3: https://your_bucket.s3.eu-central-1.amazonaws.com/p/bgNRLN3XxetRtyskE/05a90d2a-d691-4a89-8312-c4a12f413045.jpeg
CDN: https://assets.yourapp.com/p/bgNRLN3XxetRtyskE/05a90d2a-d691-4a89-8312-c4a12f413045.jpeg

If you create that root as a global variable, you can then access it all over the app like:

url={`${IMAGES}/p/bgNRLN3XxetRtyskE/05a90d2a-d691-4a89-8312-c4a12f413045.jpeg`} or
url={`${IMAGES}/p/${userId}/${imageUrl}.jpeg`}
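
For instance, that global could simply be (the name and module path are my own choice):

// e.g. /imports/constants.js
export const IMAGES = 'https://assets.yourapp.com';
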
1 Like