S3: signed URLs vs query string authentication

I’m working on a CRM application.

Users have a list of customers, and they can store attachments/documents on a customer’s profile. I’m saving these in an S3 bucket at the moment.

I want to make sure that only logged-in users can access the URLs to download them.

I was going to use signed URLs, but I also saw query string authentication.

Does anyone have a recommendation on which to use? Or maybe there’s a third alternative I’m not aware of?

A third option would be to pipe the request through your application server (client → server → S3 → server → client). The AWS SDK supports streaming, I believe, so you can pipe directly into your response. That’s probably the safest option, but then the load is on your servers.
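For what it’s worth, the piping itself is only a few lines with aws-sdk v2. A sketch (the function and parameter names are mine, not from any package; `s3` is a configured `AWS.S3` client):

```javascript
// Sketch of the proxy approach: stream the S3 object straight into the
// HTTP response, so nothing is buffered to disk on the app server.
// `s3` is a configured AWS.S3 client (aws-sdk v2), `res` the HTTP response.
function streamS3ObjectToResponse(s3, res, bucket, key) {
  const src = s3.getObject({ Bucket: bucket, Key: key }).createReadStream();

  src.on('error', () => {
    // missing object or access denied: fail the response instead of hanging
    res.statusCode = 404;
    res.end('File not found');
  });

  src.pipe(res); // bytes flow S3 -> app server -> client
}
```

The downside, as mentioned, is that every download now passes through (and loads) your own server.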

Meteor-Files is the most secure solution I know of. It also makes it easy to restrict access to logged-in users, so they can keep documents private from one another.

If you have lots of files to serve, maybe it’s easiest to create a signed cookie? That way, you can also use CloudFront to reduce S3 costs and improve response times.


This is what we did. @paulishca mentioned Meteor-Files, which may be a better option today. I don’t recall exactly why we rolled our own, but we weren’t able to use that package at the time (I’m not sure streaming was even an option back then; it was a while ago).

At any rate, if it’s useful, the code is below. We used WebApp.connectHandlers.use() to create the endpoint and s3-streams to retrieve the file, then wrote the file out to the client. One niggle worth noting was the handling of UTF-8 filenames: without it, we ran into the occasional filename that would produce a white screen on some browsers (probably older versions of IE, but I honestly don’t recall).

One reason we went with streaming through the app was to obscure the source of the files, as it allows us to use our own URL. Just makes a slightly better user experience if they’re not suddenly seeing an odd looking domain in the address bar.

Edit: ignore the token variable below. That’s just used internally to track file downloads, and is set in the omitted authentication section.

import AWS from 'aws-sdk';
import S3S from 's3-streams';

import iconv from 'iconv-lite';

WebApp.connectHandlers.use('/files/applications', function filesApplications(req, res) {

  // authentication bits snipped

  // get the s3Key for the file
  const fileRecord = _.findWhere(application.candidateFiles, {
    name: file.data.fileName,
    fileCategory: file.data.fileCategory,
  });

  const s3Client = new AWS.S3({
    region: Meteor.settings.public.awsS3BucketRegion,
    accessKeyId: Meteor.settings.AWSAccessKeyId,
    secretAccessKey: Meteor.settings.AWSSecretAccessKey,
  });

  const getObjectOptions = {
    Bucket: Meteor.settings.public.awsS3Bucket,
    Key: fileRecord.s3Key,
  };

  const src = new S3S.ReadStream(s3Client, getObjectOptions);

  src
    .on('error', Meteor.bindEnvironment((err) => {
      res.statusCode = 404;
      res.end(`File not found: ${err.statusCode}`);

      Logger.error(`S3 error. Token: ${token}, ${err.statusCode}`, { data: file });
    }))
    .on('open', Meteor.bindEnvironment((object) => {
      // Send the filename both ways (RFC 6266) so non-ASCII names
      // don't blank out on older browsers
      const buff = Buffer.from(file.data.fileName, 'utf8');
      const filenameISO88591 = iconv.decode(buff, 'ISO-8859-1');
      const filenameUTF8 = encodeURIComponent(file.data.fileName);

      res.writeHead(200, {
        'Content-Type': object.ContentType,
        'Content-Disposition': `inline; filename="${filenameISO88591}"; filename*=UTF-8''${filenameUTF8}`,
        'Content-Length': object.ContentLength,
      });

      Logger.info(`Delivering file. Token: ${token}`);
    }))
    .pipe(res)
    .on('finish', Meteor.bindEnvironment(() => {
      Logger.info(`File pipeline closed. Token: ${token}`);
    }))
    .on('error', Meteor.bindEnvironment(() => {
      res.statusCode = 404;
      res.end('File not found');

      Logger.error(`Error delivering file. Token: ${token}`, { data: file });
    }));
});

I’ll second @arggh - CloudFront is the best approach from a load perspective. If you’re using direct S3 access from the client, your server will still need to generate a signature for every URL you want to request - which can add a lot of pointless load to your server.

With CloudFront you generate an access policy, which defines the URL prefixes a user is allowed to access. If you store files per user under a specific prefix, e.g. /{userId}/{fileId}, you can generate one policy per user on login, sign it on the server, and send it to the client. That client will then be able to access all files that match the policy.
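A sketch of what such a per-user custom policy could look like (the domain is made up; this is the JSON you’d sign and hand to the client, e.g. via CloudFront signed cookies):

```javascript
// Build a CloudFront custom policy scoped to a single user's folder.
// cdn.example.com is a placeholder for your CloudFront distribution domain.
function buildUserPolicy(userId, expiresEpochSeconds) {
  return JSON.stringify({
    Statement: [
      {
        Resource: `https://cdn.example.com/${userId}/*`, // every file under the user's prefix
        Condition: {
          DateLessThan: { 'AWS:EpochTime': expiresEpochSeconds },
        },
      },
    ],
  });
}
```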
