File Uploads - edgee:slingshot - still good? better options?

That is great to hear.
Right now, has there been any modification to your fork of slingshot?
I am using the other library and it works fine.

We ended up rolling our own version of slingshot specifically for AWS to support multipart uploads (and, by extension, retries), because we needed support for uploads larger than 5 GB. Implementing the signing is not trivial, and AWS’ documentation on it isn’t great (the case of the variable names is critical and not always documented correctly). It’s also not always the incorrect request that fails, but the final “complete” request.
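
For reference (this is just a sketch, not our actual implementation), the basic S3 multipart flow with the v3 SDK looks roughly like this; the bucket, key and the chunks array are placeholders:

import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3'

const client = new S3Client({ region: 'us-east-1' }) // placeholder region

// 1. Start the upload and remember the UploadId
const { UploadId } = await client.send(new CreateMultipartUploadCommand({
  Bucket: 'yourbucket',
  Key: 'big-file.bin',
}))

// 2. Upload each part (min 5 MB except the last) and keep ETag + PartNumber;
//    `chunks` is assumed to be an array of Buffers/Blobs prepared elsewhere.
const parts = []
for (let partNumber = 1; partNumber <= chunks.length; partNumber++) {
  const { ETag } = await client.send(new UploadPartCommand({
    Bucket: 'yourbucket',
    Key: 'big-file.bin',
    UploadId,
    PartNumber: partNumber,
    Body: chunks[partNumber - 1],
  }))
  parts.push({ ETag, PartNumber: partNumber })
}

// 3. The final "complete" request -- the one that fails if the parts list is off
await client.send(new CompleteMultipartUploadCommand({
  Bucket: 'yourbucket',
  Key: 'big-file.bin',
  UploadId,
  MultipartUpload: { Parts: parts },
}))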

We then use either CloudFront with signed policy cookies (more efficient than signing each URL directly) or custom per-session temporary credentials for direct S3 access (CloudFront doesn’t support encrypted S3 objects, sadly). The URL signing is pretty easy.
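
As a rough sketch of the signed-policy-cookie part (not our exact setup; the distribution domain, path, key pair ID and private key below are placeholders), the @aws-sdk/cloudfront-signer package can produce the cookie values:

import { getSignedCookies } from '@aws-sdk/cloudfront-signer'

// A custom policy covering everything under /uploads/ for the next hour
const policy = JSON.stringify({
  Statement: [{
    Resource: 'https://dxxxxxxxxxxxx.cloudfront.net/uploads/*',
    Condition: {
      DateLessThan: { 'AWS:EpochTime': Math.floor(Date.now() / 1000) + 60 * 60 },
    },
  }],
})

const cookies = getSignedCookies({
  policy,
  keyPairId: process.env.CF_KEY_PAIR_ID,  // the CloudFront public key id
  privateKey: process.env.CF_PRIVATE_KEY, // the matching private key (PEM)
})

// `cookies` is a plain object (CloudFront-Policy, CloudFront-Signature,
// CloudFront-Key-Pair-Id) whose entries you set as cookies on your own response.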

We currently use slingshot for client-side uploads, and it does have private uploads; you can just set the param per directive. Then we get signed URLs from the server after validating that the user has permission to access the file.
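
The server-side part is basically a method that checks permission and then presigns a GET. A minimal sketch (the Files collection, the permission check and the bucket name are placeholders, not our actual code):

import { Meteor } from 'meteor/meteor'
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({ region: 'eu-west-3' }) // placeholder region; credentials from env/IAM

Meteor.methods({
  async 'files.downloadUrl'(fileKey) {
    // Hypothetical permission check against a hypothetical collection --
    // replace with your own rules
    const file = Files.findOne({ key: fileKey, ownerId: this.userId })
    if (!file) throw new Meteor.Error('not-authorized')

    return getSignedUrl(
      client,
      new GetObjectCommand({ Bucket: 'yourbucket', Key: fileKey }),
      { expiresIn: 5 * 60 } // link stays valid for 5 minutes
    )
  },
})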

Just rolled Slingshot out to our users and I thought it was great! Great docs, and it makes for writing clean code to handle uploads.

I needed this in a non-Meteor project recently, and found that it’s actually pretty easy to create a signed URL using the s3 NPM packages now:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  region: 'eu-west-3',
  credentials: {
    accessKeyId: process.env.S3_ACCESS_ID as string,
    secretAccessKey: process.env.S3_ACCESS_KEY as string,
  },
})

const command = new PutObjectCommand({
  Bucket: 'yourbucket',
  Key: key,
  ContentType: 'image/png',
  ACL: 'public-read', // change this if you don't want the file to be public
})

const url = await getSignedUrl( client, command )

When you have the URL on the client, it’s a simple matter of doing:

await fetch( url, { method: 'PUT', body })

I didn’t add anything new; I merely looked at the outstanding PRs, merged them in, and did some basic tests to make sure it still works. All the PRs were pretty straightforward, no major changes.


Thanks for the info, sounds good! I just saw that one of these PRs ensures compatibility with Meteor 2.3 and up. I wasn’t aware of this breaking change in the package API.

One additional question: Is there a special reason why issues have been disabled on your branch? Since it is officially recommended by the Community Packages, I think issues should be collected there.

I wasn’t aware that issues were disabled by default. I turned them on now.


Hi cstrat,

I used Meteor-Files just a few days ago and I could see it is a very good package. Its own description says that one of the package’s features is, precisely, that it is well maintained.

I did not use their upload feature; I just needed to build downloadable links for some files that were already on the server. There is a way to register files that already exist on the server, and then you can generate the links, from client or server, to download them.
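
For anyone curious, a rough sketch of that register-then-link flow with ostrio:files might look something like this (paths and names are placeholders; double-check the package docs for the exact options):

import { FilesCollection } from 'meteor/ostrio:files'

const Attachments = new FilesCollection({ collectionName: 'attachments' })

// Server: register a file that already exists on disk...
Attachments.addFile('/data/reports/report.pdf', {
  fileName: 'report.pdf',
  type: 'application/pdf',
}, (error, fileRef) => {
  if (error) throw error
  // ...then build a downloadable link to it (also available on the client
  // via Attachments.findOne(...).link())
  const url = Attachments.link(fileRef)
  console.log(url)
})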

So, yes, I feel I can recommend this package. It seems to work very well, it’s well documented, and one thing I could sense is that it tries to be a simple-to-use package.

Hope this helps …

Thanks for the post @smrsoftware - my particular use-case is for end users to upload photos, so bypassing my Meteor server makes a huge amount of sense where possible. I am still on the fence about whether I need to worry about signed URLs or not…

On one hand, if someone has the URL, they could save the content and share it just as easily as sharing the content itself. The URLs are not incremental in any way, so it isn’t as if someone could just attempt to load every single file in the bucket (well, I mean they could try - not sure if S3 would rate limit/block that person). Even if they did, they might find an image but no metadata connecting it to a user/account.

On the other hand, more secure is always better, right?

There are signed URLs for both putting and getting; you can choose where to use them. For uploads it is a very nice way to go: just generate a (temporary) signed URL and let the user upload straight to it. For getting files I wouldn’t bother for your use case. I don’t know how long your URLs are, but if you want to make it even more secure you can put the files in a folder named after the userId. Guessing a URL then becomes virtually impossible.

Signed get object URLs I would use for things like sensitive PDF documents and such.
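
To make the userId-folder idea concrete, here is a small sketch building on the presigned-PUT example earlier in the thread (bucket and region are placeholders):

import { randomUUID } from 'crypto'
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({ region: 'eu-west-3' }) // placeholder region

// Nest each object under the uploading user's id so keys can't be guessed,
// and keep the upload URL temporary.
async function uploadUrlFor(userId, fileName) {
  const key = `${userId}/${randomUUID()}-${fileName}`
  const url = await getSignedUrl(
    client,
    new PutObjectCommand({ Bucket: 'yourbucket', Key: key }),
    { expiresIn: 10 * 60 } // valid for 10 minutes
  )
  return { key, url }
}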


Hi,
I was wondering, how can I change the default storage-service URL?

Actually, I was wondering: how can I use slingshot with Minio, given that Minio is compatible with the S3 API? Thank you.

I have no idea what you are referring to.

I had the same question myself, and was about to make a mod to support it, but on looking closer I discovered that it’s already supported.
If you are using the S3 API directly, you can specify the Minio endpoint like this:

AWS.config.update({
  accessKeyId: settings.private.S3_ACCESS_KEY_ID,
  secretAccessKey: settings.private.S3_SECRET_ACCESS_KEY,
  region: 'ap-southeast-2',
  endpoint: 'http://localhost:9000/' // point the SDK at Minio instead of AWS
  // Minio typically also needs path-style addressing, e.g. new AWS.S3({ s3ForcePathStyle: true })
})

and if you are using slingshot, and have a private Meteor setting S3_ENDPOINT, define a bucketUrl function like this:

const bucketUrl = function (bucket, region) {
  // If a custom endpoint (e.g. Minio) is configured, use it directly
  if (Meteor.settings.private.S3_ENDPOINT)
    return Meteor.settings.private.S3_ENDPOINT + bucket
  // Otherwise fall back to the standard AWS bucket URL logic
  let bucketDomain = 's3-' + region + '.amazonaws.com'
  if (region === 'us-east-1') bucketDomain = 's3.amazonaws.com'
  if (region === 'cn-north-1') bucketDomain = 's3.cn-north-1.amazonaws.com.cn'

  if (bucket.indexOf('.') !== -1) return 'https://' + bucketDomain + '/' + bucket

  return 'https://' + bucket + '.' + bucketDomain
}

It seems to work ok. The above function is a modification of the default bucketUrl function that Slingshot provides.

I know, but that could change in the near future. Besides that, slingshot does not support functionality like signed URLs…

Seems like this conversation stopped 2 years ago at the edge of what I wanted also: slingshot with minio. So I will try the @mikkelking way and report back. Thanks for voicing this @kheireddine

Alright, I got to this the same day, and it is verified to work, with caveats:

  • The key for me was to set region with a function, since I did not conform my minio instance to AWS region names; the constructor was throwing an error if I set region to an actual String, versus returning a String from a method.
  • And to set bucketUrl to https://FQDN/bucket_name myself. That was where it was failing for me before, with the region issue being the next blocking issue after that. A rough sketch combining both tweaks follows this list.
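
Putting both caveats together, the directive ends up looking roughly like this; the FQDN, bucket, credentials and restrictions below are placeholder values, not my exact config (bucketUrl can also be a function, like the one posted above):

import { Meteor } from 'meteor/meteor'
import { Slingshot } from 'meteor/mikkelking:slingshot' // or 'meteor/edgee:slingshot'

Slingshot.createDirective('minioUploads', Slingshot.S3Storage, {
  bucket: 'mybucket',
  bucketUrl: 'https://files.example.com/mybucket', // full URL set by hand
  region: () => 'local', // returning a String from a function got past the region check
  acl: 'public-read',

  AWSAccessKeyId: Meteor.settings.private.MINIO_ACCESS_KEY,
  AWSSecretAccessKey: Meteor.settings.private.MINIO_SECRET_KEY,

  allowedFileTypes: ['image/png', 'image/jpeg'],
  maxSize: 10 * 1024 * 1024, // 10 MB

  authorize() {
    return !!this.userId // placeholder: any logged-in user may upload
  },
  key(file) {
    return this.userId + '/' + file.name
  },
})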

I suspect I could go back now and not use S3Storage, and use a manually created service instead, but it works, so I will deal with being perfect later. Just want to say, 2 years later: yes, slingshot works, at least with the @mikkelking fork I am using (thanks!) and with self-hosted minio.

I never thought to use S3Storage directly and just hack its configuration. That was the breakthrough, versus the “roll your own!” comments in the documentation. From there, it was just getting around gotchas like the one mentioned above: using a function with a String return, versus using a String that would not pass validation.

I would say this should go into the README.md, since minio is awesome and ought to be the hoped-for implementation, even if it isn’t the most popular one yet. I would also try this with DigitalOcean Spaces, Telnyx Buckets, etc. All S3-compatible, but not usually self-hosted the way minio is.

To answer this quickly for any newcomers, here is the short version:

There is an updated version of this package: mikkelking:slingshot

If you are starting from scratch, then you might want to check out: ostrio:files

If you have a specific upload provider, be sure to check out their SDK as well.


I have looked into this question as part of my help desk series; if you like what I’m doing, please sponsor me.