S3 Storage

I can’t seem to get S3 file upload working. Does anybody have a helpful guide/resource that is current? Slingshot doesn’t seem to work anymore.

Don’t have any guide, but I can confirm that S3 uploads via Slingshot work like a charm both in our production env (Meteor 1.6.1) and in dev (Meteor 1.8.1).

I was able to get Slingshot working by unchecking a bunch of mysterious boxes on AWS.


Total noob here: am I opening up my S3 bucket so that anyone can write to it?

I’m gonna just mess around and run some tests, but if anyone has some reading on best practices for this stuff I would really appreciate it. The Slingshot docs didn’t mention anything about AWS policies aside from CORS; I’m wondering if something has changed on the AWS side?
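For reference, the CORS piece is just a bucket-level configuration, something along these lines in the S3 console’s JSON format (the origins and methods here are assumptions - tighten AllowedOrigins to your own app’s domain):

[
	{
		"AllowedOrigins": ["https://your-app.example.com"],
		"AllowedMethods": ["GET", "POST", "PUT"],
		"AllowedHeaders": ["*"],
		"MaxAgeSeconds": 3000
	}
]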

Thank you for the confirmation @perumalkuk

@jistjoalal I maintain a package for my projects, with regular updates for the aws-sdk npm, which also supports adding metadata like ‘cache-control’ and ‘expires’: https://github.com/activitree/s3up-meta. This package is an update of https://github.com/Lepozepo/S3. I got most of the documentation from Lepozepo, so you can get the info from either repo. If you need help with implementing it, I am here.

{
	"Version": "2008-10-17",
	"Statement": [
		{
			"Sid": "AllowPublicRead",
			"Effect": "Allow",
			"Principal": {
				"AWS": "*"
			},
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::YOURBUCKETNAMEHERE/*"
		}
	]
}

Thanks for your help. Do you know how I can set a more restrictive bucket policy? Am I misunderstanding, or does that allow any AWS user full access to the bucket?

I can’t keep CORS policies, bucket policies, ACLs, and access keys straight… Amazon is not as easy as Meteor :smile:

To your first question: that policy is read-only - Principal “*” with s3:GetObject lets anyone download objects, but it grants no permission to list, upload, or delete. Beyond that, it depends on your use case. How I use it is to upload profile avatars, post images, etc. and serve them via CloudFront (Amazon’s CDN). In that setup only a signed (authorized) URL can write to S3 - the signature is generated on your Meteor server - so nothing else can write, and clients retrieve files through the CDN rather than from S3 directly. If I knew more about what you want to do, I could help you with various pieces of code or screenshots from my S3 setup.
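For illustration, here is how a time-limited read URL looks with the aws-sdk npm package (v2 API); the bucket and key names are made up:

import AWS from "aws-sdk";

// assumes AWS credentials are available via the environment or an IAM role
const s3 = new AWS.S3();

// a read URL that expires after 5 minutes; without it the object stays private
const url = s3.getSignedUrl("getObject", {
  Bucket: "my-private-bucket", // hypothetical bucket
  Key: "avatars/user123.png",  // hypothetical key
  Expires: 300                 // validity in seconds
});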

I’m working on a platform for sharing educational resources. The goal is for users to be able to upload images, Microsoft Office files, or other teaching resources, and to delete them from S3 when the user removes the file from my site.

I got all of that working yesterday once I unchecked those boxes and finally was able to connect to my bucket. I’m still concerned about my lack of understanding about what that did :fearful:

Slingshot seems secure, but I would still need to set a cap on user storage to prevent users from uploading a million files to my bucket, right? Are there other security concerns I’m not aware of?

Thanks for the help.

Currently I restrict the size of uploads (a server-side sketch follows below) and hope that users do not load up my bucket.

I am thinking of some of the following to cap or reduce cost:

  • move seldom-used content to a lower-cost storage class in S3.
  • put an expiry date on content so that it is automatically deleted.
  • charge based on storage.

I would love to hear what others have done or are planning to do.
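On the size cap: Slingshot enforces restrictions server-side before it signs anything, so clients can’t bypass them. A minimal sketch, assuming a directive named "myFileUploads" (the file types and limit are placeholders too):

import { Meteor } from "meteor/meteor";
import { Slingshot } from "meteor/edgee:slingshot";

// checked on the server before an upload signature is issued
Slingshot.fileRestrictions("myFileUploads", {
  allowedFileTypes: ["image/png", "image/jpeg", "application/pdf"],
  maxSize: 10 * 1024 * 1024 // 10 MB; null means no size cap
});

Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
  bucket: "YOURBUCKETNAMEHERE",
  acl: "public-read",
  authorize() {
    // refuse uploads from users who are not logged in
    if (!this.userId) {
      throw new Meteor.Error("login-required", "Please log in to upload");
    }
    return true;
  },
  key(file) {
    // one prefix per user keeps uploads separated and easy to meter
    return this.userId + "/" + file.name;
  }
});

A per-user storage quota would go in authorize() as well - count the user’s existing files there and throw if they’re over the limit.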

Has anyone implemented a progress bar with Slingshot and React?

I can’t get uploader.progress() to return anything other than NaN

@jistjoalal if those documents are private you would need to use Meteor-Files (https://github.com/VeliovGroup/Meteor-Files); otherwise file URLs are pretty much public, unless you sign the download URLs so that only the authorized user can retrieve them.
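A minimal sketch of that, assuming the ostrio:files package and that each file document stores the uploader’s userId:

import { FilesCollection } from "meteor/ostrio:files";

// downloads are refused unless the requesting user owns the file
const Documents = new FilesCollection({
  collectionName: "Documents",
  protected(fileObj) {
    // this.userId is the user requesting the download
    return !!this.userId && fileObj.userId === this.userId;
  }
});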

I see upload progress comes from this.uploader.progress() (Blaze). In theory, if you attach

const uploader = new Slingshot.Upload("myFileUploads")

to the React component, you should be able to access this.uploader.
Try to do it in the constructor, maybe like this, and then get the Slingshot upload via “this.uploader”:

constructor (props) {
    super(props)
    this.state = {}
    // Slingshot.Upload returns a plain object, not a function, so there is
    // nothing to .bind() here - just keep the instance on the component
    // and call this.setState from arrow functions when you need to.
    this.uploader = new Slingshot.Upload("myFileUploads")
  }
// then you can possibly (maybe, if you are lucky, the day is sunny, it is not Tuesday or Friday, time is not 13:00 and you're not on the 13th etc etc ... no black cats either) access
this.uploader.progress()
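One caveat: progress() is a Tracker-reactive value meant for Blaze, so React will not re-render when it changes, and it reports NaN before send() has pushed any bytes - which may be the NaN mentioned above. A rough workaround is to poll it into component state; a sketch with made-up handler and state names:

handleUpload = (file) => {
  this.uploader.send(file, (error, downloadUrl) => {
    clearInterval(this.progressTimer);
    if (error) {
      console.error(error);
    } else {
      this.setState({ progress: 1, downloadUrl });
    }
  });
  // mirror the reactive value into state so React re-renders,
  // skipping the NaN reported before any bytes have been sent
  this.progressTimer = setInterval(() => {
    const p = this.uploader.progress();
    if (!Number.isNaN(p)) this.setState({ progress: p });
  }, 200);
};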

We have around 500 teams and about 20TB of storage currently being used. We sort of charge based on storage - 99% of our storage usage is from video, and our packages increase in cost as the video storage increases (and other features get added). Some things we noticed:

  • Signing each request on your server is a massive drain - you can cache the signature (if you set the expiry long enough), which helps, but if you have a lot of turnover of content it doesn’t help enough. A better approach (if your structure allows it) is to use policy cookies, i.e. CloudFront signed cookies: your CDN has a public key and your server has the private key; when a user connects you create a policy allowing them access to one “folder” of the CDN (the user’s personal folder, a team’s folder, etc.) and send it to the client as a cookie (see the first sketch after this list). If you have more users than content, this will not work well for you.
  • S3 allows you to transition files to the IA storage class a set number of days after creation - so if you see high usage immediately after creation and low to no usage afterwards, this is a good approach; however, you get charged a lot more to access files in IA. For us this didn’t work, as our teams use files seasonally. S3 also has intelligent tiering, which automatically transitions files between standard and IA based on a file’s last access date. However, if you have lots of small files this becomes expensive in its own right, because there is a per-file administration fee and a minimum billable object size of 128KB for this class. We opted for a custom solution: because most of our files are HLS videos, we store the list of file parts in the DB and the user requests the playlist from our server, so we know when the files were last accessed and we have a list of all the files per video. When a video has not been accessed in 30 days we transition all the pieces to IA; the next time it is requested, we transition them back to standard (see the second sketch after this list).
  • S3 storage is cheap - 2.3c per GB/month is already pretty good. Even with 20TB of storage, we pay significantly more in EC2 costs than we do for S3 - so don’t worry too much about this until you scale to the point where you need to.
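A sketch of the policy-cookie idea with the aws-sdk v2 CloudFront signer - the key pair, domain, and path layout here are all assumptions:

import AWS from "aws-sdk";

// the key pair is registered with CloudFront; the private key stays on the server
const signer = new AWS.CloudFront.Signer(CLOUDFRONT_KEY_PAIR_ID, PRIVATE_KEY_PEM);

function cdnCookiesFor(userId) {
  // custom policy: this user may fetch anything under their own prefix for 12 hours
  const policy = JSON.stringify({
    Statement: [{
      Resource: "https://cdn.example.com/users/" + userId + "/*",
      Condition: {
        DateLessThan: { "AWS:EpochTime": Math.floor(Date.now() / 1000) + 12 * 3600 }
      }
    }]
  });
  // returns CloudFront-Policy, CloudFront-Signature and CloudFront-Key-Pair-Id
  // values, to be set as cookies on the client
  return signer.getSignedCookie({ policy });
}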
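And the manual tiering: an object’s storage class can be changed by copying it onto itself (aws-sdk v2 again; bucket and key names are hypothetical):

import AWS from "aws-sdk";

const s3 = new AWS.S3();

// re-copy the object onto itself with a new storage class, keeping its metadata
function setStorageClass(bucket, key, storageClass) {
  return s3.copyObject({
    Bucket: bucket,
    CopySource: bucket + "/" + key, // URL-encode the key if it has special characters
    Key: key,
    StorageClass: storageClass,     // "STANDARD_IA" when idle, "STANDARD" on next access
    MetadataDirective: "COPY"
  }).promise();
}

// e.g. transition every piece of an idle HLS video:
// await Promise.all(parts.map(p => setStorageClass("my-video-bucket", p, "STANDARD_IA")));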

Thank you for the detailed information. It looks like I should not worry too much at this point.

We use both the AWS npm package to upload files to S3 and, at other times, Filestack.js, which has S3 integration. Works smoothly - I can recommend Filestack for this.
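In case someone wants to try it, the filestack-js flow is roughly this (the API key and accepted types are placeholders):

import * as filestack from "filestack-js";

const client = filestack.init("YOUR_FILESTACK_API_KEY"); // placeholder key

// opens Filestack's picker UI; uploads land in their S3-backed storage
client.picker({
  accept: ["image/*", "application/pdf"],
  onUploadDone: (result) => {
    // each entry in result.filesUploaded has a url for the stored file
    console.log(result.filesUploaded);
  }
}).open();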

Already promoted on this forum: https://uppy.io/