Download with Meteor HTTP, Upload to S3

I have a Meteor method that downloads images from Pixabay and uploads them to S3.

This works, but it looks rather ugly because I have to use Node's http and https modules along with Meteor.bindEnvironment.

I’m trying to replace this with the Meteor HTTP package but it’s not working. AWS doesn’t seem to like response.content. And turning it into a Buffer doesn’t work either.

Any ideas on how to fix this?

Edit: Added code example

  const libraries = {http, https};
  const protocol = url.replace(/^(.*?):.*/, '$1'); // 'http' or 'https'
  const httpLibrary = libraries[protocol];

  // url, cardId, name, s3, and Files come from the surrounding method scope.
  httpLibrary.get(url, Meteor.bindEnvironment(response => {
    const contentType = response.headers['content-type'];
    const extension = contentType.replace('image/', '').replace(/;.*/, '');

    if (! _.contains(Files.extensions, extension)) {
      throw new Meteor.Error('bad-extension', 'The extension "' + extension + '" is not allowed.');
    }

    let buffer = new Buffer('', 'binary');
    response.on('data', function (chunk) {
      buffer = Buffer.concat([buffer, chunk]);
    });

    response.on('end', Meteor.bindEnvironment(() => {
      const metaContext = {cardId, directory: Meteor.userId(), name, type: 'image', extension};

      const fileObject = {
        Key: Files.getPath(metaContext),
        Body: buffer,
        ContentType: contentType,
        ContentLength: buffer.length,
      };

      s3.putObject(fileObject, Meteor.bindEnvironment(error => {
        if (error) {
          throw error;
        }

        Meteor.call('insertFile', metaContext);
      }));
    }));
  }));

Looking into this further, when I console.log the buffer I created from node’s modules, I get

<Buffer ff d8 ff e0 00 10 4a 46 ...>

But when I log the buffer I created from Meteor’s package, I get

<Buffer fd fd fd fd 00 10 4a 46 ...>

So the first few bytes are different, which is a problem because file types are identified by those first few bytes (here, the JPEG magic number ff d8 ff).
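A likely explanation for those bytes: the response body was decoded as a UTF-8 string, and every byte that isn't valid UTF-8 was replaced with U+FFFD, whose low byte is 0xfd. A minimal sketch in plain Node (no Meteor required) reproduces exactly the corruption shown above:

```javascript
// The start of a JPEG (SOI + APP0 markers), as received over the wire.
const jpegStart = Buffer.from([0xff, 0xd8, 0xff, 0xe0, 0x00, 0x10, 0x4a, 0x46]);

// Decode the binary data as UTF-8, which is what a string-typed response
// body implies. Invalid bytes become U+FFFD and can never be recovered.
const asString = jpegStart.toString('utf8');

// Re-encode the string one byte per character; U+FFFD truncates to 0xfd.
const roundTripped = Buffer.from(asString, 'binary');

console.log(jpegStart);    // <Buffer ff d8 ff e0 00 10 4a 46>
console.log(roundTripped); // <Buffer fd fd fd fd 00 10 4a 46>
```

If that is what's happening, the fix is to keep the body as a raw Buffer all the way through; with Meteor's HTTP package, passing npmRequestOptions: {encoding: null} (server-side, forwarded to the underlying request module) is the commonly suggested way to do that, though I haven't verified it here.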


Not sure if my approach is any better, but what I do is:

  1. Grab the url on the client side.
  2. Pass it to a server method, which fetches the image and passes the resulting base64 back to the client
  3. Convert the b64 to a blob (client-side)
  4. Upload the blob to s3 (client-side)
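For step 3, a minimal sketch of the base64-to-blob conversion (the function name is mine, not from the original code):

```javascript
// Convert a base64 string (e.g. the payload returned by the server method)
// into a Blob the browser can upload to S3.
function base64ToBlob(base64, contentType) {
  const binary = atob(base64);             // decode base64 to a binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);       // copy each byte into the typed array
  }
  return new Blob([bytes], {type: contentType});
}
```

Usage would be something like `const blob = base64ToBlob(result, 'image/jpeg');`, after which the blob goes to the S3 uploader.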

I try to put as much of the processing as possible on the client, but I'm relatively certain you can't get the image data without heading to the server (because of Access-Control-Allow-Origin restrictions).

That’s a very clever workaround to the origin issue.

Is that faster than just doing it all server side?

It seems to me that sending the base64 back to the client would take roughly the same time as uploading it to S3.
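One measurable difference: base64 represents every 3 bytes as 4 ASCII characters, so the base64 leg is about a third larger than a direct binary upload of the same image. A quick check in plain Node:

```javascript
// Base64 inflates the payload by ~33% relative to the binary it carries.
const imageBytes = 300 * 1024; // a hypothetical 300 KB image
const base64Length = Buffer.alloc(imageBytes).toString('base64').length;

console.log(base64Length);              // 409600
console.log(base64Length / imageBytes); // ≈ 1.33
```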

Honestly, I’m not sure. We do all of our other image manipulation/uploads client-side, so it kind of made sense to just tie into the existing framework for this (not-very-crucial-to-us) functionality.

I was actually just adding the Pixabay image library to my app and was researching ways to send an image from a URL to S3. I use the following workflow to completely bypass the server, using Slingshot for the S3 upload (which I have successfully used for years for browser-based uploads of locally selected images).

  1. Get the URL and metadata from Pixabay’s search API (client or server)
  2. Use XMLHttpRequest to request binary data from the client based on the URL
  3. Create a Blob, wrap it in a File, and pass that to Slingshot

let url = "https://some.image.url";
let filename = "somefile.jpg";

let oReq = new XMLHttpRequest();
oReq.open("GET", url, true);
oReq.responseType = "arraybuffer";

oReq.onload = function (oEvent) {
   let arrayBuffer = oReq.response;
   // Create a JS Blob from the response
   let blob = new Blob([arrayBuffer]);

   // Create a JS File from the blob
   let file = new File([blob], filename, {type : oReq.getResponseHeader("Content-Type")});

   // Trigger the image upload event that starts my Slingshot workflow using the manually created File
   Dropzone.forElement("#dropzone").emit("addedfile", file);
};

// Send the request
oReq.send();
It’s basically just downloading the URL-based image to the client from the browser, then sending that to Slingshot just as you would if they selected the image from their desktop.