Mongo Auto Field Level Encryption Package

The client-side auto FLE available in MongoDB 4.2 is awesome - unfortunately it is only available on Atlas or clusters running the Enterprise server.

I’ve written a Meteor package, znewsham:auto-encrypt, that can similarly be configured to automatically encrypt/decrypt values based on a provided schema (I’ll work on integrating with simple-schema next).

The package is straightforward to configure in the simple case (e.g., single-tenant systems where one key is used to encrypt all data, and the same fields should be encrypted for all documents) but powerful and flexible enough to be used in multi-tenant systems where you may require a different master key, data key, algorithm, or even schema on a per-tenancy (or any other) basis.

import { EncryptedCollection, patchCollection } from "meteor/znewsham:auto-encrypt";
import crypto from "crypto";

const masterKey = crypto.randomBytes(96);

const encOptions = {
  keyVaultNamespace: "meteor.keyVault", // you are responsible for ensuring a unique key on this collection on keyAltNames field
  // not suitable for production - use aws
  kmsProviders: {
    local: {
      key: masterKey
    }
  },
  masterKey,
  provider: "local",
  keyAltName: "myKeyName", // creation of this key is automatic - though you can use an existing one as well
  algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  schema: {
    field: true,
    "object.inner": true,
    "array.$": true,
    "anotherArray.$.inner": true,
    "wild.*": true,
    anotherObject: {
      inner: true,
      another: true
    },
    entireObject() { return { algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Random" } }
  }
};

const collection = new EncryptedCollection("myCollection", encOptions);
// or patchCollection(existingCollection, encOptions);

After this initial setup, any call to find, findOne, update, insert or remove that queries or modifies these fields will have those values automatically encrypted going in and decrypted coming out.
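
For example (a minimal sketch, assuming the collection configured above):

// `field` is listed in the schema, so its value is encrypted before hitting Mongo.
collection.insert({ field: "secret value", plain: "not encrypted" });

// Equality queries on encrypted fields work because the deterministic
// algorithm produces stable ciphertext - the query value is encrypted too.
const doc = collection.findOne({ field: "secret value" });
console.log(doc.field); // "secret value" - decrypted transparently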

There is currently no support for aggregate or distinct - partly because Meteor does not natively support these anyway, but mostly because it’s going to be complicated (at least for aggregate) to implement!


Did you do some kind of telepathy? I was just searching for some alternatives to Mylar (which is very old) and now you come up with this :smiley:

This is a little different to Mylar, which I’ve also played with, as this doesn’t provide a mechanism for sharing encrypted data over different keys. Though I’d guess it would be possible to have multiple keys with the same key data encrypted by different master keys.

But this is really just a Meteor version of the Mongo field level encryption :slight_smile: still super useful though. I’m currently using it for encrypting the shared secret for 2FA and API credentials.

Thanks for this package - it was definitely helpful, but in our application we were inserting docs into our collection using the native mongodb driver package, so we were already circumventing the insertion part and thus the find/fetch decryption didn’t work. Also, we wanted to use AWS, which requires slightly different configuration.

So here’s the code for anyone who wants to do it with vanilla Node.js/MongoDB within Meteor, as of METEOR@2.5:
package.json

...
"aws4": "^1.11.0",
"mongodb": "^3.6.10",
"mongodb-client-encryption": "^1.2.7",
...

./settings.json

"kms": {
    "accessKeyId": "...",
    "secretAccessKey": "...",
    "masterKey": "...",
    "region": "..."
  }
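
(These values are read via Meteor.settings, so start the app with meteor run --settings settings.json.)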

./server/csfle/helpers.js

import { MongoClient } from 'mongodb';
import { ClientEncryption } from 'mongodb-client-encryption';
import { MongoInternals } from 'meteor/mongo'; // provides NpmModule.Binary, used in decryptValue below

module.exports = {
  CsfleHelper: class {
    constructor ({
      kmsProviders = null,
      masterKey = null,
      keyAltNames = 'aws-data-key',
      keyDB = 'encryption',
      keyColl = '__keyVault',
      schema = null,
      connectionString = process.env.MONGO_URL,
      mongocryptdBypassSpawn = false,
      mongocryptdSpawnPath = 'mongocryptd',
    } = {}) {
      if (kmsProviders === null) {
        throw new Error('kmsProviders is required');
      }
      if (masterKey === null) {
        throw new Error('masterKey is required');
      }
      this.kmsProviders = kmsProviders;
      this.masterKey = masterKey;
      this.keyAltNames = keyAltNames;
      this.keyDB = keyDB;
      this.keyColl = keyColl;
      this.keyVaultNamespace = `${keyDB}.${keyColl}`;
      this.schema = schema;
      this.connectionString = connectionString;
      this.mongocryptdBypassSpawn = mongocryptdBypassSpawn;
      this.mongocryptdSpawnPath = mongocryptdSpawnPath;
      this.regularClient = null;
      this.csfleClient = null;
    }

    /**
     * Creates a unique, partial index in the key vault collection
     * on the ``keyAltNames`` field.
     *
     * @param {MongoClient} client
     */
    async ensureUniqueIndexOnKeyVault (client) {
      try {
        await client
          .db(this.keyDB)
          .collection(this.keyColl)
          .createIndex('keyAltNames', {
            unique: true,
            partialFilterExpression: {
              keyAltNames: {
                $exists: true,
              },
            },
          });
      } catch (error) {
        console.error(error);
      }
    }

    /**
     * In the guide, https://docs.mongodb.com/ecosystem/use-cases/client-side-field-level-encryption-guide/,
     * we create the data key and then show that it is created by
     * retrieving it using a findOne query. Here, in implementation, we only
     * create the key if it doesn't already exist, ensuring we only have one
     * local data key.
     *
     * @param {MongoClient} client
     */
    async findOrCreateDataKey (client) {
      const encryption = this.getEncryptionClient(client);

      let dataKey = await client
        .db(this.keyDB)
        .collection(this.keyColl)
        .findOne({ keyAltNames: { $in: [ this.keyAltNames ]}});

      if (dataKey === null) {
        dataKey = await encryption.createDataKey('aws', {
          masterKey: this.masterKey,
          keyAltNames: [ this.keyAltNames ],
        });

        return dataKey.toString('base64');
      }

      return dataKey._id.toString('base64');
    }

    async getRegularClient () {
      const client = new MongoClient(this.connectionString, {
        useNewUrlParser: true,
        useUnifiedTopology: true,
      });

      return await client.connect();
    }

    getEncryptionClient (client) {
      const encryption = new ClientEncryption(client, {
        keyVaultNamespace: this.keyVaultNamespace,
        kmsProviders: this.kmsProviders,
      });

      return encryption;
    }

    // Encrypt a single value with the data key. The deterministic algorithm
    // means equal plaintexts produce equal ciphertexts, so the field stays queryable.
    encryptValue ({ encryptionClient, value }) {
      return Promise.await(encryptionClient.encrypt(value, { keyAltName: this.keyAltNames, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' }));
    }

    // Decrypt a value previously produced by encryptValue, rewrapping it as a
    // BSON Binary before handing it to the driver.
    decryptValue ({ encryptionClient, value }) {
      return Promise.await(encryptionClient.decrypt(new MongoInternals.NpmModule.Binary(Buffer.from(value))));
    }
  },
};
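
Note: Promise.await is Meteor’s fiber-based helper (from the promise package); it’s what lets encryptValue/decryptValue be called synchronously from method code.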

./server/csfle/index.js

import { Meteor } from 'meteor/meteor';
import { CsfleHelper } from './helpers';

export const csfleHelper = new CsfleHelper({
  kmsProviders: {
    aws: {
      accessKeyId: Meteor.settings.kms.accessKeyId,
      secretAccessKey: Meteor.settings.kms.secretAccessKey,
    },
  },
  masterKey: {
    key: Meteor.settings.kms.masterKey,
    region: Meteor.settings.kms.region,
  },
});
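
For the AWS provider, masterKey.key is the ARN of the KMS customer master key and region is the region that key lives in.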

./server/main.js

import { Meteor } from 'meteor/meteor';
import { csfleHelper } from './csfle/index';

Meteor.startup(async () => {
  const client = await csfleHelper.getRegularClient();

  await csfleHelper.ensureUniqueIndexOnKeyVault(client);

  client.close();
});

./server/methods/secrets.js

import { Meteor } from 'meteor/meteor';
import { MongoInternals } from 'meteor/mongo';
import { MongoID } from 'meteor/mongo-id';
import { csfleHelper } from '../csfle/index';
...

Meteor.methods({
  async insertSecret({ name, token }) {
    const { dbName } = MongoInternals.defaultRemoteCollectionDriver().mongo.client.s.options;

    const client = await csfleHelper.getRegularClient();
    const encryptionClient = csfleHelper.getEncryptionClient(client);

    const secretCollection = client
      .db(dbName)
      .collection('secret');

    await secretCollection.insertOne({
      _id: new MongoID.ObjectID()._str,
      name,
      token: csfleHelper.encryptValue({ value: token, encryptionClient }),
    });

    client.close();
  },

  async getSecretToken({ _id }) {
    const { dbName } = MongoInternals.defaultRemoteCollectionDriver().mongo.client.s.options;

    const client = await csfleHelper.getRegularClient();
    const encryptionClient = csfleHelper.getEncryptionClient(client);

    const secretCollection = client
      .db(dbName)
      .collection('secret');

    const secret = await secretCollection.findOne({ _id });

    const result = csfleHelper.decryptValue({ value: secret.token, encryptionClient });

    client.close();
    return result;
  },
});
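
Calling these from the client then looks something like this (a sketch; secretId is a placeholder):

Meteor.call('insertSecret', { name: 'smtp', token: 'super-secret' }, (err) => {
  if (err) console.error(err);
});

Meteor.call('getSecretToken', { _id: secretId }, (err, token) => {
  if (err) return console.error(err);
  console.log(token); // the decrypted token
});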

These days I always find myself using the native mongodb driver and not the Meteor mongo package; I honestly think they should deprecate it or something.


I don’t think inserting via native should mean that finding through the Meteor driver wouldn’t work; so long as you set it up to point to the same decryption keys it should work fine. What was the error you were getting?

The purpose of the package isn’t just to wrap around FLE, but to emulate auto FLE, where you specify which fields you want encrypted, and everywhere the collection is used those fields will automatically be encrypted/decrypted as appropriate - it’s interesting to see how much simpler the implementation is without that requirement!

Regarding needing a different config for AWS: you’d of course need to specify aws, as the readme suggests :slight_smile: the regular Mongo FLE documentation describes how to do this.

I can’t imagine them deprecating the Meteor version, at least not for a long time, as all the redis, collection2 and collection-hooks stuff would stop working (without equivalent wrappers).

I don’t think inserting via native should mean that finding through the Meteor driver wouldn’t work; so long as you set it up to point to the same decryption keys it should work fine. What was the error you were getting?

I don’t really remember; who knows, maybe it did work fine and something was messed up on my end.

The purpose of the package isn’t just to wrap around FLE, but to emulate auto FLE, where you specify which fields you want encrypted, and everywhere the collection is used those fields will automatically be encrypted/decrypted as appropriate - it’s interesting to see how much simpler the implementation is without that requirement!

Your package is definitely a lot more advanced and saves the developer from dealing with the intricacies. I didn’t mean to dismiss it or anything.

Regarding needing a different config for AWS: you’d of course need to specify aws, as the readme suggests :slight_smile: the regular Mongo FLE documentation describes how to do this.

I might be wrong, but AWS wouldn’t work with your package since aws4 is missing.

I can’t imagine them deprecating the Meteor version, at least not for a long time, as all the redis, collection2 and collection-hooks stuff would stop working (without equivalent wrappers).

I guess they won’t, but I was speaking more towards the everyday Meteor devs, who would have a much easier time developing apps if they used the native mongodb driver instead of Meteor’s. Also, if you were to use the native mongodb driver, you’d end up not using collection2 or collection-hooks and using Node.js alternatives like mongoose instead. Keep in mind that you’re mostly forced to do so, as Meteor doesn’t support transactions and you have to use .rawCollection for that and many other things. So sooner or later, you always end up moving off the Meteor mongo package at some point.
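
For example, the transaction dance via rawCollection looks roughly like this (a sketch; Invoices is a hypothetical collection):

import { MongoInternals } from 'meteor/mongo';

// Meteor's Mongo.Collection can't take a session, so the transactional
// writes go through rawCollection(), i.e. the native driver.
async function transferExample(fromId, toId, amount) {
  const { client } = MongoInternals.defaultRemoteCollectionDriver().mongo;
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await Invoices.rawCollection().updateOne({ _id: fromId }, { $inc: { balance: -amount } }, { session });
      await Invoices.rawCollection().updateOne({ _id: toId }, { $inc: { balance: amount } }, { session });
    });
  } finally {
    session.endSession();
  }
}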

Hmm, I am currently using it with AWS - I probably wouldn’t include aws4 as a dependency in the package, as it would bloat it for people using local or GCP keys - but since I use AWS elsewhere, the package is there. Here’s my config:

{
  keyVaultNamespace: `${Meteor.users.rawDatabase().namespace}.keyVault`,
  kmsProviders: {
    aws: {
      accessKeyId: Meteor.settings.aws.keyId,
      secretAccessKey: Meteor.settings.aws.secret
    }
  },
  masterKey: {
    region: "ca-central-1",
    key: Meteor.settings.aws.kmsKeyId,
    ...(Meteor.settings.aws.kmsEndpoint && { endpoint: Meteor.settings.aws.kmsEndpoint })
  },
  provider: "aws",
  keyAltName: "key",
  algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Random",
  safe: true,
  schema: {
    "config.smtp.password": true,
    "config.imap.password": true
  }
}

Funny you mention transactions - I’ve got a package to help with that too (not ready for public release) - but yes, I wish the Meteor driver were a little more passthrough, e.g., wrappers that make things synchronous but where every option gets passed through as-is - not being able to pass in a session is irksome.

I’ve never worked with mongoose before - does it provide the hooks/redis-type functionality? I thought it just handled the schema portion.


Huh, that’s weird. lmao, I guess I need to redo everything with your package :sweat_smile:

Please add the previous portion to your README; I’m sure it’ll help some people out.

Funny you mention transactions - I’ve got a package to help with that too (not ready for public release)

If it’s not ready yet, you could create a GitHub gist to explain how you do it.

I’ve never worked with mongoose before - does it provide the hooks/redis-type functionality? I thought it just handled the schema portion.

Hooks: Mongoose v6.1.2: Middleware
I don’t really understand what you mean by redis, but I guess you can do some caching using both:
mongoose-redis - npm
Building Cache Layer Using Redis and Mongoose - DEV Community

mongoose is collection2, collection-hooks and simpl-schema built into one, and it’s written in TS so you don’t have to add a separate types package. Again, another reason for me to slowly move away from some of Meteor’s ways of doing things.
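
To illustrate, a minimal mongoose sketch combining a schema with middleware (model and fields are hypothetical):

import mongoose from 'mongoose';

// Schema validation - roughly the collection2/simpl-schema portion.
const secretSchema = new mongoose.Schema({
  name: { type: String, required: true },
  token: String,
});

// Middleware ("hooks") - roughly the collection-hooks portion; runs before every save.
secretSchema.pre('save', function (next) {
  console.log(`saving secret ${this.name}`);
  next();
});

const Secret = mongoose.model('Secret', secretSchema);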

Sorry, I have a bad habit of using redis as a synonym for redis-oplog, which hooks into the Meteor collections - so by going raw, you’d lose the reactivity, which isn’t a problem for some projects but is for others.

In the past I’ve written a node package that wraps the driver to support synchronous (fiber-based) usage, redis-oplog notifications, or both, so you get the same behaviour in that regard for shared code. In hindsight I wish I’d gone the other way and converted the Meteor code for that portion to async.

I’m unsure if there’s a node equivalent to redis-oplog, but maybe the rise of change streams will render it obsolete.
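
For reference, a change stream with the native driver looks like this (a sketch; the db/collection names are placeholders and a replica set is required):

// Requires MongoDB 3.6+ and a replica set; `client` is a connected MongoClient.
const changeStream = client.db('meteor').collection('tasks').watch();
changeStream.on('change', (change) => {
  console.log(change.operationType, change.documentKey);
});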

I looked into change streams a couple of years ago. The problem at the time was that Mongo fell over when you had too many open (relatively few, perhaps 20), so you’d be reduced to observing the entire collection. Hopefully that has improved in subsequent versions of Mongo.
