My experience with MongoDB Atlas - high data transfer usage

Just wanted to let everyone know my experience with MongoDB Atlas and see if anyone else has had the same experience. I’ve been running my DB through MLab for the last couple of years on their shared plan. This was fine while my total DB size was around 1GB (3GB total counting the replica set), but with some newly added features to the app the DB has grown to about 2GB (6GB total), which meant I had to compact the DB often to keep it under the 8GB limit. The next tier on MLab was $180/mo, over twice what I was paying, and they still were not on Mongo 3.4. I’d heard some good things about Atlas, where I could get 40GB of total space for $0.10/hr for a 3-member replica set, which was about equal to my MLab bill. Moving over was pretty easy using their built-in tools, and I relaunched all my microservices against the new DB without a hitch.

The problems started almost immediately, though. The first day I was hit with a $5 “AWS Data Transfer” fee. Atlas claimed that I had used over 66GB of data on that day alone, at $0.09/GB. I had never seen that kind of data usage with my app before. The app itself consists of 5 microservices and a handful of client connections. It does update data on a minute-by-minute schedule, but only a handful of items. The next day was worse: Atlas reported that I had used 99GB of data transfer, for $9. The day after that it was 129GB, and over the next four days it averaged between 99GB and 120GB a day. In the first week I had racked up $50 in data transfer charges. At that rate my usage fees would have been double or triple the cost of a dedicated plan on MLab. I asked Atlas about it, but they said it was my app that was generating that data.

Out of desperation I created my own 3-member replica set on Digital Ocean using their $20/mo servers (3 of them, for $60 a month), which was a lot easier to set up than I thought (in retrospect I should have done this sooner). Same level of security as my Atlas deployment. I moved my DB over. Atlas had reported a max throughput on the database of over 10MB/s, with an average of around 3.5MB a second (as reported using MongoDB Compass). On my DO deployment the max was around 500KB/s, with an average of only 61KB/s. That is many times lower than Atlas was reporting and more in line with what MLab was reporting.
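For anyone curious what “rolling your own” looks like in practice, here is a minimal sketch of initiating a 3-member replica set from the mongo shell once mongod is running on each droplet; the hostnames and replica set name below are placeholders, not my actual deployment:

```js
// Minimal sketch: run once against the member you want as the initial primary,
// after starting mongod on all three droplets with the same --replSet name
// (and a shared keyfile for internal authentication).
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db-1.example.com:27017" },
    { _id: 1, host: "db-2.example.com:27017" },
    { _id: 2, host: "db-3.example.com:27017" }
  ]
});

// Confirm that all three members reach PRIMARY/SECONDARY state.
rs.status();
```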

For now I’m going to stick with my own deployment. It’s much cheaper, and backups don’t cost an arm and a leg. It doesn’t have all the fancy reporting tools that Atlas has, but performance is the same or better.

Has anyone else seen these kinds of data transfer fees on Atlas? I want to recommend it to other clients, as I like that it is on a current version of Mongo and their toolset is better than MLab’s, but I can’t in good conscience do it if data usage fees will kill them.

6 Likes

I have experienced the same, with a small DB being used for an app in staging that has only been tested very sparingly. One month recently my bill shot up 300% with these phantom transfers. I will be engaging with Atlas customer service soon and will try to follow up here.

2 Likes

We’ve had Atlas on our possible upgrade list for a while now, so it’s always good to read about other people’s experiences. Thanks for sharing that @cspecter and @rtcarroll07; if they do get back to you with an explanation I’d love to hear it.

3 Likes

I had Atlas contact me yesterday, after I had cancelled my account. They wanted a post-mortem on what went wrong. It would have been nice of them to show that much interest when I was having the problem and still on their system, instead of just telling me “the problem is with your app”, but better late than never. I couldn’t really do much but let them know the setup of my app and the kind of performance I was getting on my new deployment. Comparing the readings from their MongoDB Compass tool (which is great and lets you connect to any MongoDB install) for my new DO deployment vs. what Atlas was telling me is like night and day. My DO deployment is averaging around 64KB/s, including traffic between the replica set members themselves. Atlas was reporting 10MB/s, more than a hundred times higher. I had migrated over from MLab, where I had used only about 20GB of data transfer in the whole previous month, whereas Atlas was reporting something like 99GB/day.

Long story short, they don’t know what the problem is. They seemed to think it was related to Meteor, but I don’t see how that is possible without the same problems showing up on other hosts. Or that it was some interaction with Digital Ocean, which also seemed wrong. I feel like it is a misconfiguration on their part. If you plan on using Atlas, be very cognizant of your data usage and check it every day. But, to tell the truth, it was so easy to set up my own replica set on Digital Ocean that I can’t see using any of these MongoDB hosts for anything but the largest or most demanding deployments in future projects.

@cspecter I have been using Atlas and Meteor since Nov 2016 and have not seen these types of issues. We have a heavy workload of batch processing that lives outside our Meteor deployment, with a database size of roughly 3GB. We moved over from Compose due to issues similar to the ones you mentioned at the top of the thread. We are fully deployed in AWS and leverage Atlas’ VPC Peering for the DB connection. If it helps, our current deployment of Meteor is 1.4.1.1.

I am on release 1.5. Maybe it really does have to do with Digital Ocean interacting with AWS, as they suggested. I’ll have to check that out when I get a chance.

We just tried migrating a Mongo database into MongoDB Atlas and ran into all of these same issues.

Our database is literally 2 MB at the moment. Yet, we got charged for 1.139GB of data transfer two days ago and 2.425GB of data transfer yesterday despite barely anyone using the app.

When we opened a support ticket this is what we were told:

As is described in the Data Transfer section of the Billing documentation, we charge for each hour of Atlas server usage only when servers are active (and at a reduced rate when they are subsequently paused). A group with no deployment (zero servers running) will not incur any charges. The hourly cost per server varies based on instance size, disk speed, region in which the instance resides, and the cloud service provider the cluster is deployed to. There is also a baseline level of communication between the nodes in your deployment that is used to maintain high availability, which will also factor into your Data Transfer costs even if you have no active operations being performed.

Within MongoDB Atlas, we have various tools that are monitoring each cluster and are communicating with our Atlas application instances that are hosted in AWS in the us-east-1 region, which accounts for the ~13KBps rate of network activity that you can see in the YOUR_DATABASE Network metrics for yesterday when the cluster was active. Accumulated over the course of 24 hours, this adds up to about ~1.139GB of data transfer (which matches the usage shown in the project’s Usage Details).

As far as I can tell, we seem to be getting charged both for the servers in the replica set communicating with each other and for the monitoring tools that make the Atlas dashboard possible. No wonder y’all have seen such extreme amounts of data usage.
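For what it’s worth, the math in their reply is just rate times time; here is a rough back-of-the-envelope version using the figures quoted above:

```js
// Back-of-the-envelope check of the support team's figures (decimal GB assumed).
const rateKBps = 13.2;                          // the "~13KBps" of monitoring/replication chatter
const secondsPerDay = 24 * 60 * 60;             // 86,400
const gbPerDay = (rateKBps * secondsPerDay) / 1e6;
console.log(gbPerDay.toFixed(3) + " GB/day");   // ~1.140 GB/day, in line with the ~1.139GB billed
```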

2 Likes

NodeChef MongoDB hosting does not charge for data transfer, offers advanced features, and is cost effective.

1 Like

Excuse me, I don’t have any experience with Mongo Atlas.
Is the price for creating a new cluster tied to a single database name?
Or can we create more than one database on one cluster?

Yeah, that is about what I experienced. I’ve tried out all the major MongoDB hosts and all have their flaws/extra costs. As I said in my OP, I’ve found the best solution so far is to roll your own replica set on DO, at least depending on your project, DB size and client. Some clients are willing to eat the added expense, which is fine, but the project I was working on above had some pretty large data sets (financial data) that would cost several hundred dollars (or more) a month on most of these services. It didn’t really take that much effort to get it running, and you get so much storage and bandwidth for cheap. You don’t even have to forgo a good management UI, as Mongo’s Compass app works with all Mongo deployments. I liked Atlas’ features, but their target customers must be large companies that can absorb all that added expense from the data transfer fees.

1 Like

Have you tried out the ScaleGrid MongoDB hosting platform?

  1. No data transfer charges
  2. No backup costs
  3. Full admin/oplog access
  4. Digital Ocean support
  5. Bring your own account on AWS/Azure

(Disclaimer - I am the founder of Scalegrid.io)

3 Likes

FWIW, my production app switched to Atlas in January. We have a specific use-case where we need to crank up to their higher performance servers at times (something we couldn’t do on Compose).

I do see the AWS Data Transfer (Same Region) and (Internet). The “Same Region” last month was 193.71GB @ $0.0100 / GB = $1.94 for the month (pennies a day). The “Internet” was 0.06GB @ $0.0900 / GB = $0.01 for the month. It’s always around this same cost each month. My app doesn’t have a ton of data transfer, so this seems reasonable.

My production database is 429MB. Total disk usage of the cluster is 5.8GB. Running on the M30 size.

Checking in with the same issue in 2019.

I have a little staging server with just me testing. Total DB size is around 12KB (dumped files), yet daily data transfer is 3GB.

Switching to an unmetered service like the two linked above.

Just checked my Atlas account and found the same issue for last month as well. An app that is used only by me, and sparingly, averages 3.5GB of transfer per day.
I’m wondering if this might be related to the oplog by any chance.
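If anyone wants to rule the oplog in or out, the mongo shell can show how fast it is being written; a quick sketch (run against the primary):

```js
// Print oplog size, used space, and the time window it covers.
rs.printReplicationInfo();

// Or pull the raw numbers and estimate the oplog write rate yourself.
const info = db.getReplicationInfo();
print((info.usedMB / info.timeDiffHours).toFixed(1) + " MB/hour of oplog writes");
```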

Yes, I tried ScaleGrid. I do have services/support with AWS etc., and I am always happy with all the support I get from various vendors.

ScaleGrid is the worst of all. I do not think they provide support from the U.S., based on the response and attitude I get from ScaleGrid.

I was charged more than $330.00. I was cheated by ScaleGrid: at the time of database creation I was informed it would be $120. I created a shard cluster, and nowhere was it mentioned that they would be charging $190 per.
Totally misleading and fraudulent. I asked them about it more than a week ago and there is still no response, and another question I asked got no response either. It seems the reviews about ScaleGrid are misleading. I might initiate a fraud dispute against ScaleGrid with my credit card company.

ScaleGrid proudly says they provide 24-hour support; I think they are just trying to say that there are 24 hours in a day.

First provide at least 8 hours of support, say. They should be ashamed to claim they provide 24-hour service.

Atlas charges you for all your traffic, so if you have a replica set you get dinged for all the traffic between all those servers. If you have a large DB (mine was 4GB at the time) it can cost you a bundle. $200-$300 monthly bills on small projects just didn’t make sense, not even for clients. I ended up rolling my own Mongo servers on DO, which was much less expensive, but a management nightmare. That is what finally soured me on Meteor and Mongo.
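The arithmetic is brutal at the per-day volumes Atlas was reporting for me; roughly:

```js
// Rough monthly cost at the transfer volumes and rate reported earlier in the thread.
const gbPerDay = 100;                 // Atlas was reporting ~99-120GB/day for my cluster
const pricePerGB = 0.09;              // the $0.09/GB AWS Data Transfer (Internet) rate
const monthlyCost = gbPerDay * pricePerGB * 30;
console.log("$" + monthlyCost.toFixed(0) + "/month");   // ~$270/month, i.e. that $200-$300 range
```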

I’ve ported all my projects over to Firebase/App Engine now. Pure Node on the backend, or serverless. React on the front end. While running App Engine servers is more expensive than DO, it’s so much easier and cheaper on the DB and authentication side of things. Plus their serverless offering has improved a lot, and ingress and egress in the same region are free. My DB is now a combination of a server (or servers) running LokiJS backed by Cloud Firestore. Super fast queries.
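In case that pattern sounds vague, here is a minimal sketch of the idea; the collection name, field names, and query are illustrative assumptions, not my actual schema:

```js
// Minimal sketch of an in-memory LokiJS cache warmed from Cloud Firestore.
// Collection and field names here are illustrative, not a real schema.
const admin = require("firebase-admin");
const Loki = require("lokijs");

admin.initializeApp();                 // uses default credentials on App Engine
const loki = new Loki("cache.db");
const items = loki.addCollection("items", { indices: ["symbol"] });

async function warmCache() {
  const snapshot = await admin.firestore().collection("items").get();
  snapshot.forEach(doc => items.insert({ id: doc.id, ...doc.data() }));
}

warmCache().then(() => {
  // Queries now run entirely in memory, with Firestore as the durable store.
  console.log(items.find({ symbol: "AAPL" }).length + " matching docs");
});
```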

I like Mongo a lot, but I could never find a hosting solution that could deal with a DB of any real size. It’s great for a small project with a < 500MB DB, but anything larger and you get killed on network traffic. And they definitely don’t care about the little guys; they are mainly going after large corporate clients.

1 Like

Question to all here, where have your Meteor servers been hosted when this has been happening? Is it on Galaxy/something else?

And/or do you all use Redis Oplog (in case this is oplog related)?

I’m wondering if there may be a common denominator to this…

1 Like

A quick note for any future people who end up here – it does seem to be the specific Atlas/Digital Ocean/Meteor cocktail that screws things up, and Atlas’s support is clueless regarding what’s happening (and doesn’t seem to realize that something strange is happening).

My experience was a series of 5 Meteor apps (hosted on Digital Ocean), running off of a single mLab database (hosted on Azure). The standard data output was 50 KB/s.

I transferred to an Atlas database (hosted on AWS), and with no other changes to my apps, the data output jumped to 1.3 MB/s, and stayed there. No changes to my application usage, just 26x the reported traffic.

My solution was to migrate my apps from Digital Ocean, over to AWS EC2 instances. As it turns out, this solved the data-output issue – I’m running the same 5 apps, with the same load, and the database is back to reporting roughly 60 KB/s of outbound data.
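If anyone wants to verify this on their own cluster, the server exposes raw network counters via serverStatus; a rough way to sample the outbound rate from the mongo shell:

```js
// Sample the network counters twice and divide by the elapsed time.
const before = db.serverStatus().network;
sleep(10 * 1000);                                   // wait 10 seconds
const after = db.serverStatus().network;
const kbPerSec = (Number(after.bytesOut) - Number(before.bytesOut)) / 1024 / 10;
print("~" + kbPerSec.toFixed(1) + " KB/s outbound from this node");
```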

6 Likes

This is quite interesting and weird :astonished:

1 Like

Am I right that it’s safe to use Atlas with Galaxy? Can someone confirm this combination works?