How much server power for 1500 concurrent users each day?

Hi guys, I'm new to Meteor and very excited to use it for my new application. The app doesn't do any intense data crunching, just displaying data and generating reports.

Based on your experience, I would love to know how much computing power I should provision for my app.

Every day, around 75k additional Mongo records will be created.

Each Mongo document will contain 15 key/value properties, all plain strings, totalling about 300 characters per document.

I can’t afford to have any issues after it goes into production.

Question 1: Does using a persistent socket.io connection put more load on the server? (Of course it means more I/O operations.)

Question 1a: Does having more I/O operations mean more EC2 costs?

Question 2: I've read some things about GraphQL. Is it true that using GraphQL with Meteor will help with performance?

Question 3: Can you please suggest how many EC2 servers would be necessary for backend computation, in terms of:
a. How many Ubuntu EC2 servers?
b. How much RAM/memory and disk space?
c. Anything else?

Question 4: Same as above for the Mongo server:
a. How many Ubuntu EC2 servers with EBS volumes?
b. How much RAM/memory and disk space?
c. Anything else?

Question 5: Any other related tips would be helpful too.

Hi, I guess there isn't a simple answer, but anyway:

  1. You could disable ws and use HTTP requests instead with Meteor.
  2. Depends on the EC2 instance type.
  3. Your task could be done in a field of application structure. There are a lot of hints and solutions to make a Meteor app more performant.
  4. Take a look at 3.
  5. Look at 3 again.

I don't understand this: "Your task could be done in a field of application structure"?

> You could disable ws

ws = web server?

web socket connection

Wow, I don't know how you can disable ws and use HTTP requests instead with Meteor?

So don't use publish/subscribe and just use HTTP requests with the axios package?
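
Something like this, maybe? (Just my guess at what the non-reactive version could look like; it assumes the webapp package on the server and axios on the client, and the /api/tasks path and Tasks collection are names I made up.)

```js
// server: a plain HTTP endpoint instead of a publication
import { WebApp } from 'meteor/webapp';
import { Tasks } from '/imports/api/tasks'; // hypothetical collection

WebApp.connectHandlers.use('/api/tasks', async (req, res) => {
  // rawCollection() is the underlying Node MongoDB driver collection,
  // so this read bypasses Meteor's reactivity machinery entirely
  const tasks = await Tasks.rawCollection().find({}, { limit: 100 }).toArray();
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(tasks));
});
```

```js
// client: fetch once when needed, no subscription kept open
import axios from 'axios';

async function loadTasks() {
  const { data } = await axios.get('/api/tasks');
  return data;
}
```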

If you are serious about that, and are just beginning development, you would do very well to start with JMeter right away.
Here's an 18-minute intro and overview video that you can put on 2x speed and miss nothing, watching it all in 9 minutes.

Periodically during development, you should rent space in one of those pay-by-the-minute cloud VPS services, install JMeter in an image, then clone it and run ten copies, each simulating 300 simultaneous users. JMeter will do a good job of showing you your limitations with a large selection of charts.


A few projects ago I used the DISABLE_WEBSOCKETS=1 env variable when running the Meteor server, but I don't think it's a good idea. Anyway, you should dig more and find an answer :) It's a bit complicated to advise you because I know nothing about your project and tasks.

Lol, my project hasn't even started; I'm just playing with it and getting login working. Later on, the real project is to use React on the front end, have thousands of tasks loaded in the UI, and let people change the data on those documents (tasks), create comments, and share those task objects with each other.

So feel free to tell me how I can disable ws; it would be nice to know what effect it will have. How are you using it?

Speaking from personal experience here.

Websockets and reactivity should be fine if you scale your deployments well. What do I mean by that?

  1. Run multiple instances of the production Node bundle created by meteor build, each on a different port.
  2. Load-balance these instances with something like Nginx, proxying them to port 80 or wherever else you want to serve your app.
    Doing it this way, you can scale anywhere from a relatively simple foundation (one host) to a whole server farm behind a proxy.

You also need to keep optimization in mind when you're designing your application. Websockets and full-stack reactivity are great when you need them; anywhere you use them without really needing them, you're wasting resources. Really, unless it's critical to a feature's functionality, you should stick to non-reactive fetches and/or API calls.
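
For example, a non-reactive fetch can be just a Meteor method the client calls on demand. Rough sketch; the collection, method, and field names here are made up:

```js
// imports/api/reports.js (server): a one-shot read, nothing stays reactive
import { Meteor } from 'meteor/meteor';
import { Reports } from '/imports/api/reports'; // hypothetical collection

Meteor.methods({
  'reports.forDay'(day) {
    return Reports.find(
      { day },
      { fields: { title: 1, total: 1 }, limit: 200 }
    ).fetch();
  },
});
```

```js
// client: call it when the dashboard mounts or the user hits "refresh"
import { Meteor } from 'meteor/meteor';

Meteor.call('reports.forDay', '2024-01-15', (err, reports) => {
  if (!err) renderDashboard(reports); // hypothetical render function
});
```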

Also, if you’re calling your database with tons of reads for your dashboards and reporting, you might consider implementing an in-memory cache like Redis so you can mitigate the impact of slow db operations on your overall performance.
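
A rough sketch of what that could look like as a cache-aside pattern (assuming the redis npm package; the method, key, and collection names are made up):

```js
// server: cache-aside in front of an expensive dashboard aggregation
import { Meteor } from 'meteor/meteor';
import { createClient } from 'redis';
import { Records } from '/imports/api/records'; // hypothetical collection

const redis = createClient({ url: 'redis://localhost:6379' });
Meteor.startup(async () => {
  await redis.connect();
});

Meteor.methods({
  async 'dashboard.stats'() {
    const cached = await redis.get('dashboard:stats');
    if (cached) return JSON.parse(cached); // fast path: no Mongo read at all

    // slow path: run the aggregation, then cache the result for 60 seconds
    const stats = await Records.rawCollection()
      .aggregate([{ $group: { _id: '$status', count: { $sum: 1 } } }])
      .toArray();
    await redis.set('dashboard:stats', JSON.stringify(stats), { EX: 60 });
    return stats;
  },
});
```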


WS is not a thing you should worry about. It's more important to mind how you can scale the app later.
Anyway, you could build a prototype with all of Meteor's out-of-the-box features and release it to production very fast.
Then, if your product is OK, you can reimplement the stuff with plain methods, with Asteroid on the client, or even try Apollo (GraphQL), and so on.


I've been down this road a few times, so I think I can help here (though my max number of concurrent users was around 700, with about 4000 over a 2-hour period).

Meteor can handle more than 1500 concurrent users if it's set up efficiently. The first problem isn't necessarily scaling but cost per user (due to having many instances running).


> Does using a persistent socket.io connection put more load on the server? (Of course it means more I/O operations.)

If you're looking at websockets vs. HTTP long polling, then websockets will be more efficient for both the client and the server by eliminating overhead. Node can handle quite a few websocket connections, and if the raw number of concurrent connections becomes the bottleneck, you can use different languages to alleviate that. Usually in a Meteor app it's the RAM consumption (from publications) that brings down the server, not the raw connection limit.
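
(If you do keep publications around, the usual way to keep that RAM in check is to publish only the fields and documents each client actually needs. Rough sketch below, with made-up collection and field names:)

```js
// server: publish a narrow cursor, not whole documents
import { Meteor } from 'meteor/meteor';
import { Tasks } from '/imports/api/tasks'; // hypothetical collection

Meteor.publish('tasks.inbox', function () {
  if (!this.userId) return this.ready();
  // every published field is copied into the server-side merge box for
  // every connected client, so keep the cursor as small as possible
  return Tasks.find(
    { assigneeId: this.userId, archived: false },
    {
      fields: { title: 1, status: 1, updatedAt: 1 },
      sort: { updatedAt: -1 },
      limit: 50,
    }
  );
});
```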

I also wouldn’t turn off websockets unless there is a clear reason to. They will be much more efficient than long polling and diffing (you might as well have Apollo long poll instead if you’re not using websockets).


> Question 1a: Does having more I/O operations mean more EC2 costs?

Typically this means a more expensive database plan, which may not be on EC2. If you're just using Meteor methods you can do quite a lot of I/O. If you're using a lot of heavy subscriptions, then it can add up quickly, increasing costs.

Don't use Meteor subscriptions unless you have to. For example, if you don't need the UI to update the instant a different user updates a record, then you can get away with much cheaper solutions such as a plain Meteor method or a GraphQL mutation. If you need the UI to update because the user modified their own document, you can typically do that very easily with Apollo.
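
For instance, with today's Apollo Client on a React front end, a mutation that returns the changed fields is usually enough for the user's own edit to show up, because the client-side cache is updated by id. Sketch only; the schema and component names are invented:

```js
// client: the user's own edit appears without any subscription, because
// Apollo updates its normalized cache for the returned { id, title }
import React from 'react';
import { gql, useMutation } from '@apollo/client';

const RENAME_TASK = gql`
  mutation RenameTask($id: ID!, $title: String!) {
    renameTask(id: $id, title: $title) {
      id
      title
    }
  }
`;

function RenameTaskButton({ task, newTitle }) {
  const [renameTask] = useMutation(RENAME_TASK);
  return (
    <button onClick={() => renameTask({ variables: { id: task.id, title: newTitle } })}>
      Save
    </button>
  );
}
```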


> Question 3: Can you please suggest how many EC2 servers would be necessary for backend computation, in terms of:
> a. How many Ubuntu EC2 servers?
> b. How much RAM/memory and disk space?
> c. Anything else?

Totally depends on each application. The only decent way to simulate it is to load test it, and with SPAs that tends to be fairly expensive. If you're on AWS you can probably use an elastic cloud and scale up.

Hope this helps :thumbsup:


Is it 1500 concurrent (same time) users or 1500 users per day?


No, 1500 concurrent is 1500 people using the app at the same exact time. 1500 per day might mean you have less than a hundred concurrent during a peak usage time (really depends).

With websockets it means you have 1500 subscriptions and an unknown amount of data going back and forth. It could mean that a leaderboard-type subscription causes all 1500 people to receive updates at once; more typically, though, a group of people will receive an update because something else changed.

With Meteor methods or REST you would typically measure this in requests per second… for example, how many people are requesting their user document at this very second.

Apollo is a bit different because it can be like REST, where you only fetch data as needed. However, if you subscribe to a feed of data, it asks the database for results every X seconds, times (up to) 1500 (once per user). This is why long polling is less scalable in some cases.
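
(Concretely, that polling is the pollInterval option on an Apollo Client query. The sketch below uses a made-up query; at a 5-second interval, 1500 open dashboards works out to roughly 300 query executions per second on the server.)

```js
// client: Apollo long polling; each mounted component re-runs the query on
// its own timer, so server load scales with connected users / pollInterval
import React from 'react';
import { gql, useQuery } from '@apollo/client';

const TASK_FEED = gql`
  query TaskFeed {
    tasks {
      id
      title
      status
    }
  }
`;

function TaskFeed() {
  // 5000 ms => about one request per user every 5 seconds
  const { data, loading } = useQuery(TASK_FEED, { pollInterval: 5000 });
  if (loading || !data) return null;
  return data.tasks.map((task) => <div key={task.id}>{task.title}</div>);
}
```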

I know what it means, but the thread title talks about 1500 concurrent per day. 1500 concurrent means plenty of considerations; 1500 per day is not a problem for a single medium-sized instance.


Oh gotcha, I misread :slight_smile:

This thread may help. It’s about testing using distributed PhantomJS instances on AWS EC2. I was doing this to try to reproduce and solve a problem I had.

https://forums.meteor.com/t/poor-galaxy-meteor-performance-serving-small-bursts-of-users-load-test/38671