In fact, that’s the recommended way!
I can tell you I am running on 2 Digital Ocean $20 servers to keep Crater up and running. I am doing 500k page views a month (200 concurrent users around the clock) on that setup, and things seem to be moving along just fine.
No, it’s not moving along just fine. It’s very laggy. I have to wait all the time when I’m using Crater: waiting for it to show me comments, when clicking the left menu (Trending, Recent, Best), even when ‘liking’ an article.
It’s really not that fast; even this forum is much faster. Interesting: how many page views does this forum get, and on how many servers?
So, does Meteor really scale? And how much will this scaling cost?
I think a more formally written version of this article wouldn’t be a bad fit for the Meteor guide, since it’s an often recurring question in the forums. A go-to reference would prevent us having to type the same answer (= There’s no real answer, blablabla) over and over again.
It’s exactly my point!
It seems that RoR projects scale better…
It very much depends on the use case: what your real-time requirements are, how much data is reused between clients, and so on. But there are proven techniques that help Meteor scale well.
Those of you who are using Meteor at scale, or considering moving to Meteor but worried about its scaling problems, are invited to join the discussion “Should we allow custom ddp messages? (a way to scale meteor apps)” and to watch https://www.youtube.com/watch?v=H_NgPmJHC_E#t=20m47s
Every language and every framework has different requirements. RoR projects do not scale better than Meteor projects. It’s a matter of what you do to ‘scale’, and how much resource you pour into ‘scaling’ your project / app.
Resource-wise, if everything were the same (concurrent users, server hardware, and setup, except where appropriate), I can pretty confidently say that Meteor requires much less in the way of resources than RoR (in my experience). But then, everything being the same is pretty much impossible. Even the same app will have different requirements throughout its lifetime (or even through the day).
Needless to say, generalizations are not always correct.
The server-side requirements of a project that is used 9 to 5 (say, an intranet) and of a public project that spikes every evening around 8 (but is otherwise mostly idle) will be very, very different.
What I’m getting at is that no one can say RoR scales better without knowing the exact server setup behind the project.
It is pretty normal for a 4xi7/8GB RoR setup serving 20 concurrent users to feel fast and a 1xi7/1GB server serving 10 concurrent users to feel ‘slow’. The numbers are just for easy comparison, of course, but you get the idea.
My point is, Meteor apps do scale, and pretty easily. Once you have set up your ‘scaling’, it’s as simple as adding another droplet/server to your setup.
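Since Meteor keeps a stateful DDP/WebSocket connection per client, “adding another droplet” usually means putting a sticky-session load balancer in front of the app servers. Here’s a minimal sketch of what that could look like with nginx; the upstream addresses and ports are placeholders, and `ip_hash` is just one simple way to get stickiness:

```nginx
# Hypothetical two-droplet setup; addresses/ports are placeholders.
upstream meteor_app {
  ip_hash;                 # keep each client pinned to one backend (sticky sessions)
  server 10.0.0.1:3000;
  server 10.0.0.2:3000;    # 'adding another droplet' = adding a line here
}

server {
  listen 80;

  location / {
    proxy_pass http://meteor_app;
    proxy_http_version 1.1;                      # required for WebSockets
    proxy_set_header Upgrade $http_upgrade;      # let the WebSocket upgrade through
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```

Without stickiness, a client reconnecting to a different backend loses its DDP session state, which is why plain round-robin balancing tends to misbehave with Meteor.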
One more point I wanted to mention: it’s usually more about the code you write than about the framework’s ability to scale. So shouldn’t we discuss not whether Meteor is scalable, but rather how to write Meteor code that is easier to scale?
These aren’t scaling issues, Kadira reports the average pub/sub response time is under 300ms. The issues you mention mostly relate to how the client side code is structured and how it works. I spent a little over 2 weeks trying to optimize the latest version of Telescope and fixing the rampant spam problem at the end of December. I just don’t have the time to produce content, keep up with client work, and work on Telescope at the same time.
The site has done 500k pageviews at peak in the last 60 days or so. Scaling on the server side will cost $45, but fixing client-side issues will cost a lot more. As with any ‘scaling’ effort, it is the code that matters, not the platform; that was the entire point of the article.
So, here we agree that the main problem of Crater.io (and, I believe, many other apps) is the code we write, not framework or server configuration limitations.
It would be interesting to learn more about Telescope’s client-side code problems and how you plan to fix them.
BTW, thanks for the time you spend writing such articles.
So I think the big ones revolve around the code that runs when you ‘upvote’ or ‘submit a post’: the Meteor method code running on the client seems to take a while, and it makes the site feel slow. I guess fixing that latency-compensation piece to feel faster would likely help a good bit.
I haven’t had time to dig into the slow loading of individual posts yet, so I’m not sure there.
The other big issue is that the ‘postsList’ subscription has low observer reuse because it has a timestamp in the query. I talked to @sacha about it, and we agreed to add a new ‘upcoming’ status for future posts and run a synced cron job to flip them to ‘posted’ once the date hits. I just haven’t had time to do it. Pretty sure that would be a huge speed gain on the front page, and it would likely drop memory usage on the server a good bit too.
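To illustrate the observer-reuse point with a simplified sketch (this is not Meteor internals; the `observerKey` helper here is hypothetical): Meteor can share one oplog observer between subscriptions only when their queries are identical, and a per-client timestamp makes every query unique.

```javascript
// Hypothetical helper that mimics how a query could be keyed for reuse:
// identical selectors produce identical keys and can share one observer.
function observerKey(selector) {
  return JSON.stringify(selector);
}

// With a timestamp in the selector, two clients arriving seconds apart
// produce different keys, so each gets its own observer:
const clientA = observerKey({ status: 'approved', postedAt: { $lte: new Date(2016, 0, 1, 12, 0, 0) } });
const clientB = observerKey({ status: 'approved', postedAt: { $lte: new Date(2016, 0, 1, 12, 0, 5) } });
console.log(clientA === clientB); // false -- no reuse

// The proposed fix: give future posts an 'upcoming' status and let a cron
// job flip them to 'posted' when their time arrives. The selector is now
// the same constant shape for every client:
const clientC = observerKey({ status: 'posted' });
const clientD = observerKey({ status: 'posted' });
console.log(clientC === clientD); // true -- one observer can serve both
```

That is the intuition behind moving the “is this post live yet?” decision out of the query and into the document itself.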
I have also found this to be true, but again it comes down to the code more than anything. Phoenix claims it can handle 2 million connections with Channels, but that number likely goes down as you make complex calculations happen on the server side. Every framework has tricks and ways to help scale up to more users. Meteor happens to lean heavily on Oplog atm.
Good Article! Thanks for posting it Josh! As mentioned above it will be interesting to see what possibilities open up with GraphQL and all the new databases and system architecture it brings with it.
Sure, thanks for reading it
Yeah, I’m in the same boat. Didn’t have time to manage support, bugs, docs, features, etc. and take an in-depth look at the performance issues at the same time.
@sashko in the Meteor guide regarding smart components, I noticed the mashing of reactive data sources in this one helper:
```js
Template.Lists_show_page.helpers({
  listArgs(listId) {
    const instance = Template.instance();
    const list = Lists.findOne(listId);
    const requested = instance.state.get('requested');
    return {
      list,
      // we pass the *visible* todos through here
      todos: instance.visibleTodos.find({}, { limit: requested }),
      requested,
      countReady: instance.countSub.ready(),
      count: Counts.get(`list/todoCount${listId}`),
      onNextPage: instance.onNextPage,
      // These two properties allow the user to know that there are
      // changes to be viewed, and allow them to view them
      hasChanges: instance.state.get('hasChanges'),
      onShowChanges: instance.onShowChanges,
    };
  },
});
```
Is there any difference if you broke all those into separate helpers instead of putting them all in one helper?
You would need to jump through some hoops to make the reactivity more fine-grained in this instance, since those are all passed in as one data context object to the child template. Splitting up the helpers wouldn’t …help.
One way to do that would be to make the properties of the data context functions instead of plain values, or to make the data context a reactive dict, which would make each property individually reactive.
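To sketch the “properties as functions” idea in plain JavaScript (the names here are illustrative; in Blaze the re-evaluation would be driven by Tracker rather than by manual calls): passing thunks instead of computed values means each data source is only evaluated when, and as often as, it is actually read.

```javascript
// Track how often one of the "reactive sources" is actually computed.
let recomputeCount = 0;

// Hypothetical data-context builder: every property is a thunk,
// so building the context itself computes nothing.
function buildDataContext() {
  return {
    list: () => ({ name: 'groceries' }),
    requested: () => {
      recomputeCount += 1; // count how often this source is read
      return 10;
    },
  };
}

const ctx = buildDataContext();
console.log(recomputeCount);  // 0 -- nothing has run yet
console.log(ctx.requested()); // 10
console.log(ctx.list().name); // groceries
console.log(recomputeCount);  // 1 -- only the accessed source ran, only once
```

With plain values, changing any one source forces the whole `listArgs` helper (and thus every property) to recompute; with thunks or a reactive dict, each property can invalidate independently.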
@tmeasday, what do you think? In this case, would it be worth it to split these data sources up to avoid extra recalculations?
By the way, I think this is off-topic, since scaling usually refers to the server side. Perhaps we should start a new thread about Blaze reactivity management.
Yeah, trying to get around to those when I can. Maybe in March, haha.
Any plans to add a section on scaling to the Meteor guide?