Phoenix as a Meteor alternative

@SkinnyGeek1010 Thanks a lot. At my recent local meetup, a couple of well-known devs came by and they were hyped about using Meteor for building MVPs. It looks like they have some personal ideas they want to flesh out in a short amount of time, and it's hard to beat Meteor's productivity.

But they all asked me the same question: scaling. And you were the first person I thought of. So my question is –

  • Do you think using Phoenix + React + Redux will be better than Meteor by itself when it comes down to a project that has scaling needs?

I just want to use the best tool for each job! I'm not really up for this vs. that supremacy race!

Also, if Meteor can't scale properly… what is this http://joshowens.me/how-to-scale-a-meteor-js-app/? I am a bit confused!

I've been going over these sorts of issues, so it's great to have your practical perspective. Thanks!

1 Like

It's good to have another platform to learn. I might try it within this month and see where it leads me. My only concern, though (and this is personal, at least for me), is that it really looks like Sails JS to me, with routes and controllers and stuff, and of course, I am a JavaScript kind of guy (and with ES2015, I am gonna stay a little while longer). Although I did love a modularized model-view-controller design, I began moving away from it after I left Sails JS…

I would love to see how real-time this is for me, though. Just some noob questions: does it render React components on the server side as well? Can we do GraphQL stuff? How does "real-time" data work if, at first glance with the routing and all, it still uses HTTP requests?

2 Likes

Do you think using Phoenix + React + Redux will be better than Meteor by itself when it comes down to a project that has scaling needs?

It really depends on what the scaling needs are. Most Meteor apps can scale horizontally until the oplog gets backed up; then you have to get creative.

How many concurrent users will it need to support? Or roughly how many per day?

A big factor is how much realtime you really need. The more realtime, the harder it is to scale. If you used only Ajax calls to fetch data (or even Meteor methods), you could scale Meteor 100x more easily. I was able to drastically reduce the number of servers by using Meteor methods to fetch data.

How much can a subscription be re-used? For example, the re-use rate of a blog is high because a subscription to post '123' can be re-used for everyone. Fetching the details of the current user cannot.

Also, getting enough traffic to have these issues is a great problem to have (nonetheless still a problem). On average, being on HN for a day will yield ~50,000 visits. At peak you'll have 200-400 concurrent users, and on average maybe 20-30. Getting featured on news sites/blogs helps, but it's very spiky traffic. It will take a lot of work to build up that traffic.

I guess my point is that for quite a while you can scale Meteor by taking bits of it out (like realtime).

Also, if Meteor can't scale properly… what is this http://joshowens.me/how-to-scale-a-meteor-js-app/? I am a bit confused!

That's a great article! As he mentions, oplog tailing helps a ton with CPU usage, so definitely use that. At some point the oplog doesn't scale, but it takes a lot of writes to cause issues.

A simple blog can handle 700 concurrent users, which is nice, but there the publication re-use is high and there isn't a lot going on (not a ton of comments overall). How many you can get per server depends on the app. One government site was using Meteor and could only get 10 users per server. Their bill was several thousand per month… but they didn't care because it was free money. I'm sure this could have been optimized.

In short, if I were building an Amazon-scale app I wouldn't use Meteor. If I were building an app that had existing traffic (say I was building a v2.0) with thousands of concurrent users, I would not choose Meteor either. The time you save up front will not matter once you're dealing with that kind of traffic.

For most cases, just plug away with Meteor and build cool stuff :thumbsup:

2 Likes

That's how I started out with Elixir, as a weekend project. It ended up making me write (much) better JavaScript and completely turned my understanding of programming inside out. I think learning Lisp would do the same thing, but for me Elixir's syntax is so much more pragmatic. I would highly recommend Learn Elixir if you like screencasts, and Dave's Elixir book. The first 4-5 chapters are almost life changing :laughing:

Phoenix does have a router for API and HTML routes; neither is required, and neither of those is realtime. It does make a pretty mean API though!

Channels are how it handles realtime (via WebSockets by default). These are pretty low-level right now, and Chris has some cool things lined up for the future to add more functionality. Pairing this with RethinkDB changefeeds is really powerful stuff. Channels are handled outside of the router and are authenticated once, on join.

Although I did love a modularized model-view-controller design, I began moving away from it after I left Sails JS…

Phoenix uses Ecto, which has a really interesting way of dealing with data. Everything in the model is a pure function, and it uses changesets to validate the data and return what should go into the database. Another layer actually persists the data, so the 'model' can be used with or without a DB to verify data consistency. It's hard to explain, but it's very different from Mongoose and Active Record.
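
Ecto itself is Elixir, but the changeset idea — a pure function that validates incoming params and returns a description of the change without touching the database — can be sketched as a rough JavaScript analogy. Everything here, names included, is illustrative rather than Ecto's actual API:

```javascript
// Rough JavaScript analogy of the changeset pattern (NOT Ecto's API).
// A pure function takes existing data plus incoming params and returns
// a plain object describing what is valid to persist.
function postChangeset(post, params) {
  const allowed = ['title', 'body']; // whitelist of permitted fields
  const changes = {};
  const errors = [];

  for (const key of allowed) {
    if (params[key] !== undefined) changes[key] = params[key];
  }
  if (!changes.title && !post.title) {
    errors.push({ field: 'title', message: "can't be blank" });
  }
  return { data: post, changes, errors, valid: errors.length === 0 };
}

// A separate layer decides whether to actually persist the changeset;
// the validation itself never touches a database.
const cs = postChangeset({}, { title: 'Hello', spam: 'nope' });
// cs.valid === true, and cs.changes contains only whitelisted fields
```

The point of the pattern is that validation is trivially testable: no mocks, no DB, just input and output.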

The controllers are very much… controllers. One nice thing is that they are just functions: they take in a connection, transform it, and return it. By convention, all side effects happen in the controller; this leaves everything else a pure function, which really pays off in debugging and testing: pure functions are as easy to test as an add(x, y) function.

Views are very different; they could be renamed the 'serialization layer'. With JSON they let you munge or omit data before it's sent to the client. With HTML they let you add helpers using data gathered in the controller (much like Blaze helpers, actually). For HTML views there are 'templates' which take on that role.

So far I've only used Phoenix as an API, so I haven't used their server-side HTML stuff.

Just some noob questions though, does it render React components on the server side as well?

Not that I know of, though some Python libraries do this somehow. It seems like it would be easier to put a Node server in front and have it handle that, fetching data via the Phoenix API. Personally I just bootstrap the data or serve the index page from a CDN.

Can we do GraphQL stuff?

There is GraphQL stuff available. Chris and Jose have also talked about experimenting with a GraphQL-like system for Phoenix that takes the ideas from it but builds a tighter integration (sort of like how Falcor follows the same principles).

How does “real-time” data work if at first glance with the routing and all, it still uses HTTP Requests?

It doesn't use HTTP, but here's the gist (without a realtime DB):

  • Robert enters chatroom:lobby
  • Joe enters chatroom
  • Joe types in Hello Robert and presses send
  • Joe's client sends 'new_msg' to the server with a payload
  • the server matches a function on 'new_msg' and sends broadcast("msg", "chatroom:lobby", payload)
  • Robert and Joe’s clients have a JS on('msg', func...) callback which fires and adds the message to the chat window
  • All users in the lobby (both Robert and Joe) see the view refresh in real time
  • Robert types in “Hello Joe” and the cycle repeats
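
The steps above can be sketched as a toy in-memory model in plain JavaScript. This is not the phoenix.js API; the function names are made up just to show the shape of the cycle:

```javascript
// Toy in-memory model of the channel flow above -- NOT the phoenix.js
// API, only an illustration of the pub/sub cycle.
const rooms = {}; // topic -> list of connected client callbacks

// A client "enters" a room by registering an on('msg', ...) callback.
function join(topic, onMsg) {
  (rooms[topic] = rooms[topic] || []).push(onMsg);
}

// The "server" side: matches the 'new_msg' event and broadcasts the
// payload to every client in the topic.
function handleIn(topic, event, payload) {
  if (event === 'new_msg') {
    for (const cb of rooms[topic] || []) cb('msg', payload);
  }
}

const seen = [];
join('chatroom:lobby', (ev, p) => seen.push(`Robert saw: ${p.body}`));
join('chatroom:lobby', (ev, p) => seen.push(`Joe saw: ${p.body}`));

// Joe presses send -> his client pushes 'new_msg' to the server,
// and both Robert's and Joe's callbacks fire.
handleIn('chatroom:lobby', 'new_msg', { body: 'Hello Robert' });
```

In real Phoenix the topic matching, socket handling, and cross-node broadcasting are all done for you; the client only wires up `on` callbacks and `push`es events.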

Things to note: there is no latency compensation; that has to be added on the frontend if you want it, and Redux makes it easy. The on callback doesn't know how to add the message to the chat; it could just be some jQuery or Redux code there.

This makes for a very simple yet very powerful system. It works across a cluster of Phoenix servers, as they handle the broadcasting between nodes for you.

If you're using RethinkDB, you wouldn't have to have the server watch for a new_msg coming in from the client; the changefeed would fire a callback and Phoenix would broadcast it out. This makes it simpler and more reliable.

3 Likes

Ahhh… so we’re going to use socket.io? I’ve used it before when I was still running Sails JS and Express JS… sweet.

Hmmmm, this one I think is still debatable: should the HTML already be in place when served, or should it be rendered in the browser? But I do agree that if the HTML and the renderer are already in the browser, then it would just need the data to be loaded.

As for routing: if, let's say, I use React along with React Router (or maybe Angular and Angular routing), would it clash with Phoenix routing? (I had experience with Angular clashing with PHP routing…)

As for concurrent users… this one is a bummer for Meteor, maybe because of the oplog… although @arunoda's benchmarks using Meteor Cluster give hope of still handling thousands of users… and I think in one use case I saw a project (forgot the name of the app) with 6000 users running on an 8-core machine (forgot the memory, but I think it was 16 GB).

But just imagining using socket.io on Phoenix, and maybe using Redis (which I have used before) or RethinkDB (really new to me), you would just use Phoenix to update all users who need to be updated, like this (correct me if I am wrong):

  • Robert updates data blog1
  • Phoenix sees the update (via RethinkDB) and updates Sarah, who is looking at blog1

For the second part, you have to write the code that sends a broadcast with the new data. Right?

Phoenix ships its own JS client; that's what you will use on the client side, not socket.io.

There’s a good overview of how it’s all stitched together.

The important thing is that you can not only scale vertically with SMP, as in the blog post, but scale horizontally as well: multiple nodes in multiple data centers, transparently and out of the box. It's built on top of Erlang, which was specifically designed to be distributed.

Elixir Distributed Messaging

1 Like

Ohhh, I saw the phoenix-js file. That kind of explains things. Hmmmm… too bad I've invested a lot already in Meteor (well, maybe not a lot, but I already bought things… :stuck_out_tongue:). But I am willing to learn and earn with Phoenix as well, knowing that it somehow answers the scalability problem.

2 Likes

I watched that video, and in the end his solution was basically standard pub/sub. Why would you have every client subscribe to things they don't need? As soon as you try to be smarter with your subscriptions (according to the video), you can easily do 1000+ concurrent users on Meteor.

It's not that a client subscribes to things they don't need; it's that every write is observed and parsed by all server nodes, regardless of whether it is pertinent to the clients connected to that node.

Once your write volume saturates the CPU with oplog tailing on one node, horizontal scaling buys you nothing: all nodes are equally saturated.

Hansoft's custom solution involved using a Redis store as a message broker between the nodes and the database. If you log onto IRC, Zimmy is quite open to discussing their solution.

4 Likes

Interesting, maybe I’ll have to head over there!

It's definitely debatable! IMHO it's a tradeoff with resources. If server-side rendering were cheap, I think I would always render on the server. However, it's extremely CPU-intensive, and you'd need a cluster of Node servers to handle the traffic a single non-SSR server could.

There's also the issue of complexity, which costs money (more dev time) and means less time shipping product. I find it non-trivial to fully server-side render the page.

If you have a large payload, it's very possible for the client to get the HTML quickly but, when they click a button, have nothing happen until the JS is parsed.

The other extreme is to serve them an empty HTML file and let the app build itself up on the client. This file can be cached on a CDN, so that request can be very fast. If you have code splitting, so the user only downloads the JS for the 'feed' page, for example, the total load time is fairly low.

To speed this up a bit: if you serve this page from your own server, you can embed 'bootstrapped' data, such as the user's feed posts, and prime the store with it before ever hitting the API.
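
A minimal sketch of that bootstrapping idea, assuming a made-up global name (`window.__BOOTSTRAP__`): the server embeds JSON in the page, and the client uses it as the store's initial state instead of making a first round trip.

```javascript
// Hypothetical sketch of priming a store with server-embedded data.
// The server would render something like:
//   <script>window.__BOOTSTRAP__ = {"feed": [...]};</script>
// On the client, read it once and use it as the initial state.
function initialState(globalObj) {
  return globalObj.__BOOTSTRAP__ || { feed: [] };
}

// Simulate what the browser would see after parsing the page:
const fakeWindow = { __BOOTSTRAP__: { feed: [{ id: 1, title: 'First post' }] } };
const state = initialState(fakeWindow);
// With Redux this would feed into createStore(reducer, state),
// so the first render already has the feed posts.
```

If the bootstrap global is missing (e.g. a CDN-cached empty shell), the app simply falls back to fetching over the API as usual.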

A hybrid approach could render just the shell of the UI (with Facebook as an example, say, the feed minus the posts) so users see something immediately, with a loading spinner for the posts in the middle. This could also use bootstrapped data so there's just a bit of time from initial render to posts.

If SEO is critical, you could always serve a cached static version to bots only, with Nginx/HAProxy. Basically you dedicate one node to crawling your own site and rendering static HTML files for every page; then, when a bot comes along, the load balancer redirects it to this static version. For me this is easier to set up than SSR.

For me, it all comes down to: is it worth weeks of work to reduce page load by 500 ms? Is the app behind a login? Could I get close by using code splitting? Is SEO critical? (Google can still crawl a Meteor app without spiderable; some other crawlers can't yet.)

Routing can definitely clash if you don't set up the backend routes appropriately. If you're worried about routes clashing, you can use a catch-all route that sends every unmatched route to the action that serves up your React/Angular app. For instance, at the bottom of your Phoenix routes file you can have something like this:

scope "/", ApiTest do
  pipe_through :browser # Use the default browser stack

  get "*path", PageController, :index
end

The "*path" route will match any route and forward it to the index action of the PageController. If you serve your React app from this action, then you shouldn't have any trouble with routes clashing.

I hope that made sense… If you're not familiar with the Phoenix router this may be a bit confusing. Let me know if you have questions and I'd be happy to help!

2 Likes

Woohoooo thanks bud.

So if I understood it right…
/page-1 and /page-2 will be served by the front-end routing mechanism?

I don’t get the *path part. Is it /path?

I’m still new at Elixir and Phoenix and studying it would take time (I just got a book on Elixir yesterday).

1 Like

Well, I am using Brad Frost's idea of making reusable components for every part of the page. It would really work well if I used React (although I did it with Blaze templates… I just need to change my paradigm). But the thing is: is Phoenix CPU-intensive when rendering HTML, and would it be a bottleneck?

It could be good to just serve a blank HTML page and load a cached version of your minified JS renderer from a CDN. But what if JS is turned off? (That's where I really wish React existed on the server side in Phoenix, so the page would already have some bootstrapped data in it.)

1 Like

In my own benchmark testing, a 2 GB, 2-core DigitalOcean box running Phoenix could serve about 58-60k pages per minute (without a DB) before exhausting RAM. If you get into that territory, putting Nginx in front would free up more resources (same as with Meteor).

It could be good to just serve a blank HTML page and load a cached version of your minified JS renderer from a CDN.

This is my preferred approach if the requirements allow it. That way you can deploy the frontend separately, and you get great performance around the world.

But what if JS is turned off? (That's where I really wish React existed on the server side in Phoenix, so the page would already have some bootstrapped data in it.)

I have a noscript tag that tells the user they need to enable scripts on mydomain.com to use the site. It takes an enormous amount of work to get everything working without any JS (from my limited experiments). If the requirements demand no-JS support, then a server-side app with JS sprinkles is what I reach for (traditionally Rails, but Phoenix would be great at that).

That being said, if someone could make a React app with SSR that wasn't much more work, I would be down!

Actually, I saw two Phoenix packages for this in the works, but they're still in alpha (they don't have JSX support as of now). Hmmmm… I guess I'll stick to the front end for now, given that I don't have a project to work with and I am still experimenting.

1 Like

Ya, so *path is just the syntax for saying "match any route"; that's how you do wildcard routes in Phoenix. It essentially pushes all routes to your React app so that React Router (or whatever router you're using) can take over. You'd use this same approach with Rails or Laravel or any other backend framework with a built-in router.

I heard from the author that Nginx is actually a bottleneck for Phoenix if you want to serve HTML faster :wink:

2 Likes

I made an app with Meteor (Blaze, flow-router…) that already has 2,500 users. It works with Cluster on 4 small instances, peaking at a max of 100 sessions. CPU is at most 5-6%. The DB is on Compose.io. All good so far.

The app will be used massively for an event in 1 month. I estimate 1,500-2,000 concurrent connections for 1 day, and back to 100 after that.

I'm reading this topic from the start and I'm progressively freaking out about Meteor scalability.
I'm not an experienced developer (1 year) and my app is quite basic, but it needs realtime.

Based on your experience and your many tests (glad the Meteor community has people who are always challenging performance), what would you suggest as the best way to handle this situation in a short time frame?

Meteor + Phoenix as Pub/sub?

Thanks for your input!

2 Likes