Phoenix as a Meteor alternative

So write the app in Meteor, then when you hit #1 on Product Hunt, hire geniuses to rewrite it in whatever: Phoenix, Go, C, Assembly, etc.

Facebook started out on PHP simply because it was a comfortable language. It could have been written in C even back then and would have started with better performance. None of that matters. It's more important that you can add features and evolve your product quickly in the beginning. Once the time comes you could even invent your own framework to solve your specific problems.

1 Like

In general I agree that you shouldn't micro-optimize for things that may never happen. However, going too far in that direction can be dangerous. I naively thought that MDG would fix scaling before I hit issues. I'm only offering anecdotal evidence from my own experience, but here's my take:

I had the same attitude when I thought I would only have a few hundred concurrent users. However, business requirements changed before I even got the project launched: now I need to handle several thousand concurrent users from the start, and within a year tens of thousands of concurrent users is a reasonable expectation (because the customer drives their own traffic to the app).

This puts me in a pickle. It's also the reason I made the meteor_elixir repo, as a spike to avoid rewriting the whole app before it's even launched. It basically just cuts out subscriptions and delegates them to Phoenix, while the rest is handled by Meteor.

Second use case:

An app built in Meteor. The data transfer is very minimal/optimized because of the mobile constraint. We thought we wouldn't have a lot of traffic, but alas, one day we had 400 concurrent users blasting the system. Crap, I had to spin up 4-5 instances to keep it from crashing (5 keeps 404 errors to little/none). Five 1 GB servers cost $285 on Modulus, and I couldn't host on DO at the time (mup didn't exist).

In this case Meteor was able to keep up, but it was inefficient and expensive. It was for a 'free' app, so there was no revenue to pay for hosting (that's another topic :laughing:).

I was able to reduce costs by cutting realtime on some features and polling every 30s instead (even though getting a new chat message instantly would be a better experience).
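
To make that concrete, here's a rough client-side sketch of that kind of cost-cutting (not the actual app code; messages.recent is a hypothetical Meteor method name):

// client-only cache instead of a reactive subscription
const Messages = new Mongo.Collection(null);

function refreshMessages() {
  // 'messages.recent' stands in for whatever method returns the recent documents
  Meteor.call('messages.recent', { limit: 20 }, (err, docs) => {
    if (err) return;
    docs.forEach(doc => Messages.upsert(doc._id, doc));
  });
}

refreshMessages();                              // initial load
Meteor.setInterval(refreshMessages, 30 * 1000); // re-fetch every 30s instead of staying realtime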

Phoenix is more efficient for these use cases, and even though it's more work upfront, it pays off over time. There's no doubt that you can build an app faster with Meteor; there's a tradeoff to be made. No free lunch.

11 Likes

@SkinnyGeek1010 I wonder if it makes sense to just build a wrapper in Node that emulates Meteor for Methods and Publications but speaks to a Phoenix layer instead of doing OpLog, Mongo, etc.

A framework built on Phoenix that fires up on the server, listens via an API for which database entities to create, monitor, etc., and passes the data back to the Meteor server, which then processes it and streams the data down the wire. If the bottleneck is OpLog processing, this would solve the issue, no?

The winner of the hackathon made an app that pulls data from REST endpoints and delivers a DDP connection.

Here there is no pulling. We are listening to channels between servers (Phoenix and Meteor) and returning a DDP WebSocket connection from Meteor on the server to the client.

Couldn't it be completely automated, requiring the developer to think strictly in JavaScript, with the proper complement written in Phoenix?

1 Like

While spiking out this solution I was trying to do something similar. One of the options was to subscribe to Phoenix on the Meteor server inside the publish function, and use the low-level publish API to send data down the socket.
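
Roughly, that approach looks like this (a sketch, not the actual spike code; phoenixChannel stands in for a Phoenix channel joined from Node, and 'chats' is an illustrative collection name):

// inside a Meteor publication, relay documents arriving from a Phoenix channel
// using the low-level publish API (this.added / this.ready / this.onStop)
Meteor.publish('chats', function () {
  const sub = this;

  phoenixChannel.on('new_msg', doc => {
    const { _id, ...fields } = doc;
    sub.added('chats', _id, fields); // push each incoming doc down the socket
  });

  sub.ready();

  sub.onStop(() => phoenixChannel.leave());
});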

However, you're opening yourself up to a myriad of race conditions and limiting how much one server can scale. Node is really great at stateless requests like a JSON API, but as one of the earlier videos showed, it's hard to scale stateful WebSocket connections.

It's quite possible to serve a million concurrent users with one Phoenix server and a few Meteor servers (enough to basically log in the user and return their profile). Offloading the initial Meteor index.html to a CDN can free up Meteor resources (I've benchmarked ~67 pages a second on a DO 1 GB box just serving the index page).

If you proxied Phoenix through Node, your Meteor cluster count would skyrocket, because you can only scale so high vertically with Node.

Erlang was built from the ground up to handle this, so it makes sense to let it handle it. Ironically this makes the system much, much simpler than trying to proxy it through Node/Meteor.

At the end of the day I decided that all the extra work doesn't buy you anything and doesn't make life easier. Both ways require knowing enough Elixir and Phoenix to set up a channel and query data (a bit less if you're using Rethink, as it will push new changes down).

On the Meteor client side, all you need to do is:

// we'll use a local collection to store incoming data
Chats = new Mongo.Collection(null);

// `channel` is a Phoenix channel created on the client (socket setup omitted here)
channel.join()
  // prime Minimongo with the last 20 chats on join
  .receive("ok", resp => {
    resp.initialChats.forEach(doc => {
      Chats.insert(doc);
    });
  });

// ...

channel.on("new_msg", doc => {
  // upsert, because messages we sent ourselves would be duplicated if we just inserted
  Chats.upsert(doc._id, doc);
});

The amount you'd have to learn is minimal, and it's far easier than trying to write your own Redis pub/sub to scale Livequery. Like I said, no free lunch :smile:

5 Likes

Thanks for the details. I still don't see why not to "cross that bridge when needed." Like you said, the server end of Meteor is often fairly straightforward. It's great that Phoenix is there, and it would be even better if there were a simple, easy-breezy way to scaffold a Meteor-Phoenix project where you write your publications and methods in Phoenix and subscribe in Meteor using the same /Client /Server folder structure, and then when you "Foobar Deploy" it does what it needs to do.

I don't mind learning Erlang at all; I'm sure it's a fine language.

Is there a sample project with Phoenix as the backend and Meteor on the front with instructions on how to get the Phoenix part up and running on Mac? I’d give that a try for sure!

[quote=“Babak, post:273, topic:13519”]
Thanks for the details. I still don’t see why not “cross that bridge when needed.”[/quote]

Most Meteor projects will never get to that bridge. If they do, it's not a realistic expectation to monetize those few hundred users and hire 'geniuses' to rewrite your app from scratch in the few short days before performance complaints on Twitter kill it…

Running two backends simultaneously is somewhat silly. Choose one, and use Occam's razor on the other. Skinnygeek had to try it because his project was too far along.

If you chose Node because of skillset limits, use one of the myriad npm libraries that integrate Redis/Postgres/Rethink pub/sub, feeding reactivity through Redux/React on the client.

That path will let you at least scale horizontally into the thousands before it explodes, once you need to share state between nodes through socket.io. This limitation was discussed in the video Skinny mentioned.

Link to video

For brand new apps where scalability is a requirement, Phoenix on the backend and Redux/React on the frontend is the path of least resistance/time/money/complexity.

4 Likes

A lot of the time waiting until you get there is just fine. I just chose wrong. Twice. :smile: For most people the efficiency/cost problem will creep up gradually rather than the service going down outright. That's an easier path to be on, since migrating then only costs more money instead of losing users. I guess my main point was to plan ahead and leave enough buffer for the unexpected (that was my problem).

and it would be even better if there was a simple easy-breezy way to scaffold a Meteor-Phoenix project where you write your publications and methods on Phoenix and subscribe in Meteor using the same /Client /Server folder structure and then when you “Foobar Deploy” it does what it needs to do.

You can almost do this. Meteor doesn't have any scaffolding generation, but it's very fast in Phoenix (with passing tests!). Once you learn how it works you can wire the two together in less than a couple of minutes. Deploying is as simple as a Heroku push… it's actually easier than Meteor to deploy (on Heroku; live code patching with Exrm is a bit tricky at first).

Is there a sample project with Phoenix as the backend and Meteor on the front with instructions on how to get the Phoenix part up and running on Mac? I’d give that a try for sure!

If you use the meteor_elixir repo above, you can pull it down, run mix deps.get to pull in packages, and then mix phoenix.server to get it running. However, if Elixir is not installed yet, it can be installed here. It also requires Mongo to be running in the background (mongod). Once it's running you can meteor run and the chatroom should be working :smiley:.

However, it may be easier to run through the Up and Running guide first, as this will get a hello-world app running (albeit with Postgres by default).

5 Likes

Why do you sound so condescending? Do you have a human communication 'limit' or what? Re-read your comment and see if it sounds convincing; if it does, stop talking to me.

Reddit itself has pretty much crashed a few times; people still use it. Twitter too. I've had Reddit crash my server before. No big deal, I just scaled up and that was that.

Facebook was written in PHP because it was convenient. GitHub could have been written in C++, but it was written in Rails. Instagram started with Django. So what?

Being lazy can be a programmer's virtue; this is one reason why getting started with Meteor was wisely made easy. I'm not going to jump through hoops to learn Phoenix, especially with people like you trying to promote it.

Maybe you guys should streamline your presentation and get the elevator pitch in order and stop compensating for that marketing ‘limit’ by attacking people who actually warm up to the idea of trying your Meteor forum spam product.

3 Likes

I know all about Oplog and its “limitations”. You shouldn’t assume someone doesn’t know about something just because they don’t mention it.

You're also the one who responded to me, in point of fact (trolling?), and you are now resorting to ad hominem.

Very mature!

Whatever…Phoenix Good, Meteor Bad. I get it.

I'll just stick to the fact that they both have identified use cases, thanks.

2 Likes

Some of these guys don’t sound like guys you’d want to have a beer with.

3 Likes

Any thoughts on Meteor vs Phoenix pros/cons for a p2p-type app?
1-to-1 or 1-to-small-group, not 1-to-1000s++

2 Likes

Cool, thanks, good info.

2 Likes

It’s hard to say without knowing more of the business requirements. Would the p2p app be like a chat app or something similar where messages are passed around? Or realtime updates? Would an SQL or graph database (like Neo4J) be a better fit than a document database?

Also, if you have a large number of 1-to-1 publications, that can eat up resources too, though not for a while.

If the app was more of a startup where you’re trying out a new product to see if there’s market fit I would almost certainly choose Meteor because you can iterate quickly to see if it’s going to work, and likely it will change a lot.

However, if one knew each framework equally well, and was also really quick with a non-Blaze frontend (like React/Angular/etc.), then I think it would be a draw. React + Redux replaces Blaze + Minimongo, channels replace publications (and are simpler IMHO), and Ajax (or channel messages) replaces methods.

Some areas that don't overlap are authentication and mobile support. There's no easy Cordova support without Meteor, but you get easy React Native support with the existing Phoenix API.

Authentication is tricky. If you need anything other than a standard username/password or Facebook OAuth, then you'll spend more time doing it with Meteor than with a lower-level framework.

It was frustrating to build an SMS PIN-code signup/login with Meteor because auth is so hidden. You can't work with the hooks very easily (in Meteor) and they're not documented well.

If you haven't used JWT tokens yet (they're new), then it will take some time to get used to them, but they're very, very similar to Meteor's _loginToken that's stored in local storage. You log in, the server hands you a token, you save it in storage, and you pass it back with each request to prove the user is logged in.
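
As a rough sketch of that flow (the '/api/login' endpoint and 'authToken' key are made-up names, not anything prescribed by Phoenix or Meteor):

import { Socket } from 'phoenix'; // phoenix.js client

async function login(username, password) {
  const res = await fetch('/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password })
  });
  const { token } = await res.json();        // the server hands you a token...
  localStorage.setItem('authToken', token);  // ...and you keep it, like Meteor's _loginToken
  return token;
}

// pass it back on each request/connection to prove the user is logged in,
// e.g. as a socket param when connecting to Phoenix
const socket = new Socket('/socket', {
  params: { token: localStorage.getItem('authToken') }
});
socket.connect();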

I'm getting close to the point where I could build a React/Redux/Phoenix app as fast as a Meteor app. However, if you're building a throwaway prototype or hackathon app… Meteor will win hands down (meteor add accounts-ui is hard to beat! Unfortunately none of my clients want to use it).

Sorry for the rambling but hopefully that answers your question!

5 Likes

Not rambling at all. Thanks for your reasoned and intelligent perspective.

2 Likes

Two more things I just thought of that might tip the scales toward Phoenix.

If you enjoy the 'Unix way', Unix piping, and (pragmatic) functional programming, the server side might be easier to write in Elixir (dare I say more fun). This functional style could be a love-it-or-hate-it thing, so that's something to consider.

To play devil's advocate, Elixir is new and there isn't a library for everything like there is on npm, so you may need to write some packages yourself.

The second thing is microservices. If that's your jam, then OTP will be your best friend. It's the most impressive piece of machinery I've seen in a while.

I tried using headless Meteor apps for microservices and I regret it. The 80% use case works great, but the edges will burn you. It's just not set up for that.

I personally think it's better to build a monolith that is abstracted out enough that switching to a microservice is only a few hours' work. Having several services just means more work in the beginning.

2 Likes

@SkinnyGeek1010 Thanks a lot. At my recent local meetup, a couple of well-known devs came and they were hyped about using Meteor for building MVPs. It looks like they have some personal ideas they want to flesh out in a short amount of time, and it's hard to beat Meteor's productivity.

But they all asked me the same question: scaling. And you were the first person I thought of. So my question is:

  • Do you think using Phoenix + React + Redux will be better than Meteor by itself when it comes down to a project that has scaling needs?

I just want to use the best tool for each job!! I'm not really up for a this-vs-that supremacy race!!!

Also, if Meteor can't scale properly… what is this: http://joshowens.me/how-to-scale-a-meteor-js-app/? I am a bit confused!

I've been going over these sorts of issues, so it's great to have your practical perspective. ty

1 Like

It's good to have another platform to learn. I might try it within the month and see where it leads me. My only concern though (and this is personal, at least for me) is that it really looks like Sails JS to me, with routes and controllers and stuff, and of course I am a JavaScript kind of guy (and with ES2015, I am gonna stay a little while longer). Although I did love a modularized model, view, controller design… I began moving away from it since I left Sails JS…

I would love to see how real-time this is for me, though. Some noob questions: does it render React components on the server side as well? Can we do GraphQL stuff? How does "real-time" data work if, at first glance with the routing and all, it still uses HTTP requests?

2 Likes

Do you think using Phoenix + React + Redux will be better vs Meteor by itself when it comes down to a project that has scaling needs ?

It really depends on what the scaling needs are. Most Meteor apps can scale horizontally until the oplog gets backed up; then you have to get creative.

How many concurrent users will it need to support? Or roughly how many per day?

A big factor is how much realtime you really need. The more realtime, the harder it is to scale. If you used only Ajax calls (or even Meteor methods) to fetch data, you could scale Meteor 100x more easily. I was able to drastically reduce the number of servers by using Meteor methods to fetch data.
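
As a rough illustration of that trade (the collection and method names here are made up), a method fetch holds nothing open on the server once it returns:

// server: a plain method instead of a reactive publication
Meteor.methods({
  'posts.list'(limit) {
    check(limit, Number);
    return Posts.find({}, { sort: { createdAt: -1 }, limit }).fetch();
  }
});

// client: fetch on demand into a client-only cache (PostsCache = new Mongo.Collection(null))
Meteor.call('posts.list', 20, (err, posts) => {
  if (!err) posts.forEach(p => PostsCache.upsert(p._id, p));
});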

How much can a subscription be re-used? For example, the re-use rate for a blog is high because a subscription to post '123' can be re-used for everyone. Fetching the details of the current user cannot.

Also, getting enough traffic to have these issues is a great problem to have (nonetheless still a problem). On average, being on HN for a day will yield ~50,000 visits. At peak you'll have 200-400 concurrent users and on average maybe 20-30. Getting featured on news sites/blogs helps, but it's very spiky traffic. It will take a lot of work to build up that traffic.

I guess my point is that for quite a while you can scale Meteor by taking bits of it out (like realtime).

Also if Meteor can’t scale properly…what is this http://joshowens.me/how-to-scale-a-meteor-js-app/ ? I am a bit confused !

That's a great article! As he mentions, oplog tailing helps a ton with CPU usage; definitely use that. At some point the oplog doesn't scale, but it takes a lot of writes to cause issues.

A simple blog can handle 700 concurrent users, which is nice, but the publication re-use is high and there isn't a lot going on (not a ton of comments overall). How many you can get per server depends on the app. One government site was using Meteor and could only get 10 users per server. Their bill was several thousand per month… but they didn't care because it was free money. I'm sure this could have been optimized.

In short, if I was building an Amazon app I wouldn’t use Meteor. If I was building an app that had existing traffic (say I was building a v2.0) and it was thousands of concurrent users, I would not choose Meteor. The time you save up front will not matter once you’re dealing with that kind of traffic.

For most cases, just plug away with Meteor and build cool stuff :thumbsup:

2 Likes

That's how I started out, as a weekend project. It ended up making me write (much) better JavaScript, and Elixir completely turned my understanding of programming inside out. I think learning Lisp would do the same thing, but for me Elixir's syntax is so much more pragmatic. I would highly recommend Learn Elixir if you like screencasts, and Dave's Elixir book. The first 4-5 chapters are almost life-changing :laughing:

Phoenix does have a router for the API and HTML routes; neither is required and neither of those is realtime. It does make a pretty mean API though!

Channels are how it handles realtime (via WebSockets by default). These are pretty low level right now, and Chris has some cool things lined up for the future to add more functionality. Pairing this with RethinkDB changefeeds is really powerful stuff. Channels are handled outside of the router and are authenticated once on join.
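
For a sense of what that looks like from the browser, here's a minimal sketch using the phoenix.js client (the topic and event names just mirror the chat example later in this reply; the token param follows the usual Phoenix convention):

import { Socket } from 'phoenix';

const socket = new Socket('/socket', { params: { token: window.userToken } });
socket.connect();

// authenticated once on join, then the connection stays open
const channel = socket.channel('chatroom:lobby', {});
channel.join()
  .receive('ok', resp => console.log('joined', resp))
  .receive('error', resp => console.log('unable to join', resp));

// realtime happens outside the HTTP router: push events up, listen for broadcasts
channel.push('new_msg', { body: 'Hello' });
channel.on('msg', payload => console.log('broadcast received', payload));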

Although I did love a modularized model, view, controller design… I began moving away from it since I left Sails JS…

Phoenix uses Ecto, which has a really interesting way of dealing with data. Everything in the model is a pure function, and it uses changesets to validate the data and return what should go into the database. A separate layer actually persists the data, so the 'model' can be used with or without a DB to verify data consistency. It's hard to explain, but it's very different from Mongoose and Active Record.

The controllers are very much… controllers. One nice thing is that they are just functions: they take in a connection, transform it, and return the connection. By convention all side effects happen in the controller; this leaves everything else as a pure function, which pays off in debugging and testing… pure functions are as easy to test as an add(x, y) function.

Views are very different. They could be renamed the 'serialization layer'. With JSON they allow you to munge or omit data to be sent to the client. With HTML they allow you to add helpers using data gathered in the controller (like Blaze helpers, actually). For HTML views there are 'templates' which take on that role.

So far I've only used Phoenix as an API, so I haven't used their server-side HTML stuff.

Just some noob questions though, does it render React components on the server side as well?

Not that I know of, though some Python libraries do this somehow. It seems like it would be easier to put a Node server in front and have it handle that, fetching data via the Phoenix API. Personally I just bootstrap the data or serve the index page from a CDN.

Can we do GraphQL stuff?

There is GraphQL stuff available. Chris and José also talked about experimenting with a GraphQL-like system for Phoenix that takes the ideas from it but builds a tighter integration (sort of like how Falcor follows the same principles).

How does “real-time” data work if at first glance with the routing and all, it still uses HTTP Requests?

It doesn't use HTTP, but here's the gist (without a realtime DB):

  • Robert enters chatroom:lobby
  • Joe enters chatroom:lobby
  • Joe types in "Hello Robert" and presses send
  • Joe's client sends 'new_msg' to the server with a payload
  • the server matches a function for 'new_msg' and it sends broadcast("chatroom:lobby", "msg", payload)
  • Robert's and Joe's clients have a JS on('msg', func...) callback which fires and adds the message to the chat window
  • all users in the lobby (both Robert and Joe) see the view refresh in real time
  • Robert types in "Hello Joe" and the cycle repeats

Things to note: there is no latency compensation; it has to be added on the frontend if wanted (Redux makes this easy). The on callback doesn't know how to add the chat to the UI; it could just be some jQuery or Redux code there.
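
For what it's worth, hand-rolled latency compensation on top of a channel plus a local collection can be as small as this sketch (sendMessage and the pending flag are hypothetical; Chats is a client-only Mongo.Collection like the one earlier in the thread):

function sendMessage(channel, body) {
  const tempId = Random.id();                         // temp id for the optimistic doc
  Chats.insert({ _id: tempId, body, pending: true }); // shows up in the UI immediately

  channel.push('new_msg', { _id: tempId, body })
    .receive('ok', () => Chats.update(tempId, { $set: { pending: false } }))
    .receive('error', () => Chats.remove(tempId));    // roll back if the server rejects it
}

// when the broadcast arrives (for us and everyone else), upsert by _id so the
// optimistic copy is replaced rather than duplicated
channel.on('msg', doc => Chats.upsert(doc._id, doc));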

This makes for a very, very simple yet very powerful system. It works across a cluster of Phoenix servers, as they handle that for you.

If you're using RethinkDB you wouldn't have to have the server watch for a new_msg coming in from the client; the changefeed would fire a callback and Phoenix would broadcast it out. This makes it simpler and more reliable.

3 Likes