Here is your CMS on top of Erlang. Enjoy!
Yeah, proves my point I guess.
True enough. Though perhaps you missed the WhatsApp clone tutorial on the Meteor blog.
Haha, that’s a pretty bad UI. Back-end designers…
I don’t want to beat a dead horse but the main point of Erlang/Phoenix is that backend. I’m sure someone will come out with a CMS with a nice front-end. Meteor is a very nice full stack solution.
Speaking of bad production value this is a great video:
https://www.youtube.com/watch?v=xrIjfIjssLE
Haha, yes, as Erlang unleashes all its concurrency power to the content management world.
Not meaning to kill the discussion, but let’s get back on topic. I’m not trying to play the blind Meteor advocate here, only to point out that Meteor, of course, does not make sense for many of the at-scale applications that Erlang, and with it Phoenix, was designed for. Meteor as a Phoenix alternative? Come on.
Looking at the business cases for both frameworks, the title “Phoenix as a Meteor alternative” becomes narrow, since there should be very little overlap. You would need a personal relationship with the DevOps gods to build highly concurrent systems with Meteor, or way too much budget to use Erlang to power steady-state website content.
Meteor is the microwave of kitchen equipment here.
I agree to some extent: most people will never need to support millions of users, but we could all use a more efficient tool. Also important, it may be better to pay more for hosting to get easier realtime (Meteor).
However, I’m afraid there is a lot of overlap as far as scale goes. You don’t have to have 50k concurrent users to need Phoenix… a lot of Meteor apps I’ve heard of can only handle fewer than 50 concurrent users before needing additional boxes.
The best I’ve done was a little over 150 concurrent users per Modulus servo (512 MB) for a particular app (tuned publications and limited reactivity). The peak for the day was 400, so not massive scale, but I needed 4 servers to stay afloat. At $60 a month this isn’t a big deal, but scaling larger would be costly.
I mean, really they are apples and oranges and hard to compare side by side, but if you had the same app on both systems you could use fewer resources. I guess the point I’m trying to make is that Phoenix isn’t just for mega scale; it’s aimed at normal apps, but it happens to serve them fairly efficiently.
Benchmarking a REST Phoenix API on a 2 GB, 2-core DigitalOcean droplet with Blitz.io yielded around 58k hits in 1 minute, or roughly 1,000 hits per second, before getting errors (23). Max latency was 16 ms and best was 14 ms.
That’s plenty for my needs and offers lots of headroom without needing 4-6 Node boxes running in a cluster.
Really, for me the most interesting thing is how they handle (soft) realtime data (quite differently). Also not really discussed, but the ecosystem is very interesting as well… there seems to be a high priority on quality and testing, which is pretty neat.
I’m most interested in bringing in some things that could help Meteor, whether it’s the language or the framework(s) (though they’re different beasts of course). I’m still developing on Meteor for most projects, but high-traffic sites may get Phoenix until MDG provides a more efficient solution (which sounds like it’s in the works).
@SkinnyGeek1010 When do you think you will release your REST API package for Meteor? And is it solid, production-ready? Could I compare it to an Express REST API? Does it scale the same way if I only use the REST API without any DDP?
Within this week. Working on the Meteor-Elixir repo above took more time than I estimated.
The basic router/controller/connection functions will be released first, and that’s 90% of it; the models/changesets will come next (models are done).
Does it scale the same way if I only use the REST API without any DDP?
Yep! It scales the same as an express app so you can just horizontally scale without any issues.
Could I compare it to an Express REST API?
Very much so. It’s running the connect-router under the hood (which could be swapped easily). It’s basically like Express with ES6 and a functional mindset. There’s a `conn` data structure that gets passed around and transformed, but under the hood you can just modify `conn.req` and `conn.res` directly if there isn’t a function that’s included. The `conn` eventually gets returned in the controller and rendered into JSON/XML/etc.
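To give a rough idea, a controller in that style might look something like this (just a sketch; the package isn’t out yet, so helpers like `render` and `notFound` and the route-param lookup are guesses, not the final API):

```js
// Hypothetical controller in the conn-transforming style described above.
// `conn` wraps the underlying Node req/res; each function returns a new conn.
const PostsController = {
  index(conn) {
    const posts = Posts.find({}, {limit: 20}).fetch(); // a normal Meteor collection
    return render(conn, {data: posts});                // serialize to JSON and finish
  },

  show(conn) {
    // fall back to the raw request when no helper exists
    const id = conn.req.params.id;
    const post = Posts.findOne(id);
    return post ? render(conn, {data: post}) : notFound(conn);
  }
};
```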
Can’t wait to start back up on it. I’ll have some basic benchmarks on DO, Heroku, and Modulus to see what the differences are in hosting (and perhaps Galaxy).
Wow! Honestly, after this, there will be no reason to use another Node framework. This is awesome!!!
So it will support any Connect/Express middleware? What about auth?
Thanks a lot for your work, this will make Meteor way more useful for a lot of people.
Is there a way to build a desktop app using the same components (Blaze, React, Angular, whatever) used for a Meteor app, but with Phoenix?
With Meteor, it’s possible to use Electron for this purpose… but with Phoenix?
While you could likely install Phoenix on OS X, a highly concurrent but single-user desktop app makes little sense. It would be better to use Electron or Node-Webkit for that purpose.
When you consider the latency under load, those numbers are very impressive. I assume that was with stock Elixir Postgres behind the REST endpoint?
@SkinnyGeek1010 so with your Meteor - Elixir combo are you predicting a much higher simultaneous user count? I’m prepping an app and although I don’t anticipate a lot of users, I’d rather be prepared and have a solid back-end that can handle the load (just in case).
I’d rather not migrate to Phoenix (but I will if I have to!) completely so if this Meteor / Elixir solution works almost as well as a Phoenix deployment then I’ll be one happy chap.
Gee, thanks! It fully supports Connect/Express middleware, no munging required. I originally thought about keeping the `conn` data structure for the middleware, but that just adds unneeded wrapper packages. So now you can just:
const proxy = Restful.require('proxy-middleware');

applyMiddleware(
  acceptJson,
  proxy,
  rejectThing({foo: true})
);
Considering keeping the original `transformConnection` function name, but `applyMiddleware` is more obvious (I wanted to really emphasize the functional style of just transforming data).
It includes HTTP basic auth that will hook directly into the Meteor password accounts, as well as a generic basic auth. For the rest I’ll be leaning on PassportJS for other authentication via middleware. Eventually it’ll have a generator to set up your own basic OAuth server… for now just a snippet in the guides will do.
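Hooking Passport’s HTTP basic strategy into the same `applyMiddleware` chain could look roughly like this (an untested sketch; the account check is simplified and `Accounts._checkPassword` is an internal Meteor API):

```js
const passport = Restful.require('passport');
const {BasicStrategy} = Restful.require('passport-http');

// Verify basic-auth credentials against Meteor's password accounts.
passport.use(new BasicStrategy((username, password, done) => {
  const user = Meteor.users.findOne({'emails.address': username});
  if (!user) return done(null, false);
  const check = Accounts._checkPassword(user, password); // internal Meteor API
  return done(null, check.error ? false : user);
}));

applyMiddleware(
  passport.initialize(),
  passport.authenticate('basic', {session: false}),
  acceptJson
);
```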
It’s just a backend. You can consider it a direct competitor to Sails.JS. So you can still package up your frontend the normal way and then connect to the Phoenix API.
[quote="jacobin, post:72, topic:13519"] When you consider the latency under load, those numbers are very impressive. I assume that was with stock Elixir Postgres behind the REST endpoint? [/quote]
Indeed! They actually measure latency in microseconds in development.
This test didn’t have a db connection because it wouldn’t execute the JS (was using Mongo). However, I did another test with just a static JSON response and it was slightly better but pretty much the same.
In theory, if you offload the Meteor HTML (the view-source version) to a CDN and follow Josh’s upcoming blog post, you could take a ton of load off the Meteor server, and then in my example at least (no auth/separate rooms) the bottleneck would be Phoenix. However, in real life the Meteor app would be handling the DDP connections and the user publication.
… I’d rather be prepared and have a solid back-end that can handle the load (just in case).
I’d rather not migrate to Phoenix (but I will if I have to!) completely so if this Meteor / Elixir solution works almost as well as a Phoenix deployment then I’ll be one happy chap.
Honestly I would just stick to Meteor unless you know you’ll have lots of traffic. The repo is meant to be more of an escape hatch for users who are bursting at the seams with oplog. I’m not there yet, but a change in plans meant that I’ll be receiving two orders of magnitude more traffic than anticipated.
Technically you could use it in a brand-new app by using Meteor to authenticate and handle the user publication, then doing all other realtime data through Phoenix, but then you lose the ability to use packages that rely on your collections (like TabularTables or something).
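Just to sketch what the client side of that split could look like (names, topics, and the token handoff here are made up, and the Phoenix side would still need to verify the token):

```js
import {Socket} from 'phoenix'; // the phoenix.js client

// Meteor keeps handling accounts; hand its login token to Phoenix so the
// channel can identify the user (verification on the Phoenix side not shown).
const socket = new Socket('wss://api.example.com/socket', {
  params: {token: Accounts._storedLoginToken()} // internal Meteor API
});
socket.connect();

// All the non-user realtime data flows through a Phoenix channel instead of DDP.
const feed = socket.channel('feed:public', {});
feed.on('new_item', (item) => FeedItems.insert(item)); // e.g. a client-only collection
feed.join()
  .receive('ok', () => console.log('joined feed'))
  .receive('error', (err) => console.log('join failed', err));
```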
Also I enjoy learning new programming things in my spare time so it’s not too much extra effort for me.
Okay, the setup was pretty easy; webpack and React integration went pretty smoothly. What about SSR?
The options I see:
- Prerender or similar service (isn’t that hard and is free if you don’t have many pages)
- Make all templates in eex and give them to bots (duplicate all logic and templates, kinda sux)
- Use a JavaScript interpreter to serve the JS (might be an option, but is there an interpreter for Elixir/Erlang?)
Any ideas how to make SSR work with Phoenix?
The con with full SSR (server-side rendering) is that it’s a huge CPU sink. You’ll need way more servers to do this. You also now have to deal with not allowing users to click buttons until the JS is parsed and ready.
I really don’t think fully-SSR React pages will be that important for most apps. For a lot of apps the ‘app’ part is behind a login that the Google crawler can’t get to. Public pages could just be `eex` pages and the ‘app’ page could either be a CDN URL redirect or an empty page.
Taking it up a notch, you can have the empty `eex` page bootstrap data into a global variable so that the only wait time is downloading and parsing the JS. This is what a lot of sites do.
Finally, you could do something like what FB does, which is in between… send bootstrapped data but also have a very bare-bones page. You’ll notice feed posts have a placeholder icon until React is loaded.
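Either way, the client side of the bootstrapping bit is tiny. Something like this (the global name is made up; the `eex` layout would render a `<script>window.__BOOTSTRAP__ = {...}</script>` tag):

```js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // your root component

// Data the eex page injected into a global, so the first render
// doesn't have to wait on an API round trip.
const bootstrapped = window.__BOOTSTRAP__ || {};

ReactDOM.render(
  <App initialData={bootstrapped} />,
  document.getElementById('root')
);
```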
Lastly, you could have Node serve the pages and then connect to the Phoenix server on startup. You would just change the `connect('/')` to `connect('api.you.com')`.
Also worth mentioning… if you’re only gaining 500ms by doing SSR it might be offset by Elixir’s low latency… if you’re serving the page in 15ms then there’s extra time for the assets to download.
Channels are not a replacement for Meteor’s reactivity by any stretch of the imagination. If you want the same power as livequery in Phoenix, you’d have to write a lot of code yourself, and it might not perform that much better than Meteor. It all comes down to the requirements of your application, but if you heavily rely on Meteor’s livequery system, then the Phoenix framework isn’t an alternative option.
That being said, Meteor’s performance with larger numbers of concurrent users is really an issue that needs more attention.
Yep, agreed! It’s much more low-level. If one could use RethinkDB, then a channel could subscribe to a changefeed and the channel would broadcast changes to any subscribed clients (outdated example).
I’ve found replicating reactivity in Phoenix to be quite simple actually. The way channels are set up is perfect for interacting with something like Redux to manage your frontend. When someone goes to a particular page the React component grabs the current state of the application from a channel or REST endpoint. If something changes on the data layer Phoenix broadcasts the change through a channel to Redux which then updates your React component. With Redux you get optimistic UI for free. In addition, this works with any database, not just Mongo.
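The glue code ends up being just a few lines; roughly like this (a sketch only: the topic, event names, and action shapes are whatever your app uses):

```js
import {Socket} from 'phoenix';
import {createStore} from 'redux';
import todosReducer from './reducers/todos'; // your app's reducer

const store = createStore(todosReducer);

const socket = new Socket('/socket');
socket.connect();

// Grab the current state when joining, then apply server broadcasts as actions.
const channel = socket.channel('todos:all', {});
channel.join()
  .receive('ok', (resp) => store.dispatch({type: 'TODOS_LOADED', todos: resp.todos}));
channel.on('todo_updated', (todo) => store.dispatch({type: 'TODO_UPDATED', todo}));
```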
That’d be pretty cool!