@skini26 Well, I’ve always felt that the really easy part of Meteor also comes in part from having a single language everywhere (it’s not necessarily true, just my feeling). But I definitely agree on the “it just works” feeling with Meteor.
And I would never have thought of switching over to Node.js without Meteor on top of it (I only started Meteor a few months ago). So, while having JavaScript on both sides was something I really liked, it’s clear that the way Meteor implemented this idea on top of Node.js was the trigger for me. And I would just have a hard time going back to multiple languages (but I’d do it if I really needed it, which isn’t the case anyway, as Meteor just keeps growing and getting better and better).
Just a short reply to the initial post: Meteor is a great choice for reactivity, but not for real-time! In fact, Meteor is a poor fit for zero-latency, fault-tolerant, streaming-like applications (e.g. multiplayer games).
So we’re talking about two very different business cases, not alternatives.
I 100% agree with you, Meteor has made Node.js more usable for me; if it weren’t for Meteor, I wouldn’t have switched. It solves a lot of the things I didn’t like about the backend by adding a synchronous style, methods that act like simple routes, and publications that make publishing custom data very easy (I still haven’t found how to achieve the same in other frameworks: publishing user-based data rather than generic data for everyone).
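For reference, this is the kind of user-based publication I mean — a minimal sketch, with the collection and publication names being just examples:

```js
// server/publications.js — only publish documents owned by the logged-in user
const Tasks = new Mongo.Collection('tasks');

Meteor.publish('myTasks', function () {
  if (!this.userId) {
    return this.ready(); // not logged in: publish nothing
  }
  return Tasks.find({ ownerId: this.userId });
});
```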
@dinos TypeScript is a really good alternative, I really like it; unfortunately, I don’t see a lot of people using it and most of the tutorials don’t use it. So I can’t figure out the good ways to structure my Node.js app, if you know what I mean.
Dart appears to be a far superior language to ES2016 or TypeScript. Unfortunately, due to a deficit of trust in Google, it appears never to have caught on.
In reality, being polyglot is superior to isomorphism on every level. You never see a carpenter walk onto a jobsite with only a saw; it’s most efficient to use the proper tool for each job. That’s why the “languages known” section on any competent programmer’s resume generally extends over multiple lines.
All languages have their strengths and weaknesses. With JavaScript, it’s as if people think that using the worst tool for almost every job, on every front, is somehow a feature to be praised.
That’s actually quite false. There is no such thing as real-time; all reactivity is an illusion. What Meteor calls “optimistic updates”, over in the gamedev domain we call extrapolation and interpolation, a.k.a. client-side prediction.
What’s Phoenix’s story for moving data between client and server? I know you can easily send messages back and forth, but does it have something similar to Meteor where you write to a local cache on the client and Meteor takes care of sending it to the server and then pushing the relevant data to other clients?
Well, it’s just a simple, well-optimized backend that can handle a lot of open connections (WebSockets). Then you write the client side however you like and handle the channel subscriptions and events yourself.
It’s a much simpler approach, which means that you’ll need to write that part yourself. If you’re using Redux with some view layer (like React or Backbone), then Redux would listen for incoming events and merge them into the state. You can implement optimistic updates with that as well.
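To make that concrete, here’s a rough sketch of a Phoenix channel feeding a Redux store (the channel and event names like “room:lobby” and “new_msg” are just examples, not from any real app):

```js
// client/socket.js — Phoenix channel events dispatched into a Redux store
import { Socket } from 'phoenix';
import { createStore } from 'redux';

// reducer: merge each incoming message into the state
function messages(state = [], action) {
  switch (action.type) {
    case 'MESSAGE_RECEIVED':
      return [...state, action.payload];
    default:
      return state;
  }
}

const store = createStore(messages);

const socket = new Socket('/socket');
socket.connect();

const channel = socket.channel('room:lobby', {});
channel.join().receive('ok', () => console.log('joined'));

// every broadcast from the server becomes a Redux action;
// the view layer (React, Backbone…) just subscribes to the store
channel.on('new_msg', (payload) => {
  store.dispatch({ type: 'MESSAGE_RECEIVED', payload });
});
```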
You can also still use Meteor on the frontend, like in the example repo I linked to above (see chatty.js, I think): you can just upsert the documents into a client-side collection and Blaze consumes them like normal.
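Something along these lines — a rough sketch assuming a local, client-only collection and the `channel` object from the previous example:

```js
// client/chat.js — feed channel events into a local collection; Blaze stays reactive
const Messages = new Mongo.Collection(null); // unmanaged, lives only in the browser

channel.on('new_msg', (msg) => {
  // upsert by _id so a re-sent event doesn't create a duplicate document
  Messages.upsert(msg._id, { $set: msg });
});

Template.chat.helpers({
  messages() {
    return Messages.find({}, { sort: { createdAt: -1 } }); // Blaze re-renders on changes
  },
});
```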
Due out December 20th according to GitHub. It appears the biggest part of the release is support for Erlang 18 features, while dropping support for Erlang 17.
In Meteor, half of the must-have stuff is done by the community (router, schema, validation, etc.), while the core dev team adds new things very slowly (I can’t actually remember anything new in at least half a year, Babel…?) and barely communicates with the community.
Since you have used Phoenix, how are these things handled there?
If you scroll up a bit, @SkinnyGeek1010 discusses how approachable the Elixir and Phoenix core teams are. They evidently hang out in IRC with the peons, answering questions.
Judging from the commits, they certainly aren’t sleeping on the job either.
Deploying to your own infrastructure requires you to use Exrm, which is harder than deploying with mup no matter how you slice it. It gives you the ability to do hot code patching (per module) while running, for zero downtime, but for my use cases it’s way more work than just accepting a second or less of downtime. http://www.phoenixframework.org/docs/advanced-deployment
The core is quite solid. The router is really nice too.
It uses Ecto for the model and changesets, which give you a schema and a way to cast and validate data. If you opt into all of Ecto you get an ‘ORM’-like thing for queries. I prefer the raw RethinkDB driver, as you can use pipelines to transform the data, and I’m just using a thin adapter for the model/changesets.
I’m getting close to releasing a REST library for Meteor that adds these features to Meteor, actually: a very similar server-side router, the concept of transforming the connection from entry to exit (with controllers), and a separate package to provide the changeset/model in a functional paradigm (not objects or classes).
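To illustrate the idea (purely hypothetical code, not the actual package API): a functional changeset is just plain data flowing through cast/validate functions, roughly like this:

```js
// hypothetical sketch of a functional changeset — plain data in, plain data out
function cast(doc, params, allowedFields) {
  const changes = {};
  allowedFields.forEach((field) => {
    if (params[field] !== undefined) changes[field] = params[field];
  });
  return { data: doc, changes, errors: [], valid: true };
}

function validateRequired(changeset, fields) {
  const missing = fields
    .filter((f) => changeset.changes[f] == null && changeset.data[f] == null)
    .map((f) => ({ field: f, message: 'is required' }));
  const errors = changeset.errors.concat(missing);
  return { ...changeset, errors, valid: errors.length === 0 };
}

// usage: pipe params through cast and validations, persist only if valid
let changeset = cast({}, { title: 'Hello' }, ['title', 'body']);
changeset = validateRequired(changeset, ['title', 'body']);
// => changeset.valid === false, errors list the missing "body" field
```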
Learning about Phoenix is a net positive for Meteor, in my opinion. Using strictly Phoenix will work for some, but borrowing ideas can help strengthen the Meteor community.
@SkinnyGeek1010 Is it the same principle as the socket.io Redis adapter, used to sync the pub/sub between multiple servers?
If that’s the case, then Redis should be faster than Phoenix, right?
And in the other discussion, we just said that the Redis solution is not efficient at all (this message and the next one).
Thank you guys for the answers. I think the right thing to do is to spend some time on a setup + example app and see what it looks and feels like.
With the latest events, considering leaving Meteor and investing time into something else doesn’t sound so crazy today.
In that video it was socket.io acting as an event bus, not Redis, that was the point of failure. Redis itself and Phoenix both use an observer pattern, which scales considerably higher: only pertinent information is transferred to a particular channel, not all of it.
That being said, you have to consider that Phoenix is highly concurrent while Redis just bypasses concurrency issues by being single-threaded. Redis is hand-tuned C code and is hella fast. Still, Phoenix is likely to outpace Redis.
As far as I understand it, Redis is only used to keep multiple servers in sync. It’s also only needed if you’re running on Heroku, because of some kind of restriction on the Erlang VM’s cluster support.
However, from what I understand, you don’t need it if you just have one node and scale vertically (since you can use all cores, this works out nicely).
Here’s the chart where they show Redis being used, along with other options like RabbitMQ or XMPP. I think Redis is just the most user-friendly. On Heroku you just click to add it and include the adapter as a package dependency. (Again, from what I understand.)
I’ve only deployed to a DO server with a single instance, so I haven’t dug into that yet.