ADDENDUM: Need "add star", hence can't use Socialize packages!

Please reference my first question here: USE CASE: Email & similar messaging on RabbitMQ/MQTT

My second question is that I would love to use the Socialize packages, but I need "add star" & "flag message" features in my app.

I am making a political messaging app, as described in the link.

Building a custom messaging system seems like too much just to support two teeny features.

What is the best way out? Should I use private collections? Or something else?

Rocket.Chat seems more for chats. It doesn’t support more than 50,000 users, & 500 concurrent ones.

Is there an option that supports about 5,000,000 total users, & the other features?

Many Thanks in advance.

It sounds like you already like the socialize packages and consider them appropriate for your use case.

So, why don’t you consider adding those “two teeny features” to the socialize packages and sending a PR to the original maintainer as a contribution?

This way, you would both have solved your problem and helped the community.

Take a look at the upcoming Socialize v1 (the packages are in line with the Meteor Guide recommendations, so they should be easier to scale). It should have a bit more stuff up your alley (redis-oplog is the main dev focus right now).

As for “add star”, you can use the likeable package and, on the front end, name it stars instead of likes. Flagging messages would be a really nice feature. Reporting is planned, but right now there are other priorities, so as @serkandurusoy suggested, PRs are encouraged.
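For the UI side, a minimal sketch of that relabelling could look like this (assuming the likeable package exposes like()/unlike()/isLikedBy() on the message model, which is my reading of the current conventions; check the package docs for the real method names):

```js
// Blaze sketch only: the like/unlike/isLikedBy methods on the message model
// are assumed from the current socialize:likeable conventions, not guaranteed.
import { Template } from 'meteor/templating';
import { Meteor } from 'meteor/meteor';

Template.messageItem.helpers({
  // The button says "Star"/"Unstar" in the UI; under the hood it is a like.
  starred() {
    return this.isLikedBy && this.isLikedBy(Meteor.user());
  },
});

Template.messageItem.events({
  'click .js-toggle-star'() {
    if (this.isLikedBy(Meteor.user())) {
      this.unlike();
    } else {
      this.like();
    }
  },
});
```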

With such a high number of users you will have to deal with scaling; no package (that hasn’t already dealt with scale like that) is prepared for it initially. There are many specifics of scaling that you will have to figure out pretty much on your own based on your app (or get someone who is already experienced in that area). I hope to go through it as well with the Socialize packages, but it will be a while. If you can identify bottlenecks in the packages, feel free to open issues or write on Slack; @copleykj, the other contributors, and I will be more than happy to tackle them.

The Socialize packages are made to be pretty extensible. I would definitely recommend waiting for the v1 release as @storyteller suggests. Once it is out, you should be able to just extend the current message model with the likeable model and then set up the messages collection to return your new message models instead of the old ones. This should only need one very simple modification to the messaging package.
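In other words, something roughly along these lines (the export names and the attachCollection call are my guesses at the v1 shape, not the final API):

```js
// Sketch only: socialize v1 import paths and method names here are assumptions.
import { Message, MessagesCollection } from 'meteor/socialize:messaging'; // assumed exports
import { LikeableModel } from 'meteor/socialize:likeable';                // assumed export

// Mix the likeable behaviour into the existing message model; the front end
// can then present likes as "stars".
class StarrableMessage extends LikeableModel(Message) {}

// Hypothetical hook: have the messages collection return the extended model
// from its finds instead of the plain Message model.
StarrableMessage.attachCollection(MessagesCollection);
```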

@storyteller, @serkandurusoy, @copleykj, Everyone,

I think we should tackle scalability first. It’s the most important thing for socialize/social kinds of packages. The mere mention of a social network is synonymous with scale.

It’s just my view. I do understand that you have been into Meteor for a long time, but this is a point worth considering. I know it’s not easy - but once we master the complexity of the problem, things will get better from there. We should have a heavy-duty discussion of this on the forum.

We have a great, great forum. Let’s tackle scalability once and for all. Let’s have a generic discussion on scalability for everything (if still required after Galaxy & mLab), and then see where we have to go from there.

Someone from MDG should open a thread along with some package owners.

I am all for using the best option; I will happily use whichever one is most appropriate.

Why should scaling be a problem if Galaxy & mLab exist? Please enlighten us more.

What are your current works on redis-oplog? What is in the offing for v1? Is there a way to follow this online? I am sorry, I am a little new to GitHub.

Scale is synonymous with 2018. It’s the first thing there is in 2018. It will lead us further to AI & ML. Let’s hit the horizon. It’s not easy - but most worth trying.

I have been in the industry for 18 years, in software engineering for 12. Hope this helps.

I’m not sure I understand what you are getting at, but my choice of words was probably off (scale vs. performance). Horizontal scalability is fine with Galaxy and Atlas (or mLab), probably the easiest it has ever been. The only issue with packages could be performance at scale, if you want to have a sensible number of users per container.

The numbers you got from Rocket.Chat are for their cloud service. If you deploy on your own, it really depends on what kind and how many containers you put to the task, together with caching services, etc… If they have 500 concurrent active users on one container, that is pretty good I think (you would probably run 3 for safety). Could that be improved? I would like to think so, so that the costs aren’t astronomical.

Performance of the Socialize packages right now should be pretty decent, mainly because you have to write the publications/methods yourself and are able to offload a lot to the client, if you so choose. This is where the biggest bottlenecks are, and since you have complete control over them, you can debug and optimize them. Then we have things like Apollo and redis-oplog in general, which can greatly enhance performance as well.
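To illustrate, since the publication is yours to write, you can keep it lean and push pagination to the client; everything named below (collection, publication name, fields) is made up for the example:

```js
// Illustration only: collection and field names are invented for this sketch.
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check, Match } from 'meteor/check';

// Stand-in for whatever messages collection your app actually uses.
const MessagesCollection = new Mongo.Collection('messages');

Meteor.publish('messages.forConversation', function (conversationId, limit = 50) {
  check(conversationId, String);
  check(limit, Match.Integer);

  if (!this.userId) {
    return this.ready();
  }

  // Publish only the fields the message list renders, newest first, with a hard
  // cap so one busy conversation cannot flood a client (or the mergebox).
  return MessagesCollection.find(
    { conversationId },
    {
      fields: { body: 1, senderId: 1, createdAt: 1 },
      sort: { createdAt: -1 },
      limit: Math.min(limit, 100),
    }
  );
});
```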
What v1 of the socialize packages brings is better extensibility, compliance with the Meteor Guide when it comes to creating collections, and upgraded dependencies (mainly simple-schema). @copleykj can add the details that I missed (like support for redis-oplog, which I don’t need at the moment).

I hope that cleared up some misunderstandings.

Many, many, many thanks.

Can’t thank you guys enough.

Sorry for the mix-up.

One thing to add may be that the socialize packages (afaik) work through pub/sub on top of collections, which means every message from a user first hits the database before it can be delivered to the corresponding party.

This design would become the main bottleneck when scaling into millions of users.

As every app or use case is unique, a very highly scalable/concurrent (chat) messaging system requires more moving parts and a specific infrastructure. There are even platform-as-a-service providers who handle such intensive backend requirements.

I’ve recently spotted this article (mind you, this is a PaaS provider specializing in high-velocity streaming data) which might give you more insight.

I should also add that it would be very, very, very unfair to expect that kind of specialized scalability from a “package”, and even at that, I think @copleykj and team are doing a fine job pursuing the best possible developer experience out of the box, with many features (including scalability and performance) that can take you from 0 to 100 km/h with minimum effort. Now, if what you’re looking for is a car to race in Formula 1, you should understand that a Formula 1 car is not something you can get for free or even purchase; you have to build one yourself or get the resources to build it.

Then again, what you might actually need is a NASCAR car, which the socialize packages might help you build, and that, again, is an amazing thing to have within reach in an open-source ecosystem!

There is a somewhat old article about scaling Meteor which I just dug up:

Well, given your scaling requirements, I highly encourage you to look into WebRTC.

I guess there are a couple of things I’d like to touch on here. If you need a history of messages, then you are always going to have the overhead of the DB. And truly, MongoDB is blazing fast; it’s probably not going to be your bottleneck, at least not for quite a while. Currently in Meteor your bottleneck is going to be livedata, end of story. Oplog tailing and mergebox will kill your performance for apps that constantly add/remove/change data. This is exactly what redis-oplog is solving. Right now it does a fantastic job of this, and it’s only going to get better in the future with developments to redis-oplog such as collection-cacher.

Once redis-oplog is integrated into the Socialize packages it will be completely optional, and yet also completely plug and play. You just run meteor add cultofcoders:redis-oplog disable-oplog, add your redis settings to settings.json, and then start Meteor with the new settings and you’re ready to go. All the fine-tuning is done and the publications are written; all that is really left up to you is to add the interface in your chosen view layer. I think this is fairly exciting, and hopefully others do as well.
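For reference, the redis connection details live under a redisOplog key in settings.json; the host and port below are just the local defaults, so adjust them for your deployment:

```json
{
  "redisOplog": {
    "redis": {
      "host": "127.0.0.1",
      "port": 6379
    }
  }
}
```

Then you start Meteor with those settings, e.g. meteor run --settings settings.json.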

Hey @copleykj, I’m afraid you’ve misread my reply where I said:

every message from a user first hits the database before it can be delivered to the corresponding party

(emphasis added now)

By the design employed in your (amazing) packages, regardless of redis-oplog, persistence becomes the core of delivery. One might instead choose to implement persistence as a feature on top of a different core that makes use of p2p, a cache, a key-value store, a different transport, or at least a masterless database (unlike MongoDB), where MongoDB can still be employed as cold storage for message retention.
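To make that concrete, here is a very rough Node-side sketch of "delivery as the core, persistence as a feature": the message goes out over a Redis pub/sub channel immediately, and the MongoDB write happens afterwards as optional cold storage (this uses the node redis v4 and mongodb drivers; the channel and collection names are made up):

```js
// Rough sketch, not production code: delivery over Redis pub/sub first,
// MongoDB only as optional cold storage for retention.
import { createClient } from 'redis';
import { MongoClient } from 'mongodb';

const publisher = createClient();                          // node redis v4 client
const mongo = new MongoClient('mongodb://localhost:27017');

await publisher.connect();
await mongo.connect();
const coldStorage = mongo.db('chat').collection('messages');

export async function sendMessage(conversationId, message) {
  // 1. Deliver right away over the transport (here, a Redis channel per conversation).
  await publisher.publish(`conversation:${conversationId}`, JSON.stringify(message));

  // 2. Persist afterwards as a retention feature; delivery never waits on it,
  //    and this step could be skipped or expired for privacy/compliance reasons.
  coldStorage
    .insertOne({ conversationId, ...message, sentAt: new Date() })
    .catch((err) => console.error('cold-storage write failed:', err));
}
```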

Furthermore, the reason might not be scalability alone, but things like compliance, security, privacy, etc. as well. In fact, a central message history might be especially undesired, or desired only temporarily.

Again, every app’s use case may be unique, especially when scale and a domain like politics are of concern. All I’m suggesting is that expecting a set of generic packages - however extensible and performant they may be - to address such requirements may not be the best engineering approach.

Finally, as some of the oldest and most passionate members of the Meteor community, I believe we have an added responsibility to be equally open and fair about where Meteor (both core and ecosystem) excels and where it may not be the best fit, or to what extent/subset it can address certain problems.

@serkandurusoy @copleykj @storyteller @dr.dimitru, Everyone,

I think there was a mistake in the answer to my first question, linked above. There was talk of fewer than 10k users, & hence a suggestion to use collections. The total users might run into millions “some day,” & hence the suggestions for p2p, a key-value store, WebRTC, etc.

I am thankful for all the answers.

I want to specifically ask: how do WebRTC, RabbitMQ (& the like), Redis, & others compare? I know WebRTC requires signalling.

I think if we only have textual communication, maybe WebRTC is not required?

Redis is good for light/mid-duty messaging, RabbitMQ for mid/heavy-duty?

& we certainly have nothing lacking with respect to the great spectrum of chat services cited above (great article). So all options are open.

How do WebRTC & RabbitMQ compare?

Many, many, many thanks in advance. Just can’t thank you enough!!

I personally cannot speak to how well any of these will work at the scale of millions of users, because I’ve never built anything that had millions of users, and I think you’ll find that to be pretty normal. Get it off the ground, get some users, and pray to the gods you are successful enough to run into scaling issues… Once you’ve got some decent scaling issues, you’re very likely to have decent revenue to pay people to deal with them, and that’s a very good thing, because you are going to hit scaling issues at many, many points in your journey to 1M users.
