Efficient way to build a social feed based on multiple metrics?

Hi there, we’re running a Metaverse app on Meteor that already includes quite a number of social media features, including feeds. Since the app is location-based, we have different feeds: one showing nearby content, the other showing content from your friends.

On the backend, we have separate Mongo queries for the different feed types, since the queries differ a lot (due to the geospatial aspect of the “nearby” feed).

We would now like to consolidate this into a unified feed, as people know it from traditional social media. This feed should prioritize content from your friends and then insert local content as a second priority. We might need other prioritization schemes later as well.

My question is: has anybody out there already implemented such a social feed based on multiple metrics? The naive way of putting “friends” content first and then adding the nearby content won’t work, because the feed also has to support endless scrolling. So I would at least have to track which parts of the separate content streams were inserted where. Or is there a way to detect on the client side which content has just recently been added to which publication?

I would also appreciate any hints to tutorials / blog posts about feed implementations in general, if someone knows some. Thx.

Ok, let’s talk about this :).

This is a timeline concept, meaning, new cards are at the top of your feed. There are other concepts where the newest cards (generated after you started scrolling) get inserted in the next page of the scroll.

Your feed would have a certain number of visible cards and a number of cards to pull (you show fewer than you pull).
For example, on LinkedIn you have about 2 visible cards, but you could have 1 or 20, depending of course on what you want to display. You may also have different layouts for mobile and desktop. In general you pull less for mobile.

Question: if, let’s say, you pull 10 to display 6, out of those 10 how many would be friends and how many would be local? Would it be ok to pull 7, followed by a second query to pull another 3, and keep two separate skip counters that are used by all queries?

You would then do something like:

  1. Pull friends cards with limit 7, skip 0. Followed by 2.
  2. Pull location cards with limit 3, skip 0.
  3. On reaching the bottom of the feed, pull more friends cards, limit 7, skip 7. Followed by 4.
  4. Pull location cards with limit 3, skip 3.
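The steps above could be sketched as client-side state with two skip counters. The fetch functions below are placeholders standing in for Meteor method calls; here they just slice arrays so the counter logic is visible:

```javascript
// Minimal sketch of the two-counter pagination described above.
// fetchFriends / fetchNearby stand in for Meteor.callAsync(...) calls.
function makeFeedPager({ fetchFriends, fetchNearby, friendsLimit = 7, nearbyLimit = 3 }) {
  let friendsSkip = 0;
  let nearbySkip = 0;
  return {
    async nextPage() {
      // Steps 1/3: pull friends cards with the current friends skip.
      const friends = await fetchFriends({ limit: friendsLimit, skip: friendsSkip });
      friendsSkip += friendsLimit;
      // Steps 2/4: pull location cards with the current nearby skip.
      const nearby = await fetchNearby({ limit: nearbyLimit, skip: nearbySkip });
      nearbySkip += nearbyLimit;
      return [...friends, ...nearby];
    },
  };
}
```

Both counters advance together on every page pull, so the same pair of queries can be repeated for endless scrolling.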

For scrolling feeds, I only see this working with methods and a local state on the client. It makes no sense to run them through publications.

For new content, you can check Twitter; I think they do it best. You can keep an observer/publication on the count of newer cards. Once you get any, notify the user and let them scroll up to the top of a feed that contains the new cards.
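The counting itself is trivial; sketched here as a plain function (in Meteor this could back a small publication or a polled method, and the field names are made up):

```javascript
// Counts cards created after the timestamp at which the feed was opened.
// The client only needs this number to show an "N new posts" banner;
// the actual cards are pulled when the user refreshes from the top.
function countNewerCards(cards, feedStartedAt) {
  return cards.filter((card) => card.createdAt > feedStartedAt).length;
}
```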

Twitter’s notification for new content: [screenshot]


Thanks a lot for your fast response, highly appreciated.

I thought about implementing this in a similar way, but there’s a couple of challenges with this approach.

If I understand you correctly, you would keep track of additions separately (for both feeds) and maintain an “addition” offset to be applied to the limits and skips.

Let’s take the friends feed first. You are currently at your step 3, i.e. limit 7, skip 7. Now 2 new items come in. I would add these to the “offset”, so my query would now be limit 9, skip 9. Plus an indicator to the user that they can scroll to the top to reveal the two new items. If I used pubs for the feed itself, this might cause a flash of content, so I guess that’s why you’re saying methods should be used.

That seems doable for the “friends” feed, because the content is always inserted in a linear fashion, i.e. new content always comes first.

However, in the nearby scenario, new content can come in at any position in the list. So, the updated limit 9 / skip 9 window might include items that were not part of the previous content. Or, if the user scrolls down, the next chunk might include them.

So I would at least have to get these added items explicitly, not just the number of them. Pretty much like the low-level server-side publication API, where you get events for added or removed items. But even if I get that information, how would I adjust the method calls accordingly?

It gets even trickier if content is being deleted. How would you know how to adjust the offset then? You would have to know if the deleted content is part of the “fetch on top” set or somewhere in the feed.

The “add” scenario might be solved by also keeping track of the time the user views the feed for the first time, so newer items are excluded based on their createdAt timestamp. Then, if the user scrolls to the top, only these items are retrieved, prioritized, and added to the top, and the “feed timestamp” is updated.
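That anchoring can be sketched as a pure function (createdAt is a plain number here for simplicity): pages computed against the same feed timestamp stay stable even when newer items arrive.

```javascript
// Pages the feed against a fixed "feed timestamp" so that items created
// after the user opened the feed cannot shift the skip/limit windows.
function pageAt(items, feedStartedAt, { skip, limit }) {
  return items
    .filter((item) => item.createdAt <= feedStartedAt)
    .sort((a, b) => b.createdAt - a.createdAt)
    .slice(skip, skip + limit)
    .map((item) => item.id);
}
```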

“Now 2 new items come in.” You never pull those unless you “refresh” the feed. Try to experiment with Twitter, or if you don’t use it, I can do a screen recording for you. “Going to the top” is actually a refresh of the feed / re-pull from the top.
You can use the time in your query. Let’s say you keep in your client state values such as limit, skip, and date (of the start of your feed). Deleted items you mark for deletion but don’t remove; you filter them out server side before sending to the client.

     FeedCollection.find(
       { createdAt: { $lt: date }, deleted: { $ne: true } },
       { sort: { createdAt: -1 }, skip, limit }
     )

Update the date every time you refresh/pull from the top.

Instead of

await FeedCollection.removeAsync(_id)

you do

await FeedCollection.updateAsync(_id, {
    $set: {
        deleted: true
    }
})
And you delete these posts with a routine at a convenient time such as 3 AM local time if your app spreads over only a couple of time zones.

You would be using virtualization, which means you may have hundreds of “posts” in your feed but only show a couple. Scrolling up is done with data from your local browser. If you use React, you will probably want to use this: https://virtuoso.dev/. If you are going this way, I can help you with code.
In principle, scrolling up is automated: if you have data in your store, it will be displayed. Scrolling down beyond the bottom triggers the pull of new data. You can use an offset in pixels to trigger the pull early, so that by the time you reach the bottom, the data already exists.
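That pixel-offset trigger is simple to express; the threshold value below is arbitrary:

```javascript
// Decide whether to prefetch the next page: start pulling before the
// user actually reaches the bottom of the scrolled content.
function shouldPrefetch({ scrollTop, viewportHeight, contentHeight, thresholdPx = 600 }) {
  const remainingPx = contentHeight - (scrollTop + viewportHeight);
  return remainingPx <= thresholdPx;
}
```

Libraries like Virtuoso expose this as a prop instead, so you rarely hand-roll it, but the idea is the same.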

For sure I would go with a separate collection, or a cache, or a combination of both, for the feed items, as you can pre-generate those. Then you can have workers update the feed collection based on the user profile (like location, interests, etc.).

We tried the live generation of feeds multiple times but the complexity makes it hard when you expand the profile.

An example of why this is easier:

You set the current location in the user profile. Then an automated worker can run which has the input (location) and the user id. It starts adding to the feed.

Your process for showing the feed is totally unaware. Which keeps it simple.

Now consider 10 workers and you will see why this keeps things simpler.
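A toy illustration of that separation, with invented names: each worker turns one piece of profile input into feed items and appends them, while the feed-reading code stays unaware of who wrote what.

```javascript
// Each worker only knows its input and the userId, and appends to the
// user's feed. feedStore is a stand-in for the feed collection (a Map
// keyed by userId); buildItems is the worker-specific generation logic.
function runWorker(feedStore, userId, input, buildItems) {
  const newItems = buildItems(input);
  const existing = feedStore.get(userId) || [];
  feedStore.set(userId, existing.concat(newItems));
}
```

Adding an "interests" worker later means adding one more buildItems function, with no change to the code that displays the feed.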

When you run into too much content for a user (so you want to prioritize), you can add a score for content relevancy, which allows you to sort on that instead of just time. The relevancy can also be calculated by a worker, with for example: likes of related items * not seen before * location * date. You can make that algorithm as complex as you want.
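One possible shape for such a score, multiplying the factors listed above; the weights and field names are invented for illustration:

```javascript
// Relevancy = likes of related items * not seen before * location * date.
// Every factor is mapped into a positive multiplier; tune to taste.
function relevancyScore(item, { now = Date.now() } = {}) {
  const likeBoost = 1 + Math.log1p(item.relatedLikes || 0); // dampen big like counts
  const unseenBoost = item.seenBefore ? 0.25 : 1; // penalize already-seen content
  const locationBoost = item.distanceKm != null ? 1 / (1 + item.distanceKm) : 0.5;
  const ageHours = (now - item.createdAt) / 3600000;
  const freshnessBoost = 1 / (1 + ageHours / 24); // decays as the item ages
  return likeBoost * unseenBoost * locationBoost * freshnessBoost;
}
```

Sorting the feed query on a precomputed score field then replaces the plain sort on createdAt.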

As stated above, if you want to keep the feed items collection small, you can auto-delete older items. But it may pay to keep them (maybe later in a different store), because data like the amount of time spent watching an item, likes, etc. can be relevant in the future.


Interesting approach, thanks for sharing your thoughts. Our location is updated in real-time, though, as the user moves around. So it might be a bit tricky to use a background worker here.

That actually doesn’t matter; you can just update the feed based on the new information. It is the same as someone engaging with content (watching a cat video for 10 minutes): from that moment you can give content about the same subject more priority, i.e. increase the ranking of cat videos.