We have a specific use case where live data has to be kept up to date for multiple users (hundreds) with very low latency (max 1 s delay). The tracker seems like a perfect fit for this, but I have some technical questions about it: The items we'll keep updated are quite big. Does the tracker only send the changed data when updating, or does it always resend the full items from the subscription?
If, using a library like react-query, I triggered a refetch on a regular basis (every 1 s) for each user, would the load on the server be much bigger compared to using the tracker? (Hundreds of users, each refetching every second, means hundreds of fetch requests per second hitting the server.)
Is there any limitation on the number of simultaneous connections to the same publication in terms of tracker performance?
[…] The items we'll keep updated are quite big. Does the tracker only send the changed data when updating, or does it always resend the full items from the subscription?
Yes, only the changed data is sent. The server keeps a per-client copy of the published documents, the so-called mergebox, and uses it to send minimal field-level diffs when something changes. You can read more in-depth about how it works in the official docs, which also list certain limitations.
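A minimal sketch of what that looks like in practice (the `Items` collection, the `items` publication name and the published fields are just placeholders for this example). On the client, `observeChanges` surfaces exactly the granularity that arrived over the wire: full documents on first delivery, only the changed fields afterwards.

```js
// Shared module; runs on both server and client in a Meteor app.
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const Items = new Mongo.Collection('items');

if (Meteor.isServer) {
  Meteor.publish('items', function () {
    // The server-side mergebox remembers what each client already has,
    // so later updates go out over DDP as per-field diffs, not full documents.
    return Items.find({}, { fields: { name: 1, status: 1, payload: 1 } });
  });
}

if (Meteor.isClient) {
  Meteor.subscribe('items');

  // changed() receives only the fields that actually changed --
  // the same granularity the server sent over the wire.
  Items.find().observeChanges({
    added(id, fields) { console.log('initial full document', id, fields); },
    changed(id, fields) { console.log('only the changed fields', id, fields); },
  });
}
```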
[…] If, using a library like react-query, I triggered a refetch on a regular basis (every 1 s) for each user, would the load on the server be much bigger compared to using the tracker? […]
That really depends on your use case. As a rule of thumb, publications (reactive data) cost RAM (the server stores a mergebox per client), while CPU usage scales with the number of changes happening in the database. The best approach would be to try both solutions, measure them, and then decide; take a look at our article regarding performance monitoring.
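For reference, here is roughly how the two variants you would be measuring look side by side. The `/api/items` endpoint, the query key and the `items` publication are assumptions for the sake of the example, not something your app already has.

```js
import { Meteor } from 'meteor/meteor';
import { useQuery } from '@tanstack/react-query';
import { useTracker } from 'meteor/react-meteor-data';
import { Items } from '/imports/collections/items';

// Polling variant: every mounted client re-fetches the full payload every
// second, whether or not anything changed on the server.
function usePolledItems() {
  return useQuery({
    queryKey: ['items'],
    queryFn: () => fetch('/api/items').then((res) => res.json()),
    refetchInterval: 1000, // hundreds of users => hundreds of requests/second
  });
}

// Subscription variant: the server pays RAM for each client's mergebox,
// but CPU is spent only when the underlying data actually changes.
function useLiveItems() {
  return useTracker(() => {
    const handle = Meteor.subscribe('items');
    return {
      ready: handle.ready(),
      items: Items.find().fetch(),
    };
  }, []);
}
```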
Is there any limitation on the number of simultaneous connections to the same publication in terms of tracker performance?
No, there are no hard limits. It's actually better to reuse existing publications (subscribe to the same query), as it's easier on resources: the database observer can be shared between subscribers. See the sketch below.
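A short illustration of that point, with hypothetical collection and publication names: when every subscriber ends up with an identical cursor (same selector, same options), Meteor can drive all of them from a single observer, whereas per-user parameters produce a distinct cursor, and therefore a separate observer, per argument value.

```js
// server/publications.js
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Items } from '/imports/collections/items';

// Every client subscribing to 'liveItems' gets an identical cursor,
// so a single database observer can serve all of them.
Meteor.publish('liveItems', function () {
  return Items.find({ status: 'active' }, { fields: { name: 1, status: 1 } });
});

// By contrast, a query parameterized per user yields one observer
// per distinct argument value, which costs more to maintain.
Meteor.publish('myItems', function (tag) {
  check(tag, String);
  return Items.find({ tag }, { fields: { name: 1, status: 1 } });
});
```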
Subscriptions push data to the clients only when it actually changes. I believe triggering a refetch on a timer can only worsen the performance of your system: if the data doesn't change as often as you refetch, you're just wasting CPU, RAM and bandwidth.