Is it possible to serve Blaze content for a specific location via reverse proxy?

Hi, my company’s Meteor project currently uses Blaze templates + Iron Router on the front end. Is it possible to generate Blaze content (with reactive data) and return it in response to an nginx reverse-proxy request for a specific location?

I’ve also asked a similar question at Stack Overflow, but I’m posting here as well since it’s a very Meteor-specific problem.

I’ve read other topics on this subject that suggest that Blaze cannot be used with the server-render package, but I’m wondering if I’m missing any Blaze feature that can accomplish what we need (without necessarily using server side rendering).

Here’s our general setup:

  • One Meteor portal app that manages authentication and upstream server requests
  • One or more Meteor apps that hold game servers with separate game sessions in their local databases
  • Users log in at the portal. When they attempt to access different game sessions, the portal app builds the appropriate request with HTTP headers for that session, then makes a location-specific request to nginx, which forwards the request to a specific location on the correct upstream server based on the key info in the headers.

So far, I’ve managed to get nginx to capture the correct requests and forward them via reverse proxy to the specified location on the correct upstream servers. I can even get the response back. However, the response does not contain the content that I want. I don’t know what root folder to specify, since I’m not aware of Blaze actually generating any static HTML content at a specific location that could be updated appropriately later. I’ve also tried using Iron Router to respond to HTTP requests, but I was not able to make the upstream server recognize the GET/POST request and invoke the defined Iron Router handler.
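
For reference, this is roughly the shape of the server-side route I tried (path simplified, handler body reduced to a stub):

Router.route('/insts/instructor/:sessionId', { where: 'server' })
  .get(function () {
    // this.params, this.request, and this.response are available in server routes
    this.response.writeHead(200, { 'Content-Type': 'text/html' });
    this.response.end('instructor session: ' + this.params.sessionId);
  });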

Here is my example nginx setup, where localhost:3000 is the portal app while localhost:5000 and localhost:6000 are example upstream game servers. The portal app will be the main entry point. The game servers will be on completely different networks and containers. Please imagine that localhost:5000 and localhost:6000 are different addresses on the internet. (I’m showing them as localhost because that’s how I’m testing at the moment; I fired up separate Meteor apps in dev mode on the ports locally.)

events {}

http {
  upstream meteor {
    server 127.0.0.1:3000 max_fails=2 fail_timeout=2s;
    check interval=1000 rise=2 fall=2 timeout=1000 type=http; # requires an upstream health-check module (e.g., nginx_upstream_check_module)
  }

  upstream game1 {
    server 127.0.0.1:5000 max_fails=2 fail_timeout=2s;
    check interval=1000 rise=2 fall=2 timeout=1000 type=http;
  }

  upstream game2 {
    server 127.0.0.1:6000 max_fails=2 fail_timeout=2s;
    check interval=1000 rise=2 fall=2 timeout=1000 type=http;
  }

  map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
  }

  # Map to different upstream backends based on header
  # Format is header_value "upstream_server_name"
  map $http_x_game_server $game_server {
    default "meteor";
    game1 "game1";
    game2 "game2";
  }

  server {
    listen 80 default_server;
    server_name localhost;

    try_files $uri/index.html $uri =404;

    location / {
      proxy_pass http://meteor;

      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_redirect off;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      proxy_max_temp_file_size 0;
    }

    location /revprox/instsession/ {
      proxy_pass $scheme://$game_server/insts/instructor/$http_x_session_id;

      proxy_set_header Host $host;
      proxy_set_header Proxy '';
      proxy_set_header Referer $http_referer;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Original-Request $request_uri;
      proxy_set_header X-Game-Server $game_server;
      proxy_redirect off;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      proxy_max_temp_file_size 0;
    }
  }
}

Is there a way to accomplish what we need with Blaze + Iron Router and some nginx config modification? Do we need to switch to a server-side rendering solution with long polling? (Are there any alternate solutions?)

Thanks in advance for any advice.

This works for me. I have several apps running behind nginx. You could also look into Passenger.

server {
    server_name meteor.example.com;
    # set nginx path to the public directory to possibly serve static files via nginx
    root <path to meteor app>/public;

    access_log  logs/meteor.access.log  main;
    error_log logs/meteor.error.log error;
    #
    # meteor is running on port 5000
    # 
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_read_timeout 36000s;
        proxy_send_timeout 36000s;
        proxy_set_header Upgrade $http_upgrade; # allow websockets
        proxy_set_header Connection $connection_upgrade; # assumes the $http_upgrade -> $connection_upgrade map shown earlier in the thread
        proxy_set_header X-Forwarded-For $remote_addr; # preserve client IP
        proxy_set_header Host $host;
        #
        # this setting allows the browser to cache the application in a way compatible with Meteor:
        # on every application update the names of the CSS and JS files change, so they could be cached indefinitely (here: 30 days)
        # the root path (/) MUST NOT be cached
        if ($uri != '/') {
            expires 30d;
        }
    }
}

Are you expecting the proxied server to return fully rendered HTML? Do you have SSR set up?

Thanks jamgold, that setup works great for the initial reverse proxy (to the portal app) using the root location. However, when I try to forward a request from the portal app to the upstream game servers with a specific location directive (/revprox/instsession in my example), I have trouble getting the correctly rendered content back from the second layer.

Hi znewsham, yes, ideally we want the upstream game server to render the HTML and return the result. This is because the upstream game server has the relevant local data, and we want to isolate most of the load to the upstream game server so that we can scale horizontally.

Currently, we do not have SSR set up; is this the only possible way to send the right content back in the response from upstream? If so, based on what I’ve read, we’d need to get rid of Blaze / Iron Router and move towards a static HTML and server-rendering solution. Perhaps we might need to switch to React with SSR. (Please let me know if I’m mistaken.)

[EDIT]: I think I was misleading in my original post. The localhost entries that I’m testing with are supposed to represent different servers. The game servers will each be in completely different containers + networks, which the main nginx will route to. I’ll go fix the main post, sorry about that.

Thank you both!

It’s possible I’m still not understanding correctly what you’re hoping to accomplish. There are two potential ways I could envision this being done:

  1. Pulling a Blaze template from an external server and having the client render it - as far as I know this is not possible with Meteor directly; however, it might be possible if you use an NPM version of Blaze to return the JS for just that template and put it into a script tag.

  2. SSR with Blaze is possible, though if you want reactivity on the server there are some gotchas. In general, you add a webhook at a specific URL path, e.g., /profile/{userId}, and the response from that webhook is the rendered component (rough shape sketched after this list). However, your template MUST render synchronously; otherwise the webhook will return its HTML before the template has finished rendering.
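
To make 2) concrete, the rough shape is something like this (the path is illustrative, and renderProfilePage stands in for whatever synchronously produces your HTML):

import { WebApp } from 'meteor/webapp';

WebApp.connectHandlers.use('/profile', (req, res) => {
  const userId = req.url.split('/').pop(); // e.g. /profile/abc123 -> abc123
  const html = renderProfilePage(userId);  // hypothetical synchronous render of the component
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
});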

Which way are you hoping to do this?

Hi, I think what we’re looking for is closer to option 2.

We have a portal where users log in and can view a summary of their various game sessions, which may be housed on different game servers. When a user tries to access a game session, we want to send a request from the portal, which nginx then forwards via reverse proxy to the correct game server. The game server needs to be in control of rendering (with reactivity). The portal app does not have access to the Blaze templates that the game server apps will have.

I’m a bit confused about how to set up Blaze with SSR; I read in some posts that the server-render package is compatible with static-html packages but not with Blaze, though admittedly those posts were two years old. Is there a way to do this with Blaze now that I’m not seeing in the documentation?

Currently, I have set up what I believe to be a webhook on the game server, and I’m able to get a test response back, but I don’t know how to get the actual content from Blaze. In the docs, Blaze.render seems to be something client-side. (I could be misunderstanding; here’s the sample code, apologies for the odd syntax since it’s in CoffeeScript.)

Meteor.startup ->
  WebApp.connectHandlers.use('/insts/instructor', (req, res, next) ->
    # I have confirmed that the request headers contain the correctly forwarded values.
    # Ideally, I should be getting some rendered content here and putting it into the response.
    res.writeHead 200
    res.end "this is a test response"
  );

Do you have any specific documentation or names of functions / features that you recommend me reading up on?

Thank you!

I don’t think you can get fully what you want. It’s relatively easy to set up server-side reactivity; I use these two packages:

peerlibrary:server-autorun
peerlibrary:reactive-mongo

However, I also use modified versions of

kadira:flow-router
kadira:blaze-layout

and a custom version of Blaze (which I published to NPM), so I’m not 100% sure it would work without all of them combined.

This gets me (though I don’t actually need it) reactivity on the server, where I can render and cache Blaze templates for specific URLs; for example, /article/whatever renders the “article” template with the “whatever” data. If the “whatever” data changes, the cache updates. However, the data won’t be sent to the client until they request the page again. This works very nicely for reducing startup time and for pre-rendering, but not for reactively sending the UI to the client.
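
A very rough sketch of that caching idea (this assumes the packages above plus a Blaze build that renders on the server; the Articles collection and renderArticle helper are made up):

import { Tracker } from 'meteor/tracker'; // autorun works server-side via peerlibrary:server-autorun

const pageCache = new Map();

Tracker.autorun(() => {
  // reactive on the server thanks to peerlibrary:reactive-mongo
  const doc = Articles.findOne({ slug: 'whatever' });
  // reruns (and refreshes the cache) whenever the document changes
  pageCache.set('/article/whatever', renderArticle(doc)); // renderArticle is hypothetical
});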

In theory, a publication could do this - where the data being published is the entire HTML of the template.
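
Purely illustrative sketch of that publication idea (the 'renderedPages' client-side collection, the Articles collection, and renderArticle are all made up):

Meteor.publish('renderedArticle', function (slug) {
  // publish the rendered HTML as a document in a virtual client-side collection
  const handle = Articles.find({ slug }).observe({
    added: (doc) => this.added('renderedPages', slug, { html: renderArticle(doc) }),
    changed: (doc) => this.changed('renderedPages', slug, { html: renderArticle(doc) }),
  });
  this.ready();
  this.onStop(() => handle.stop());
});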

If you don’t mind me saying, your application structure seems a little odd - a Meteor portal that connects to a Meteor server which delivers fully rendered reactive HTML (directly?) to the client is rather convoluted.

You probably want something like Blaze.toHTMLWithData
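
Something along these lines, assuming a Blaze build that actually runs on the server (stock Blaze is client-only, as discussed above); the template and collection names are made up:

import { WebApp } from 'meteor/webapp';
import { Blaze } from 'meteor/blaze';
import { Template } from 'meteor/templating';

WebApp.connectHandlers.use('/insts/instructor', (req, res) => {
  const sessionId = req.headers['x-session-id'];
  const data = GameSessions.findOne({ sessionId }); // hypothetical collection lookup
  // one-shot, non-reactive render of the template with its data
  const html = Blaze.toHTMLWithData(Template.instructorSession, data);
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
});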

Makes sense, thanks for the help! We can work on shifting to server side rendering.

We’re switching to this new application structure for affordable horizontal scaling and to reduce latency between app and DB as well as between app and client. (We have people who want to play across the world.) Game sessions are played on the game servers, which each have their own local DB for the live session data. We need to support some pretty heavy calculations with many concurrent users (large college lecture classes), so we need to split the load between multiple game servers. However, each live session’s data needs to be confined to the same server’s local MongoDB, so users all need to be directed to the correct game server.

We prefer a reverse proxy solution over redirects + cross-server storage of the login token, because we don’t want the different subdomains to be visible to the user. And since we’re going with reverse proxying, I need to solve the rendering problem.

Thanks so much for the help!

Interesting. If your concern is latency, and specifically locality, I’d suggest a slightly different approach: if both your portal and game servers are accessible from the client, you could create a DDP connection from the portal client to the game server, then render the templates on the client. The client wouldn’t see a redirect, but they’d be connected to the game server. You could then request the game server’s templates via dynamic imports. This might be easier and cleaner than what you’re trying.
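
Roughly like this (hostname, publication, and collection names are illustrative):

import { DDP } from 'meteor/ddp-client';
import { Mongo } from 'meteor/mongo';

function connectToGame(serverUrl, sessionId) {
  // client-side DDP connection straight from the portal's client to the game server
  const conn = DDP.connect(serverUrl);
  // collections bound to that connection behave like normal reactive collections
  const GameSessions = new Mongo.Collection('gameSessions', { connection: conn });
  conn.subscribe('session', sessionId); // data flows over the game-server connection
  return { conn, GameSessions };
}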

Hmm, this sounds like an interesting potential approach. Let me check if I’m understanding correctly: Users would connect to the portal app, and if they request a game session, I’d open a DDP connection to the correct game server at that point. This would be a direct connection over which I can send requests, and the game server would run all its calculation logic and update the data locally. I could request the game server templates, which could be rendered on the client. The data from the game server would come over the DDP connection and be used for rendering.

Would I need to do something extra to get reactive updates to work (or perhaps move away from relying on reactivity)? We have identified two main bottlenecks: one is the CPU of the MongoDB holding game session data, and the other is the CPU of the client-facing server, due to massive amounts of subscriptions and observers. We need the game server to take the main load of updating / preparing the data for the user, while each local MongoDB needs to take the load of the reads / writes and other operations for the live sessions. If we have to subscribe to the game server’s db from the portal server, I think we might have some serious performance issues. You seem to have a better idea of how the subscriptions and cross-server reactivity work; could you please point me in the right direction?

Apologies if I’m fundamentally misunderstanding something.

Thank you!

You’ve basically got it - you’d send data over DDP, and you’d have to load the templates manually via HTTP fetch requests (that you’d eval, or dump into a script tag). In regards to reactivity: so long as you define the collections to use the DDP connection that points to the game server, everything else works just as normal. I’ve used this approach to shunt some of our heavy stats work onto a dedicated Meteor server.
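
The template-loading part might look roughly like this (the endpoint is hypothetical; the served bundle needs to register its Blaze templates when evaluated):

async function loadGameTemplates(serverUrl) {
  const response = await fetch(serverUrl + '/templates/instructor.js'); // hypothetical endpoint
  const code = await response.text();
  const script = document.createElement('script');
  script.text = code; // running the bundle registers its Blaze templates globally
  document.head.appendChild(script);
}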

I really would NOT do cross-server reactivity. It’s possible, but I can think of very few use cases where I would pick it over client-side reactivity. Doing it this way, your portal server and game session servers would not need to talk to each other at all.

None of this will help with your performance issues, though. If your game server is already bearing the brunt of the performance costs, it will continue to do so; it might be worth looking at the code it runs and trying to optimize it before you do any of this. The CPU load of the Mongo server is particularly suspicious: unless you’re dealing with thousands of unique queries per second, I’d suggest looking at your indexes and/or caching of data.

TBH - this is too broad a topic to get into every detail of in a forum post, at least without significantly more information: which queries are slow, what they are used for, how often they are called and by whom, how many concurrent sessions per portal / per game server, how widely distributed the game servers are, and how much crossover there is between game servers (e.g., is it strictly locality based).

Ahh, so the collections would need to use the correct DDP connection. I’m under the impression that Meteor prefers the settings for a collection to be static - am I mistaken? There is a possibility that, for example, a single user has 10 different game sessions distributed across 3 different game servers. That person might try to view game sessions on different game servers at the same time. Each game server would need a different connection, but we don’t necessarily open the connection until they attempt to access a specific session.
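
Would lazily creating the connection and its collections per game server be viable - something like this (sketch; names are illustrative)?

import { DDP } from 'meteor/ddp-client';
import { Mongo } from 'meteor/mongo';

const gameServers = new Map();

// create the DDP connection and its collections only on first access
function collectionsFor(serverUrl) {
  if (!gameServers.has(serverUrl)) {
    const conn = DDP.connect(serverUrl);
    gameServers.set(serverUrl, {
      conn,
      GameSessions: new Mongo.Collection('gameSessions', { connection: conn }),
    });
  }
  return gameServers.get(serverUrl);
}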

You’re right that the game servers will continue to bear most of the performance costs; we’re hoping that horizontal scaling will at least help to reduce the load on any one game server.

We discovered the issue with the CPU load of the MongoDB when we experienced a high amount of load on our small system (a class of 280 people all playing at once). The main pain point is the massive number of writes all happening at the same time. Our games are turn-based with an enforced time limit, so there are stages where a lot of students might cause state updates within a short time of each other (especially if the timeout event is triggered and automatic decisions are made on their behalf). We also send pings to the server from each client every 3 seconds, and it’s not uncommon for a person to have more than one tab open.

The performance spike tends to be an issue if too many requests happen at the same time. While no single query shows an alarming response time (such as 200+ ms), there are a lot of requests. We’ve reduced the problem by adding in-memory caching, limiting writes to when they’re absolutely necessary, and limiting updates to the relevant fields. When the requests are below the maximum threshold, things work fine, but the closer the load gets to that threshold, the worse (seemingly exponentially) the performance becomes. (Perhaps not surprising.)

It’s worth noting that we can’t afford super powerful vertically-scaled hardware. While the scale of our requests does not match large enterprise-level situations, we have smaller resources to work with. (Example: we default to 1 ECU for the game server, and we used the M10 cluster tier in MongoDB Atlas for our production instance.) The remote MongoDB Atlas cluster will become less relevant when we switch live session data to the MongoDB instances local to each game server; user details and some static info would be the only things left on the remote cluster. Each game server is going to be independent of the others, and its data will be periodically synced back to the main remote cluster to be persisted.