Google Indexing Posts, SSR


#1

Hey All,

So I’m trying to determine the best way to let Google index the posts/articles on my Meteor sites. I already use SSR for meta tags so pages can be shared on social media, and a server-rendered sitemap got the pages discovered, but now I need to SSR the pages themselves to get them indexed.

The question is: how can I SSR those pages without breaking the non-SSR side of the site?

I wanted to see if anyone has already solved this before I roll my own approach. Right now I’m leaning towards fully server-rendering all article posts… but won’t that require removing those routes from the client side? I’m not sure how that will affect the functionality.

Alternatively, can this be done without going SSR?

EDIT(as I’m researching):

Should I just switch to Iron Router? It seems that if I load the data first in Iron Router and then send it, it might work? Or can I render the Blaze templates server-side with Iron Router?


#2

Unfortunately I’m not sure how this is solved for Blaze… In React it’s as simple as adding the react-helmet package and using its <Helmet> component inside each page-level component, placing the new title and meta tags within it.
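A minimal sketch of that pattern (the component name, props, and fields here are hypothetical, but the <Helmet> usage is react-helmet’s standard API):

```jsx
import React from 'react';
import { Helmet } from 'react-helmet';

// Hypothetical page-level component: whatever Helmet children it renders
// override the document <head> for this route.
export default function ArticlePage({ article }) {
  return (
    <div>
      <Helmet>
        <title>{article.title}</title>
        <meta name="description" content={article.summary} />
      </Helmet>
      {/* ...article body... */}
    </div>
  );
}
```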

I haven’t used Iron Router in years so I’m not up to date on if there is a way to accomplish SSR with it. If not though I wouldn’t recommend the switch.


#3

I currently have a Meteor + Blaze online educational project. We use CSR for the whole website (no SSR), and for Google indexing and social-media sharing we use prerender. It’s a great alternative for all of your site’s SEO: Prerender caches all the pages of your website, so when Google requests any URL of your site it gets the page as it would look with SSR, with no extra effort or load on your server’s CPU.


#4

Hey, yes, I was reading about this. Are you running it on your own server? One thing confused me, though: how does it know whether it is being accessed by a bot or a regular user? I’m assuming you can also adjust how often it updates and such?


#5

So I ended up wiring prerender up through NGINX. All you need to do is register at prerender.io to get a token, then reconfigure your nginx config file as stated here:

Official prerender.io nginx.conf for nginx

Things to note:

In the if ($prerender = 0) { block, replace the body with wherever you want requests to go when they aren’t served a cached page. Mine looks like:

    if ($prerender = 0) {
        proxy_pass http://localhost:3005;
    }

I put all of my other proxy headers before the try_files directive:


  location / {
    proxy_read_timeout  90;

    proxy_http_version 1.1;  # important: prevents websocket errors
    # don't cache it
    proxy_no_cache 1;
    # even if cached, don't try to use it
    proxy_cache_bypass 1;

    # set headers
    add_header Cache-Control no-cache;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    # note: $connection_upgrade must be defined by a map block in the http context
    proxy_set_header Connection $connection_upgrade;

    try_files $uri @prerender;
  }

I added the no-cache directives to prevent the app constantly refreshing when I push a new version.

I do believe this is the best solution… I will look to get off prerender.io in the future.

I also noticed the initial cache takes quite some time, so perhaps I’ll make a GET request from my own server with the ?_escaped_fragment_= parameter whenever a new article is posted, so it gets cached automatically…


#6
if ($prerender = 0) {

I hope that’s just a transcription error. I think you mean

if ($prerender == 0) {

or even ===


#7

This is what’s in my NGINX config file… it only has one = in the example. Is that wrong? I don’t know if NGINX config syntax is the same as JS.


#8

I’m not running prerender on my own server; I’m using the official prerender.io service (they sell their system, but it’s free for under 250 pages). I’m not completely sure how it works, but I guess it has to do with the prerender package installed on the platform: when a new request comes in, maybe it routes it through prerender. Inside prerender.io, yes, you can adjust the re-cache interval for all your pages from 1 to 30 days. I normally set it to 30 since I don’t change them much, but if I push a new update to the website I have to re-cache all the pages (it’s just one click).


#9

One = is an assignment, not a test.


#10

In nginx, = is a test (equality test). Yeah, I know, against all sane rules out there.

Having said that, you should avoid IFs in nginx where you can, because, as the nginx wiki puts it, “if is evil”.


#11

Oops. My bad. Sorry about that.


#12

I haven’t used the new SSR that Meteor has, but if I remember correctly, prerender avoids breaking your app by detecting whether the visitor is a crawler, and only serving the prerendered page when it is. If you need to, writing a detector yourself might not be too hard.
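A simple user-agent based detector could look something like this (the bot list below is illustrative, not exhaustive — prerender’s own middleware ships a much longer one, and nginx-side detection works the same way by matching $http_user_agent):

```javascript
// Case-insensitive match against a few well-known crawler user-agent tokens.
const BOT_UA = /googlebot|bingbot|yandex|baiduspider|duckduckbot|twitterbot|facebookexternalhit|linkedinbot|slackbot/i;

// Returns true when the user-agent string looks like a crawler.
function isCrawler(userAgent) {
  return BOT_UA.test(userAgent || '');
}

// In a Meteor app this could gate SSR via a connect handler (hypothetical):
// WebApp.connectHandlers.use((req, res, next) => {
//   req.isCrawler = isCrawler(req.headers['user-agent']);
//   next();
// });
```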