I just re-wrote my website (https://sidkwakkel.com) in Meteor. To begin with, the site is pretty simple, and I was happy with how quickly I could get to a Minimum Viable Product (MVP). That said, I have an important issue to resolve before I go too much further.
I find that Google is not picking up (i.e. indexing) my blog posts. I’ve read some forums that say I should be doing server-side rendering, and others that say Google should be able to read my posts as-is. Any current advice on this?
How many blog posts do you have? You can integrate Prerender.io for free if you have < 250 pages.
Otherwise it’s hit-and-miss with Google. If you believe the rumors on Hacker News, there exist several Googlebots: “dumb” ones that don’t execute JavaScript, and “smart” ones that do.
Have you registered with the Google Search Console and submitted an XML sitemap? I did that and had no problem getting Google to accurately index all my (3,000+) JS Meteor pages.
But if you want absolute certainty, Prerender.io is probably your best bet.
OpenGraph is something you still need to consider. Can bots like Facebook’s Open Graph crawler correctly read your website content for social sharing? This is an added benefit of Prerender.
I have everything 100% working at www.StarCommanderOnline.com. If you inspect it, you’ll see it’s all groovy.
Just a quick thing, guys, since the blog is a dynamic application: posts are going to be added and the sitemap regenerated, but how can we resubmit the sitemap to Google Search Console? @regular_human @sidae @SkyRooms, any insights? Thanks a lot.
Two separate tasks there: (1) generating the sitemap and (2) submitting it to Google.
First, you can try a couple of packages out there in the wild, like gadicohen:sitemaps. I’ve never used them, but they might save you time. AFAIK Meteor has no out-of-the-box sitemap generation.
What I do is create a page within my application that serves only the raw sitemap data, then use a cron job and a bash script to curl that page, parse the response, and write it to disk as sitemap.xml.
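For illustration, here’s a minimal sketch of that sitemap route on a Meteor 1.x setup using the webapp package. The Posts collection, its field names, and the URL structure are assumptions; adjust them to your own schema:

```js
// server/sitemap.js
import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';
import { Posts } from '/imports/api/posts'; // assumed collection

// Serve sitemap XML at /sitemap.xml so a cron job can curl it.
WebApp.connectHandlers.use('/sitemap.xml', (req, res) => {
  const urls = Posts.find({}, { fields: { slug: 1, updatedAt: 1 } })
    .map(post =>
      `  <url>\n` +
      `    <loc>${Meteor.absoluteUrl(`posts/${post.slug}`)}</loc>\n` +
      `    <lastmod>${post.updatedAt.toISOString()}</lastmod>\n` +
      `  </url>`
    );

  const xml =
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls.join('\n') + '\n' +
    '</urlset>';

  res.writeHead(200, { 'Content-Type': 'application/xml' });
  res.end(xml);
});
```

From there, a nightly crontab entry along the lines of `0 3 * * * curl -s https://example.com/sitemap.xml > /path/to/public/sitemap.xml` (domain and path are just placeholders) handles the curl-and-write-to-disk step.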
As for submitting to Google, you can automate that process as well via their API if you don’t want to use their Search Console web interface.
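For the simple case, Google also accepts a sitemap “ping”: a plain HTTP GET to a fixed URL whenever the sitemap changes. A rough sketch using Meteor’s http package (`meteor add http`); the sitemap URL below is just an example, substitute your own:

```js
// server/pingGoogle.js
import { HTTP } from 'meteor/http'; // meteor add http

// Ask Google to re-fetch the sitemap after it has been regenerated.
export function pingGoogleSitemap() {
  const sitemapUrl = 'https://example.com/sitemap.xml'; // your own URL
  const pingUrl =
    'https://www.google.com/ping?sitemap=' + encodeURIComponent(sitemapUrl);
  const result = HTTP.get(pingUrl); // synchronous on the server
  console.log('Google sitemap ping status:', result.statusCode);
}
```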
Really, thanks @regular_human for clarifying all these things. Just a quick thing: why and where did you use a cron job and bash script? What I was thinking is using a simple FS function to write a file (sitemap.xml) and then using a cron job to submit that file to Google through their API.
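Roughly something like this for the write step (the output path is made up), reusing whatever code builds the XML string:

```js
// server/writeSitemap.js — rough idea only
import fs from 'fs';

// Instead of serving the XML over HTTP and curling it,
// write the generated sitemap string straight to disk.
export function writeSitemapToDisk(sitemapXml) {
  fs.writeFileSync('/var/www/mysite/public/sitemap.xml', sitemapXml);
}
```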