Looking at the graphic, can we assume that ReactJS is supported by Googlebot? No, it's a lot more complicated than that.
To understand why, we need to know how crawlers discover our URLs and index them.
For more complex debugging, try this command (Linux or Mac): curl --user-agent "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com
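If you prefer a script over a one-liner, here is a minimal sketch in TypeScript (assuming Node 18+ for the built-in fetch; the URL is a placeholder) that requests the same page with a Googlebot User-Agent and a regular browser User-Agent and compares what comes back:

// Minimal sketch (Node 18+ with global fetch assumed): fetch the same URL
// with two different User-Agent headers and compare the responses.
// A large difference hints at user-agent-based serving.
const url = process.argv[2] ?? "http://example.com"; // placeholder URL

const GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)";
const BROWSER_UA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36";

async function fetchAs(userAgent: string): Promise<string> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  return res.text();
}

async function main() {
  const [botHtml, browserHtml] = await Promise.all([
    fetchAs(GOOGLEBOT_UA),
    fetchAs(BROWSER_UA),
  ]);
  console.log(`Googlebot response: ${botHtml.length} bytes`);
  console.log(`Browser response:   ${browserHtml.length} bytes`);
  console.log(botHtml === browserHtml ? "Responses identical" : "Responses differ");
}

main().catch(console.error);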
You’re probably confused, as the green ticks on the infographic suggest that it is supported.
Yes, there is limited support, but it’s not the Crawler interpreting the website. Google has developed a “Web Rendering Service”, a separate piece of software that runs at different times from the main Crawler.
1st week – crawl the homepage (Crawler) and schedule the “Web Rendering Service” to visit the page —> render the homepage using the “Web Rendering Service” and find all the links and relevant info (it does not follow links on its own)
2nd week – crawl the homepage (data from the “Web Rendering Service” is used to crawl links not seen previously) —> render the homepage using the “Web Rendering Service” and find all the links and relevant info (it does not follow links on its own) – a toy model of this cycle follows below
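To make the cycle concrete, here is a toy model in TypeScript. The site, its pages and the weekly batches are entirely hypothetical; the point is only that a link visible solely after JavaScript rendering waits a full cycle before the Crawler picks it up:

// Illustrative sketch only: a toy model of the two-pass crawl/render cycle
// described above. The site, its links and the weekly schedule are hypothetical.
type Page = { url: string; jsLinks: string[] }; // links only visible after JS rendering

// Hypothetical site: the homepage links to /products only via client-side JS.
const site: Record<string, Page> = {
  "/": { url: "/", jsLinks: ["/products"] },
  "/products": { url: "/products", jsLinks: [] },
};

const indexed = new Set<string>();
let crawlQueue = ["/"]; // the Crawler's queue
let renderQueue: string[] = []; // what the "Web Rendering Service" will render

for (let week = 1; week <= 3; week++) {
  // Crawler pass: index what it already knows about, schedule rendering.
  const discoveredThisWeek: string[] = [];
  for (const url of crawlQueue) {
    indexed.add(url);
    renderQueue.push(url);
  }
  // Web Rendering Service pass: render pages and surface JS-only links.
  for (const url of renderQueue) {
    for (const link of site[url].jsLinks) {
      if (!indexed.has(link)) discoveredThisWeek.push(link);
    }
  }
  console.log(`week ${week}: indexed = ${[...indexed].join(", ")}`);
  crawlQueue = discoveredThisWeek; // links found by the WRS wait a whole cycle
  renderQueue = [];
}
// week 1: indexed = /
// week 2: indexed = /, /products

Notice that /products only makes it into the index in week 2, one full cycle after the homepage, because the Crawler can’t see the JS-only link until the “Web Rendering Service” has reported it back.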
As you can see, those two pieces of software don’t work together very well. The Crawler runs on its own schedule, independently of what the “Web Rendering Service” is doing, and it is the ultimate decision maker in the process of indexing your website. You can also notice that there is a minimum of a one-week lag in indexing pages returned by the “Web Rendering Service”, which can be very undesirable for quickly changing content, e.g. e-commerce shops or news websites. If you have a large website, it could take an unreasonably long time to index every page.
It’s also important to understand that the “Web Rendering Service” has a hard cut-off point after 5 seconds of loading: if your website takes longer than 5 seconds to load, the service will quit and pass nothing back to the Crawler. This will make your website invisible to Google.
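One rough way to approximate this check locally is to load your page in a headless browser with a hard 5-second budget. Below is a minimal sketch assuming Puppeteer is installed (npm i puppeteer); the real cut-off behaviour of the “Web Rendering Service” is not publicly specified, so treat this as an approximation, not a reproduction:

// Minimal sketch, assuming Puppeteer is installed. Loads the page with a
// hard 5-second budget, loosely mimicking the cut-off described above.
import puppeteer from "puppeteer";

const url = process.argv[2] ?? "http://example.com"; // placeholder URL

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  try {
    // Abort if the page has not reached network idle within 5 seconds.
    await page.goto(url, { waitUntil: "networkidle0", timeout: 5000 });
    const html = await page.content();
    console.log(`Rendered within budget: ${html.length} bytes of HTML`);
  } catch {
    console.log("Page did not settle within 5 seconds - the renderer may see nothing");
  } finally {
    await browser.close();
  }
}

main().catch(console.error);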
But not all hope is lost.
There are mechanisms that can make your website visible to the Crawler.
The main idea behind those mechanisms is to get the HTML rendered before it’s received by the browser, like the CGI method I described at the beginning of this article, where the server serves pre-rendered HTML to the search engine and non-rendered HTML to the standard user. We can confirm that this method works even when your website takes more than 5 seconds to load and the “Web Rendering Service” sees nothing. However, we cannot confirm what SEO penalties may be applied if the Crawler and the “Web Rendering Service” do not agree on the content they see. User-agent detection is critical here, and any small error can cost rankings or result in long-term penalties.
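As an illustration only, here is a minimal sketch of such user-agent detection using Express. The bot pattern, file paths and build layout are all hypothetical, and, as noted above, serving bots different content than users is exactly where penalties can arise, so any real implementation needs far more care than this:

// Minimal sketch, assuming Express (npm i express) and pre-rendered HTML
// stored in ./prerendered/ (both hypothetical). Bots get the pre-rendered
// page; everyone else gets the normal client-rendered app shell.
import express from "express";
import { readFileSync } from "fs";

const app = express();
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i; // deliberately incomplete list

app.get("/", (req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Pre-rendered HTML, produced ahead of time (e.g. by a headless browser).
    res.send(readFileSync("./prerendered/index.html", "utf8"));
  } else {
    // Regular users get the SPA shell and render it client-side.
    res.send(readFileSync("./dist/index.html", "utf8"));
  }
});

app.listen(3000, () => console.log("listening on :3000"));

Keep in mind that the content in ./prerendered/index.html must match what users eventually see after client-side rendering; the detection logic only changes who does the rendering, never what is rendered.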
It is a cool, trendy, new way of making websites; however, the tradeoffs in the area of SEO are too big at present: