If you work with JAMstack sites (such as those running on React, Angular, or Vue.js) and want to know if your pages are correctly rendered and indexed, this post is for you.
Let’s dive in!
Basically, JS makes it possible for websites to include interactive features such as:
- Complex micro-interactions
- Slider controls
- Web-based games
JS can also benefit your SEO:
- It can improve content rendering, boosting your rankings
- It can help you implement sustainable interlinking practices
- It can improve your loading speed
Even though SEO specialists and developers often work together, they prioritize different things, and that can be a challenge when it comes to ranking your site. Devs focus on making a website maintainable, reliable, performant, and functional, but they may overlook important SEO factors before shipping their code.
As simple as it sounds, a well-coded JS element can inadvertently hide content from Googlebot. For example, let’s say you have an accordion that contains very important content. When the user clicks its toggle, the accordion opens and the information is shown. But, depending on how that element is developed, the content may not be easily accessible to search engine bots.
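Here’s a minimal sketch of the two patterns (the selectors and the /api/accordion-content endpoint are hypothetical):

```js
// Pattern 1 (bot-friendly): the content already ships in the server-sent HTML,
// and the click only toggles its visibility with a CSS class.
document.querySelector('.accordion-toggle')?.addEventListener('click', (event) => {
  event.currentTarget.nextElementSibling.classList.toggle('is-open');
});

// Pattern 2 (risky): the content only enters the DOM after a click,
// so a crawler that never fires the event may never see it.
document.querySelector('.accordion-toggle-lazy')?.addEventListener('click', async (event) => {
  const response = await fetch('/api/accordion-content'); // hypothetical endpoint
  event.currentTarget.nextElementSibling.innerHTML = await response.text();
});
```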
JS can also inadvertently:
- Replace a canonical tag
- Inject a different title
- Inject a different meta description
- Increase your Cumulative Layout Shift
- Make your site slower
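For instance, a few stray lines of client-side code are enough to override the head tags your server sent (the URLs below are placeholders):

```js
// Each of these silently overrides what the server delivered in <head>:
document.title = 'A different title'; // replaces the title tag

document.querySelector('link[rel="canonical"]')
  ?.setAttribute('href', 'https://example.com/some-other-page'); // swaps the canonical

document.querySelector('meta[name="description"]')
  ?.setAttribute('content', 'A different meta description'); // swaps the description
```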
As a result, Google may take longer than usual to crawl your pages and may even overlook the most relevant ones. In other words, sometimes even technically correct JS (from a developer’s standpoint) may cause crawling and indexing obstacles.
If you’re running a JAMstack website, you may come across crawling and loading speed issues. These are often solved by switching from client-side to server-side rendering. We’ll take a deeper look in the next section.
This process is often understood as consisting of two “indexing/crawling waves”. Two years ago, Google’s Martin Splitt questioned this idea, stating that “the wave is an oversimplification”.
However, we’ll use the wave metaphor to bring some clarity to this very abstract process.
Indexing Static HTML: The First Wave
First, Google decides which elements are necessary to represent the content on the page. To do this, it crawls the static HTML files, leaving out any linked CSS or JS files.
How? Well, Googlebot starts with your site’s HTML URLs. Before requesting a URL, it checks your robots.txt file; if the URL is invalid or marked as disallowed, it skips the HTTP request. Otherwise, it sends the request to the server, which responds with the HTML document the crawler downloads and parses.
Rendering Dynamic Content: The Second Wave
In the second wave, Googlebot returns to render the page, executing your JS and fetching the resources it needs to do so. These resources include:
- HTML pages
- CSS files
- XHR requests
- API endpoints, and more
Do these rendering capabilities remove all worries around indexing? Not really. Indexing remains a challenge for a couple of reasons:
- Google takes a long time to index and render JS content due to limited computing resources.
- Google can have difficulty executing rendering code in its entirety.
By now, you should have a better idea of:
- How Google crawls your web content
- How Google renders your web content
- How Google indexes your web content
You’ll get the best results if you follow some basic SEO guidelines.
We recommend you:
- Allow resource crawling
- Prevent duplicate content
- Keep your SEO basics in check
- Monitor your loading speed
When it comes to rendering, you have three main options:
- Server-side rendering
- Client-side rendering
- Dynamic rendering
In this section, we’ll take a look at each of them and share some SEO pros and cons.
Server-side Rendering (SSR)
SSR is the process of serving a webpage to a client with its HTML structure and dynamic content fully integrated. This type of rendering does the job of combining the structure and the content on the server, and delivering it to users (and search bots) “in one piece”.
Websites that are rendered server-side:
- Load fast
- Are unlikely to experience partial indexing
- Are more resource-intensive than their client-side counterparts
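As a rough sketch of the idea, here’s what SSR can look like in a minimal Node/Express setup (getProduct is a hypothetical data-fetching helper):

```js
const express = require('express');
const app = express();

app.get('/products/:id', async (req, res) => {
  const product = await getProduct(req.params.id); // hypothetical data fetch
  // Structure and content are combined on the server, so users and bots
  // both receive the finished HTML "in one piece".
  res.send(`<!doctype html>
<html>
  <head><title>${product.name}</title></head>
  <body><h1>${product.name}</h1><p>${product.description}</p></body>
</html>`);
});

app.listen(3000);
```

In practice, JAMstack frameworks such as Next.js or Nuxt typically handle this step for you.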
Client-side Rendering (CSR)
JAMstack websites that render client-side “assemble” the HTML structure and the dynamic content directly in the browser when the user (or bot) requests the page.
This type of rendering is easier on the server than the alternatives, but it’s far more likely to cause SEO issues such as:
- High cumulative layout shift
- Slow loading
- Partial indexing
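Here’s a hedged sketch of the same page rendered client-side (the #app container and /api/products endpoint are illustrative):

```js
// The server ships a nearly empty HTML shell; the browser builds the page.
document.addEventListener('DOMContentLoaded', async () => {
  const response = await fetch('/api/products/42'); // hypothetical endpoint
  const product = await response.json();
  // Until this code runs, a crawler that doesn't execute JS sees an empty shell.
  document.getElementById('app').innerHTML =
    `<h1>${product.name}</h1><p>${product.description}</p>`;
});
```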
Dynamic Rendering
Dynamic rendering combines the best of client-side and server-side rendering. It delivers a server-side version of the page to Googlebot and a client-side version to users. That way, you will:
- Prevent partial indexing issues
- Save on server resources
This solution has been promoted by Google, and it’s great for preventing crawling and indexing issues. However, it doesn’t solve potential page speed problems on the user’s side.
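One common way to implement this is user-agent detection on the server. Here’s a hedged Express sketch, where servePrerenderedPage is a hypothetical stand-in for a prerender service or cache:

```js
const express = require('express');
const app = express();

// Known crawlers get prerendered HTML; everyone else gets the JS bundle.
const BOT_UA = /googlebot|bingbot|yandex|baiduspider/i;

app.use((req, res, next) => {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    return servePrerenderedPage(req, res); // hypothetical prerender helper
  }
  next(); // regular users receive the client-rendered version
});
```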
At this point, you may be wondering if this constitutes cloaking. As long as both versions show the same content, search engines do not consider it cloaking. In other words, the content that users and search engines see should be the same, only the rendering type should change.
In any case, you can check whether your content is rendered correctly in Google Search Console. Here’s how:
- Enter the URL of the page you want to inspect in the URL Inspection tool.
- Click the “Test Live URL” button at the top right of the page; after a few seconds, a “Live Test” tab will appear.
- Click “View Tested Page” in that tab to see how Google rendered the page.
- Click “More Info” to find out whether there were any crashes, errors, or timeouts.
After determining if your website is rendering correctly, it’s time to see how it’s being indexed. You can do this through Google Search Console or through the search engine itself.
Using the browser:
- Use the site: operator on Google with the URL of the page you want to check, e.g. “site:exampledomain.com/page-URL”
- If the page is in Google’s index, it will appear in the search results.
Using Google Search Console:
- Open the URL Inspection tab. You’ll find it in the sidebar, under Performance.
- Enter the URL you want to examine in the search bar and press Enter.
- If the page is indexed on Google, you can view the source code of its indexed version by clicking View Crawled Page.
- Check whether your content appears in the source code.
Allow Resource Crawling
Googlebot needs access to your JS and CSS files to render your pages, so make sure your robots.txt file doesn’t block them:

```
User-Agent: Googlebot
Allow: *.js
Allow: *.css
```
Prevent Duplicate Content
Oftentimes, a website displays the same content across different URLs. In Google’s eyes, this is considered duplicate content. There are many ways to handle your duplicate content. In some cases, the best solution is to select the URL you want to index (i.e. leave it visible to search engines) and set a canonical tag.
However, in other instances, you may want to delete one instance of the duplicated content or set a 301 redirect.
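As a quick illustration (the URLs are placeholders, and the redirect assumes an Express app like the earlier sketches), the two fixes look like this:

```js
// Option 1: mark the preferred URL with a canonical tag in the page <head>:
//   <link rel="canonical" href="https://example.com/preferred-page" />

// Option 2: retire the duplicate URL with a 301 redirect (Express example).
app.get('/duplicate-page', (req, res) => {
  res.redirect(301, 'https://example.com/preferred-page');
});
```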
Don’t Lose Sight of Your On-Page SEO Basics
On-page SEO is extremely important. Implement on-page SEO best practices to make sure that users and search engines can easily digest your content. Always make sure to:
- Set and optimize your pages’ title tags and meta descriptions
- Implement image alt tags
- Maintain your sitemaps
- Create a valid robots.txt file
Keep an Eye on Speed Performance
Page speed affects both your rankings and your users’ experience. Therefore, you should work hand-in-hand with your developers to:
- Use Google’s PageSpeed Insights to monitor page speed and find new ways to optimize your website.
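You can even automate these checks with the PageSpeed Insights API (v5). A minimal sketch, assuming you just want the mobile performance score:

```js
// Fetch the Lighthouse performance score for a page via the PSI v5 API.
const pageUrl = 'https://example.com/'; // placeholder URL
const apiUrl =
  'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
  `?url=${encodeURIComponent(pageUrl)}&strategy=mobile`;

const report = await (await fetch(apiUrl)).json();
// Lighthouse scores range from 0 to 1; multiply by 100 for the familiar scale.
console.log(report.lighthouseResult.categories.performance.score * 100);
```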
Keep an Eye on Your Site’s Code with SEORadar
We constantly monitor your code and alert you to any changes that may affect your SERP positioning, via email, Slack, or SMS. If some potential changes worry you more than others, simply customize the priority of those alerts. In short, we stay on top of your site’s code, so you don’t have to.