Yeah, Google *can* still struggle with JS-heavy pages. In theory Googlebot renders most JS these days, but in real-world SEO I still see gaps all the time, especially when content is loaded late, blocked, or depends on user interaction.
My quick take from messing with this stuff:
– **SSR is usually safer than CSR for SEO**
– If the important content is in the initial HTML, you’re giving Google way less room to screw it up.
– CSR can work, but it’s more fragile and slower to fully process (rough sketch of the failure mode below).
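To make that concrete, here’s the fragile CSR shape, a minimal sketch in plain React assuming a made-up `/api/product` endpoint. View-source on a page like this shows only the loading state; the real text exists only after the browser runs the fetch:

```tsx
// Client-side rendering: the server ships an empty shell and the real
// content only exists in the DOM after this fetch resolves in the browser.
// curl / view-source on this page will NOT show the description.
import { useEffect, useState } from "react";

export default function ProductPage() {
  const [description, setDescription] = useState<string | null>(null);

  useEffect(() => {
    // Hypothetical endpoint: Googlebot has to render the JS *and* wait
    // for this request before the text is even in the DOM.
    fetch("/api/product/42")
      .then((res) => res.json())
      .then((data) => setDescription(data.description));
  }, []);

  return <main>{description ?? "Loading…"}</main>;
}
```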
– **Google doesn’t “ignore” JS content on purpose**
– It’s more like: crawl budget, rendering delays, blocked resources (robots.txt example below), timeouts, or the content not being in the DOM when Google takes its snapshot.
– If the page is heavy, Google may index the shell first and come back later… or not fully come back.
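The “blocked resources” case is worth calling out because it’s self-inflicted and easy to miss: if robots.txt disallows your script bundles, Googlebot loads the page but never executes the JS that builds it. Hypothetical paths, just to show the shape:

```
# robots.txt -- hypothetical paths. If your content is client-rendered,
# disallowing the bundle directories means Googlebot can fetch the page
# but not the scripts that create the content.
User-agent: *
Disallow: /static/js/
Disallow: /_next/    # on a Next.js site this blocks the framework bundles
```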
– **How to test it**
– Use **URL Inspection in Google Search Console** and check the rendered HTML / screenshot.
– Compare **view-source** vs **rendered DOM** in Chrome DevTools.
– Also test with **curl** or a simple fetch to see what’s actually in the raw HTML (small script below).
– If the main text only appears after a bunch of JS calls, that’s a red flag.
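For the curl/fetch check, here’s a tiny script version, a sketch assuming Node 18+ (built-in `fetch`); the URL and the marker phrase are placeholders for your page and your money copy:

```ts
// raw-html-check.ts -- is the text we care about present BEFORE any JS runs?
// Assumes Node 18+ for built-in fetch; url and mustContain are placeholders.
const url = "https://example.com/some-page";
const mustContain = "the key paragraph you need indexed";

async function main() {
  const res = await fetch(url);
  const html = await res.text();

  if (html.includes(mustContain)) {
    console.log("OK: phrase is in the raw HTML, no JS required.");
  } else {
    console.log("Red flag: phrase missing from raw HTML; it only appears after JS runs.");
  }
}

main().catch(console.error);
```

Rough curl equivalent: `curl -s https://example.com/some-page | grep "the key paragraph"`.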
– **React / Next.js**
– Not automatically risky, but they become a problem if you rely on pure client-side rendering.
– Next.js with **SSR or SSG** is usually much better; see the sketch below.
– I’ve seen React sites index fine when they ship meaningful HTML upfront. I’ve also seen them tank when everything is loaded after hydration.
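And the safer Next.js shape, a sketch assuming the pages router; the endpoint and prop are made up. With `getStaticProps` (SSG) the content is baked into the HTML at build time, so there’s nothing for Google to wait on:

```tsx
// pages/product.tsx -- SSG: data is fetched at build time and baked into
// static HTML, so curl/view-source (and Googlebot) see the text without
// executing any JS. Swap in getServerSideProps if the data must be
// fresh on every request (SSR); the indexing story is similar.
type Props = { description: string };

export async function getStaticProps(): Promise<{ props: Props }> {
  const res = await fetch("https://api.example.com/product/42"); // hypothetical
  const data = await res.json();
  return { props: { description: data.description } };
}

export default function ProductPage({ description }: Props) {
  return <main>{description}</main>;
}
```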
Practical rule I use:
If the content matters for rankings, **put it in the HTML first**. Don’t make Google wait for it like a user on a slow phone.
If you want, I can also give you a simple checklist for diagnosing a JS indexing issue step by step.