JavaScript has become the dominant way businesses build websites. Most modern frameworks — React, Vue, Angular, Next.js — render some or all page content dynamically using JavaScript. For traditional Google SEO, this has been manageable — Googlebot executes JavaScript (albeit with some delay). But for AI crawlers and many other web crawlers, JavaScript rendering is a black box.
This guide explains the JavaScript rendering problem for AI crawlers, how the major AI platforms differ in handling it, and what you can do to ensure your key content is accessible to AI.
The JavaScript Rendering Problem for AI
When a crawler visits a web page, it receives the HTML response from the server. If the page content is rendered by JavaScript (loaded after the initial HTML), a crawler that doesn't execute JavaScript will only see the initial HTML — which may be nearly empty for heavily client-side rendered applications.
Consider a typical React SPA (Single Page Application):
What the server sends:
```html
<!DOCTYPE html>
<html>
  <head>...</head>
  <body>
    <div id="root"></div>
    <script src="bundle.js"></script>
  </body>
</html>
```
What the crawler sees (without JS execution):
An empty <div id="root"></div> and a script tag, with none of the page's actual content.
What the user sees (after JS executes): Your full product, services, contact information, and content.
For AI crawlers that don't execute JavaScript, your entire website might as well be blank.
Which AI Crawlers Execute JavaScript?
This varies significantly by platform:
| Crawler | JavaScript Execution |
|---|---|
| Googlebot (used by Gemini) | Yes — executes JavaScript, though with delays |
| PerplexityBot | Limited — may execute simple JS, not complex SPAs |
| GPTBot (OpenAI) | No — static HTML only |
| anthropic-ai (Claude) | No — static HTML only |
| Common Crawl | No — static HTML only |
| BingBot (used by Copilot) | Limited — better than most non-Google crawlers |
The practical implication: critical business information rendered only by JavaScript will be invisible to GPTBot and many other AI training crawlers. This directly impacts how well these AI models understand and represent your business.
Common Invisible Content in JavaScript-Rendered Sites
The most common types of content that AI crawlers miss:
Business Information in React Components
```jsx
// This content is invisible to most AI crawlers
const BusinessInfo = () => (
  <div>
    <h2>Contact Us</h2>
    <p>123 Main St, Austin, TX 78701</p>
    <p>(512) 555-1234</p>
  </div>
);
```
Dynamically Loaded FAQs
FAQ sections that expand/collapse via JavaScript are a common pattern — but the answers may not be in the initial HTML.
Late-Loaded Reviews
Review widgets that load via API call after page render are invisible to most AI crawlers.
JavaScript-Rendered Schema
Some sites inject JSON-LD via JavaScript rather than including it in the server HTML response. AI crawlers that don't execute JS won't see this schema.
Solutions: Making Content Visible to AI Crawlers
1. Server-Side Rendering (SSR)
SSR renders the page on the server before sending it to the client, ensuring the initial HTML response contains all content. This is the most complete solution.
Frameworks that support SSR:
- Next.js — React with SSR built-in (default App Router behavior)
- Nuxt.js — Vue with SSR
- SvelteKit — Svelte with SSR
- Remix — React with built-in SSR
Example (Next.js): With Next.js App Router, your components render server-side by default, producing HTML that all crawlers can read:
```jsx
// This renders server-side — content is in the initial HTML
export default async function ServicesPage() {
  return (
    <main>
      <h1>Our Services</h1>
      <p>We provide comprehensive dental care in Austin, TX.</p>
    </main>
  );
}
```
2. Static Site Generation (SSG)
SSG pre-renders pages at build time, creating static HTML files that all crawlers can read immediately. Ideal for marketing sites where content doesn't change with every request.
Most SSR frameworks also support SSG for static pages.
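Stripped of framework specifics, SSG amounts to running a render function once at build time and saving the output as a static file. A minimal framework-agnostic sketch (the function name and page data are made up for illustration):

```javascript
// Minimal SSG sketch (illustrative, not tied to any framework):
// render a page to a complete HTML string once, at build time.
function renderServicesPage({ title, body }) {
  return [
    '<!DOCTYPE html>',
    '<html><head><title>' + title + '</title></head>',
    '<body><main><h1>' + title + '</h1><p>' + body + '</p></main></body></html>',
  ].join('\n');
}

const html = renderServicesPage({
  title: 'Our Services',
  body: 'We provide comprehensive dental care in Austin, TX.',
});

// At build time you would write this to disk, e.g.:
// require('fs').writeFileSync('services.html', html);
console.log(html);
```

Because the HTML exists before any request arrives, every crawler, JavaScript-capable or not, receives the full content.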
3. JSON-LD Schema in Document <head>
If your site uses client-side rendering, ensure JSON-LD schema is included in the server-rendered <head>, not injected by JavaScript:
```html
<!-- Good: In the server-rendered head -->
<head>
  <script type="application/ld+json">
    {"@context": "https://schema.org", "@type": "LocalBusiness", ...}
  </script>
</head>

<!-- Avoid: JavaScript-injected schema -->
<script>
  // This may not be seen by JS-blind crawlers
  const script = document.createElement('script');
  script.type = 'application/ld+json';
  script.text = JSON.stringify({...});
  document.head.appendChild(script);
</script>
```
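If the schema must be generated dynamically, generate it on the server instead. A small sketch (the helper name is made up) that serializes the schema object into a script tag string before the HTML response is sent:

```javascript
// Sketch: build the JSON-LD <script> tag server-side so it ships in the
// initial HTML response (helper name is illustrative).
function jsonLdScriptTag(schema) {
  return '<script type="application/ld+json">' +
    JSON.stringify(schema) +
    '</script>';
}

const tag = jsonLdScriptTag({
  '@context': 'https://schema.org',
  '@type': 'LocalBusiness',
  name: 'Example Dental',
  telephone: '(512) 555-1234',
});
console.log(tag);
```

The resulting string is embedded in the `<head>` of the server response, so crawlers that never execute JavaScript still receive valid JSON-LD.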
4. Prerendering for Non-JS Crawlers
Some hosting platforms and reverse proxies support "prerendering" — detecting crawler user agents and serving a pre-rendered HTML version instead of the JavaScript bundle.
Services:
- Prerender.io — Detects crawlers and serves cached, rendered HTML
- Rendertron — Google's open-source prerendering solution
- Cloudflare Worker — Can be configured to serve prerendered pages for bots
This allows you to keep a JavaScript frontend for users while serving crawlable HTML to bots.
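At its core, prerendering middleware just branches on the request's user agent. A simplified sketch (the agent list here is illustrative and incomplete; production services maintain up-to-date lists):

```javascript
// Naive crawler detection for a prerendering proxy (illustrative only;
// real services keep much larger, maintained user-agent lists).
const BOT_SUBSTRINGS = ['GPTBot', 'anthropic-ai', 'CCBot', 'PerplexityBot', 'bingbot'];

function shouldServePrerendered(userAgent) {
  return BOT_SUBSTRINGS.some((bot) => userAgent.includes(bot));
}

// A request handler would then pick the response:
// if (shouldServePrerendered(req.headers['user-agent'])) serve cached rendered HTML
// else serve the normal JavaScript bundle
console.log(shouldServePrerendered('Mozilla/5.0 (compatible; GPTBot/1.0)')); // true
console.log(shouldServePrerendered('Mozilla/5.0 (Windows NT 10.0) Chrome/120')); // false
```

Serving different markup to bots and users is acceptable as long as the content is equivalent; serving substantively different content is cloaking and risks penalties.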
5. HTML-First Dynamic Content Loading
For content that must be dynamic (like reviews pulled from an API), implement "progressive enhancement":
- Include a static version of the content in the server HTML
- Replace it with live data after JavaScript loads
This ensures crawlers see real content rather than a loading spinner.
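A sketch of the pattern, assuming a hypothetical reviews endpoint: the server renders a static snapshot into the HTML, and client-side code later swaps in fresh data.

```javascript
// Server side: render a static snapshot of reviews into the HTML so
// crawlers see real content (the data shape is illustrative).
function renderReviewsHtml(reviews) {
  const items = reviews
    .map((r) => '<li>' + r.author + ': ' + r.text + '</li>')
    .join('');
  return '<ul id="reviews">' + items + '</ul>';
}

const staticHtml = renderReviewsHtml([
  { author: 'Jane', text: 'Great service.' },
]);
console.log(staticHtml);

// Client side (browser only), after JavaScript loads, replace the
// snapshot with live data from a hypothetical endpoint:
// fetch('/api/reviews')
//   .then((res) => res.json())
//   .then((reviews) => {
//     document.getElementById('reviews').outerHTML = renderReviewsHtml(reviews);
//   });
```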
6. Critical Content in Server-Rendered Elements
Identify the most AI-critical content on your pages and ensure it's server-rendered:
- Business name, address, phone number
- Service descriptions
- FAQ sections
- Team and credentials
- Pricing
Dynamic features (interactive maps, real-time availability) can remain client-side, but this core content should be in the initial HTML response.
Auditing Your Site for AI Crawlability
Test what AI crawlers actually see from your website:
Using curl (simulates a non-JS crawler):
```bash
curl -L https://yourdomain.com | grep -i "service\|hours\|phone\|address"
```
If key content doesn't appear in the curl output, it's JavaScript-rendered and may be invisible to many AI crawlers.
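The same check can be scripted. A sketch that reports which key terms are missing from the raw server HTML, i.e. the HTML a non-JS crawler receives (the term list is illustrative; fetching is shown as a comment so the logic stays self-contained):

```javascript
// Report which key business terms are missing from the raw server HTML.
function missingTerms(html, terms) {
  const haystack = html.toLowerCase();
  return terms.filter((t) => !haystack.includes(t.toLowerCase()));
}

// In practice you would fetch the page first, e.g.:
// const html = await (await fetch('https://yourdomain.com')).text();
const spaShell = '<!DOCTYPE html><html><body><div id="root"></div></body></html>';
console.log(missingTerms(spaShell, ['service', 'hours', 'phone', 'address']));
// → all four terms, because the SPA shell contains no content
```

Run against a server-rendered page, the same check should come back empty for your core business terms.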
Google's Mobile-Friendly Test and Rich Results Test: Google's tools execute JavaScript before evaluating. Use these to test Google's view, but be aware that most non-Google AI crawlers see the non-JS version.
View Page Source: In your browser, right-click → View Page Source (not Inspect Element, which shows the DOM after JavaScript runs). This shows what crawlers receive in the initial HTML response.
The Business Impact of Poor AI Crawlability
For businesses with heavily JavaScript-rendered sites, the AI visibility impact can be significant:
- GPTBot (ChatGPT training) cannot learn your specific services and pricing
- Claude training crawlers may represent you inaccurately or not at all
- Your schema markup may be invisible if injected by JavaScript
If you've wondered why AI platforms seem to poorly represent your business despite a modern, professional website — JavaScript rendering may be the reason.
Q: Does React/Vue automatically make content invisible to AI? A: Not necessarily — it depends on whether you use SSR. React and Vue with SSR (Next.js, Nuxt.js) render server-side and produce AI-crawlable HTML. React and Vue without SSR (traditional Create React App, Vite SPA) render client-side and produce JavaScript-dependent content that many AI crawlers cannot read.
Q: If Googlebot can execute JavaScript, why do other AI crawlers need server-side rendering? A: Googlebot has years of development specifically for JavaScript rendering, with massive infrastructure to execute JavaScript at scale. Most other crawlers — including AI training crawlers — are simpler systems optimized for speed rather than JavaScript execution.
Q: Will AI crawlers ever catch up to Googlebot's JavaScript execution capability? A: Possibly, over time. But the trend in web performance is also toward SSR and hybrid rendering, which serves both users and crawlers well. Investing in server-side rendering benefits your traditional SEO, Core Web Vitals, and AI crawlability simultaneously.