You built a React app, added Open Graph meta tags in index.html, pushed to production, and shared the URL on Slack. The preview is blank — no title, no image, no description. Your tags look correct in the browser, so what went wrong? The answer lies in how social crawlers work, and it has nothing to do with your tag syntax.
How Social Crawlers See a React SPA
Social media crawlers — Twitterbot, LinkedInBot, facebookexternalhit, Slackbot — are not full browsers. They fetch the raw HTML from your server and stop there. They do not execute JavaScript, do not wait for React to hydrate, and do not render your components. What they see is the static shell your server returns before React runs.
For a typical Create React App or Vite SPA, that static shell is index.html — a nearly-empty document with a <div id="root"></div> and a few script tags. Any <meta> tags you added there are static and identical for every URL. The crawlers read those tags and build a preview from them — which is why every page on your site shows the same generic preview, or no preview at all if you didn't add tags to index.html.
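You can demonstrate this to yourself without any network calls. The sketch below (the shell HTML and the `extractMetaTags` helper are illustrative, not from any library) reads meta tags out of raw HTML exactly the way a crawler does — as text, with no JavaScript execution — and shows that a SPA shell yields the same tags for every route:

```typescript
// Pull og:* meta tags out of raw HTML the way a crawler would:
// by scanning the markup as text, never executing any scripts.
function extractMetaTags(html: string): Record<string, string> {
  const tags: Record<string, string> = {}
  const re = /<meta\s+property="(og:[^"]+)"\s+content="([^"]*)"\s*\/?>/g
  let m: RegExpExecArray | null
  while ((m = re.exec(html)) !== null) {
    tags[m[1]] = m[2]
  }
  return tags
}

// A typical SPA shell: the server returns these same bytes
// whether the request was for /products/1 or /about.
const spaShell = `<!doctype html>
<html>
  <head>
    <meta property="og:title" content="My App" />
  </head>
  <body><div id="root"></div><script src="/assets/index.js"></script></body>
</html>`

console.log(extractMetaTags(spaShell))
```

Whatever route you "fetch", the result is the one static tag from index.html — which is precisely what the crawler builds its preview from.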
Why react-helmet and document.title Don't Help
Libraries like react-helmet, react-helmet-async, and @tanstack/react-head let you set <meta> tags from your React components. They work perfectly for users browsing your site in a real browser — the tags are updated dynamically as routes change. But they have no effect on social crawlers, because crawlers never run the JavaScript that calls these libraries. From the crawler's perspective, your react-helmet calls don't exist.
Three Ways to Fix OG Tags in a React App
1. Switch to Next.js (or Remix)
The cleanest solution is to use a framework that renders HTML on the server. Next.js App Router ships with a first-class Metadata API that generates correct <meta> tags in the server response for each route. No separate library needed, no configuration required.
```typescript
// app/products/[id]/page.tsx
import type { Metadata } from 'next'

export async function generateMetadata({ params }: { params: { id: string } }): Promise<Metadata> {
  const product = await getProduct(params.id)
  return {
    title: product.name,
    description: product.description,
    openGraph: {
      title: product.name,
      description: product.description,
      images: [{ url: product.imageUrl, width: 1200, height: 630 }],
    },
    twitter: { card: 'summary_large_image' },
  }
}
```

Next.js calls generateMetadata on the server for each request and injects the returned tags into the HTML before sending it to the client. The crawler receives a fully formed HTML page with the correct meta tags for that specific route.
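For routes whose tags don't depend on fetched data, the App Router also accepts a static metadata export — no function needed. A sketch (the route path and strings here are illustrative):

```typescript
// app/about/page.tsx — a fixed route can export static metadata instead
import type { Metadata } from 'next'

export const metadata: Metadata = {
  title: 'About Us',
  description: 'Who we are and what we build.',
  openGraph: {
    title: 'About Us',
    description: 'Who we are and what we build.',
  },
}
```

Next.js serializes either form into `<meta>` tags in the server response, so crawlers see the same thing users do.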
2. Add a Prerendering Service (No Framework Change Required)
If migrating to Next.js is not feasible, you can add a prerendering middleware that intercepts requests from crawlers and serves them a server-rendered HTML snapshot. Services like Prerender.io, Rendertron, and Netlify's crawler-friendly prerendering intercept requests where the user-agent is a known bot and return a prerendered version of the page.
The typical setup is a proxy or CDN rule: if the User-Agent matches a bot pattern, forward the request to the prerendering service; otherwise serve the normal SPA. This adds no complexity to your React codebase but introduces a dependency on an external service and a small latency overhead for bot requests.
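If you implement the routing yourself rather than via a CDN rule, the core is a user-agent check plus a conditional forward. A minimal sketch, assuming a Prerender.io-style service (the token is a placeholder, and the bot pattern is deliberately incomplete):

```typescript
// Known social-crawler user-agent fragments (extend as needed).
const BOT_PATTERN =
  /twitterbot|linkedinbot|facebookexternalhit|slackbot|discordbot|whatsapp/i

function isBot(userAgent: string): boolean {
  return BOT_PATTERN.test(userAgent)
}

async function handle(request: Request): Promise<Response> {
  const ua = request.headers.get('user-agent') || ''
  if (!isBot(ua)) {
    return fetch(request) // humans get the normal SPA
  }
  // Bots get the prerendered snapshot from the external service.
  return fetch(`https://service.prerender.io/${request.url}`, {
    headers: { 'X-Prerender-Token': 'YOUR_TOKEN' },
  })
}
```

Keep the bot pattern in one place — preview bots come and go, and a stale list is the usual reason one platform's preview breaks while the others work.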
3. Inject Per-Route Meta Tags at the CDN Edge
For specific, well-known routes (product pages, blog posts, marketing pages), you can use a CDN edge function or Cloudflare Worker to rewrite the HTML response and inject the correct meta tags before the client receives it. The CDN fetches your page's data, constructs the meta tags, and inserts them into the index.html shell.
```typescript
// Example Cloudflare Worker pattern
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)
    const userAgent = request.headers.get('user-agent') || ''
    const isCrawler = /twitterbot|linkedinbot|facebookexternalhit|slackbot/i.test(userAgent)
    if (!isCrawler) {
      return fetch(request) // serve SPA normally
    }
    // Fetch page data for this route
    const data = await getPageData(url.pathname)
    // Fetch the index.html shell from the origin
    const response = await fetch(request)
    let html = await response.text()
    // Inject OG tags before </head>
    const ogTags = `
      <meta property="og:title" content="${data.title}" />
      <meta property="og:description" content="${data.description}" />
      <meta property="og:image" content="${data.imageUrl}" />
      <meta name="twitter:card" content="summary_large_image" />
    `
    html = html.replace('</head>', `${ogTags}</head>`)
    return new Response(html, response)
  }
}
```

This approach keeps your React codebase unchanged and works well for routes with a predictable data shape, but it becomes harder to maintain as the number of route patterns grows.
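One caveat with string interpolation: if `data.title` or `data.description` can contain quotes or angle brackets, the value can break out of the `content="..."` attribute. A small escaping helper (a sketch — not part of the Workers API) closes that hole:

```typescript
// Escape a value before interpolating it into an HTML attribute,
// so quotes and angle brackets in page data can't break the markup.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
}

// e.g. inside the worker:
// `<meta property="og:title" content="${escapeHtml(data.title)}" />`
```

Note that `&` must be replaced first, or the later replacements would be double-escaped.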
Which Approach Should You Use?
- New project — use Next.js from the start. The Metadata API is purpose-built for this and requires no additional setup.
- Existing CRA/Vite app, few shareable routes — edge injection at the CDN is the lowest-friction option.
- Existing CRA/Vite app, many dynamic routes — a prerendering service is more maintainable than per-route edge logic.
- Already on Next.js Pages Router — keep setting tags with next/head, fed by data from getServerSideProps or getStaticProps, or migrate to the App Router to use generateMetadata.
Verifying What Crawlers Actually See
After making any of these changes, always verify the result by fetching your URL server-side — not by checking in a browser, where JavaScript has already run and filled in the tags correctly. Social crawlers see only the server-rendered HTML, so that's what you need to check.
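A quick spot-check you can run yourself: fetch the URL with a crawler user-agent and list which og: tags the raw HTML actually contains. A sketch (the URL is a placeholder; `listOgTags` is an illustrative helper, not a library function):

```typescript
// check-og.ts — fetch a page the way Twitterbot does and list its og: tags.
const CRAWLER_UA = 'Twitterbot/1.0'

function listOgTags(html: string): string[] {
  const found: string[] = []
  const re = /<meta\s+property="(og:[^"]+)"/g
  let m: RegExpExecArray | null
  while ((m = re.exec(html)) !== null) found.push(m[1])
  return found
}

async function checkUrl(url: string): Promise<void> {
  // No JavaScript runs here — like a crawler, we see only the raw HTML.
  const res = await fetch(url, { headers: { 'user-agent': CRAWLER_UA } })
  console.log(listOgTags(await res.text()))
}
```

If this prints an empty array for a route that looks fine in your browser, the tags are being added client-side and crawlers never see them.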
OGProof fetches your URL exactly as each crawler does — server-side, with no JavaScript execution, using the correct user-agent for each platform. It shows you the actual tags that Twitter, LinkedIn, Facebook, Discord, Slack, and iMessage will read, and validates each tag against each platform's documented specs. If your React fix worked, you'll see accurate previews. If something is still missing, OGProof tells you which tags are absent and why.
- Detects when your SPA is returning the same meta tags for every URL (a sign that SSR is not working)
- Shows the raw HTML the crawler received, before any JavaScript execution
- Flags pages where og:title or og:image is missing or falls back to index.html defaults
- Validates that dynamic routes are generating per-page metadata, not a site-wide fallback