Three Sites Run Through IndexReady: Real SEO and GEO Scores with Item-Level Evidence
Why this report exists
IndexReady scores a public URL across 15 SEO items and 11 GEO items. That description alone makes it hard to judge how the score actually maps to a page's implementation. To make the mapping concrete, we ran the tool on three different kinds of public pages and published the full breakdown — the points, the detection evidence, and the limits of automated scoring.
The goals are:
- Show, with real numbers, what IndexReady actually measures
- Compare three pages with different design intents (a product homepage, a developer reference, an encyclopedia entry)
- Document which items reliably score high and which reliably miss
This is not a ranking-prediction post. The numbers below describe what IndexReady saw at the time of the scan, nothing more. The pages are referenced as factual examples of SEO and GEO patterns, not as targets of criticism.
Test conditions
Every analysis was run under the same conditions, so the steps are reproducible.
| Field | Value |
|---|---|
| Date of scan | 2026-05-09 |
| Environment | IndexReady (Next.js 14, Contabo VPS) |
| Data sources | HTML / robots.txt / sitemap.xml / llms.txt / Google PageSpeed Insights API |
| Measurement | PageSpeed score is mobile-based; CWV values come straight from the PSI API |
| Locale flag | ja (the URLs tested happen to be Japanese pages) |
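For reproducibility: the PSI call is the only external API involved. Below is a minimal sketch of how such a request can be assembled — the endpoint and the `url` / `strategy` / `key` parameters are part of the public PageSpeed Insights v5 API, but the wrapper itself is illustrative, not IndexReady's actual code.

```typescript
// Build a PageSpeed Insights v5 request URL. "strategy=mobile" matches the
// measurement condition stated in the table above.
const PSI_ENDPOINT =
  "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

function buildPsiUrl(target: string, apiKey?: string): string {
  const params = new URLSearchParams({ url: target, strategy: "mobile" });
  if (apiKey) params.set("key", apiKey);
  return `${PSI_ENDPOINT}?${params.toString()}`;
}

// Usage (LCP / CLS values live under lighthouseResult.audits in the response):
// const report = await fetch(buildPsiUrl("https://index-ready.jp/ja")).then(r => r.json());
```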
The three URLs
| # | URL | Type |
|---|---|---|
| 1 | https://index-ready.jp/ja | Our own product homepage |
| 2 | https://developer.mozilla.org/ja/docs/Web/HTML/Element/title | MDN HTML element reference |
| 3 | https://ja.wikipedia.org/wiki/SEO | Japanese Wikipedia article |
All three are public pages. Wikipedia content is Creative Commons; MDN ships under an open license; index-ready.jp is the site we operate, so we have full freedom to test it. Each page was selected because it represents a different category of operator decisions, not as a comparison contest.
Summary
| Site | Total | SEO | GEO |
|---|---|---|---|
| index-ready.jp | 185 / 200 | 94 / 100 | 91 / 100 |
| MDN (HTML title) | 137 / 200 | 88 / 100 | 49 / 100 |
| Wikipedia (SEO) | 107 / 200 | 62 / 100 | 45 / 100 |
The total scores line up roughly with intuition, but the breakdowns reveal very different strengths and weaknesses. The rest of the report walks through each case.
Case 1: index-ready.jp/ja (185 / 200)
We started with our own homepage. If we cannot score our own product well, the scoring logic is broken — that is the baseline check.
Strong SEO items (94 / 100)
The detected values that earned full points:
- title: IndexReady — SEO・生成エンジン最適化(GEO)対策を無料で自動採点するツール (46 chars)
- meta description: 109 chars
- headings: one h1, hierarchical h2/h3
- OGP: og:title, og:description, og:image all present
- canonical: https://index-ready.jp/ja
- robots.txt + sitemap.xml: detected
- HTTPS, lang="ja", viewport, no rogue noindex
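The length-based items above (title, meta description) reduce to simple range checks. A minimal sketch, assuming the commonly cited 30–60 character range for titles — IndexReady's exact thresholds may differ:

```typescript
// Range check for text-length items.
interface LengthResult {
  length: number;
  ok: boolean;
}

function checkLength(text: string, min: number, max: number): LengthResult {
  // Count Unicode code points rather than UTF-16 units, which matters
  // for Japanese titles like the one detected above.
  const length = [...text].length;
  return { length, ok: length >= min && length <= max };
}

// Usage: checkLength(titleText, 30, 60) or checkLength(descriptionText, 70, 160)
```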
Strong GEO items (91 / 100)
- llms.txt and llms-full.txt are both present
- AI crawlers explicitly allowed: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot, anthropic-ai
- JSON-LD: Organization / WebSite / WebApplication / FAQPage
- Concise paragraphs: 22 detected (full marks for "clear answers")
- Statistics in body text: 15 numeric mentions
- Freshness signals: datePublished and dateModified
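The JSON-LD item can be illustrated with a small extraction sketch. A production scanner would use a real HTML parser; this regex version only shows the shape of the check:

```typescript
// Extract @type values from <script type="application/ld+json"> blocks.
function jsonLdTypes(html: string): string[] {
  const types: string[] = [];
  const re =
    /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  for (const m of html.matchAll(re)) {
    try {
      const data = JSON.parse(m[1]);
      for (const node of Array.isArray(data) ? data : [data]) {
        if (node && typeof node["@type"] === "string") types.push(node["@type"]);
      }
    } catch {
      // malformed JSON-LD blocks are simply skipped
    }
  }
  return types;
}
```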
What we lost
| Item | Result | Plan |
|---|---|---|
| PageSpeed | 60 / 100 | LCP at 7.1s. Audit fonts, JS, and the initial render path |
| Core Web Vitals | LCP 7.1s / CLS 0.001 | LCP first; preload the hero element |
| Question-format headings | 1 / 22 | Only one h2/h3 is phrased as a question |
| Citation quality | 2 authoritative refs / 3 outbound | Add primary-source links (web.dev, Search Central, Schema.org) |
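To make the question-format heading item concrete, here is one plausible heuristic: a trailing question mark, a leading English interrogative, or a common Japanese question ending (the か / ですか / とは patterns reflect the Japanese test pages). This is an assumption about how such a check can work, not IndexReady's exact rule:

```typescript
// Classify an h2/h3 text as question-format. Covers ASCII and full-width
// question marks, English interrogatives, and Japanese question endings.
function isQuestionHeading(heading: string): boolean {
  const t = heading.trim();
  if (/[??]$/.test(t)) return true;
  if (/(ですか|とは|のか)$/.test(t)) return true; // Japanese question endings
  return /^(what|why|how|when|where|who|which|can|should|does|do|is|are)\b/i.test(t);
}

// Counting question headings is then: headings.filter(isQuestionHeading).length
```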
What this case shows
The structural foundation (title, meta, canonical, OGP, sitemap, llms.txt, JSON-LD, FAQ) is in place. As a product homepage, that's the floor it should clear. The clear weakness is performance: 7.1s LCP is slow enough to register in real-user measurements. From an AdSense review or Search Console page-experience perspective, this is the item that warrants attention first.
Case 2: MDN — HTML `<title>` element (137 / 200)
We picked the MDN reference page for the `<title>` element to represent technical documentation. SEO-wise, MDN sets a high bar; the GEO numbers were a different story.
SEO 88 / 100
| Item | Result |
|---|---|
| title | 45 chars |
| meta description | 116 chars |
| headings | one h1, hierarchical |
| canonical | set |
| robots.txt | detected |
| sitemap.xml | detected |
The fundamentals are exactly where you would expect them for a site of MDN's size.
Where MDN dropped points on SEO
| Item | Score | Evidence |
|---|---|---|
| OGP | 0 / 6 | OGP tags not found |
| PageSpeed | 65 / 100 | PSI score below 90 |
| Core Web Vitals | LCP 6.7s / CLS 0.000 | LCP runs long |
The missing OGP was unexpected. When the page is shared on social platforms, there's no og:title or og:image to control the preview, so consumers see whatever the platform reconstructs. Reference docs may simply not prioritize social sharing, but it is a clean miss in the scorer.
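The OGP item itself is a presence check on three properties. A hedged sketch, regex-based and assuming the usual `property="og:..."` attribute form:

```typescript
// Return the og: properties that the page head is missing.
const REQUIRED_OG = ["og:title", "og:description", "og:image"];

function missingOgTags(html: string): string[] {
  return REQUIRED_OG.filter(
    (prop) => !new RegExp(`<meta[^>]+property=["']${prop}["']`, "i").test(html)
  );
}

// A complete head returns []; a page with no OGP tags returns all three.
```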
GEO 49 / 100
| Item | Result |
|---|---|
| llms.txt | 0 / 12 (not present) |
| Structured data JSON-LD | 0 / 10 (none detected) |
| Google-recommended schemas | 0 / 8 (none detected) |
| Question-format headings | 0 / 8 |
| Freshness signals | 4 / 8 (only `<time>` element) |
| AI crawler permissions | 12 / 12 (all allowed) |
| Clear answer paragraphs | 8 / 8 (14 detected) |
| FAQ / list patterns | 8 / 8 (34 structured lists) |
MDN allows AI crawlers and the body has plenty of structured prose, but the machine-readable layers AI clients lean on (JSON-LD, llms.txt, question headings) are essentially absent.
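The AI-crawler item boils down to robots.txt group parsing: a bot counts as blocked when its user-agent group (or the `*` fallback) carries `Disallow: /`. A simplified sketch that covers the common cases only — real robots.txt parsing has more rules:

```typescript
// Find which AI bots a robots.txt blocks with a full-site "Disallow: /".
// Consecutive User-agent lines share one group; unlisted bots fall back to "*".
type Group = { agents: string[]; disallowAll: boolean };

function blockedBots(robotsTxt: string, bots: string[]): string[] {
  const groups: Group[] = [];
  let current: Group | null = null;
  let collectingAgents = false;
  for (const raw of robotsTxt.split(/\r?\n/)) {
    const line = raw.trim();
    const ua = line.match(/^user-agent:\s*(\S+)/i);
    if (ua) {
      if (!collectingAgents) {
        current = { agents: [], disallowAll: false };
        groups.push(current);
      }
      current!.agents.push(ua[1].toLowerCase());
      collectingAgents = true;
    } else {
      collectingAgents = false;
      if (current && /^disallow:\s*\/\s*$/i.test(line)) current.disallowAll = true;
    }
  }
  const groupFor = (bot: string) =>
    groups.find((g) => g.agents.includes(bot.toLowerCase())) ??
    groups.find((g) => g.agents.includes("*"));
  return bots.filter((b) => groupFor(b)?.disallowAll === true);
}
```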
What this case shows
"Strong SEO does not imply strong GEO." A long-running documentation site can have textbook SEO and still score modestly on GEO if it has not invested in JSON-LD, llms.txt, or question-format headings. This is the gap that GEO checks expose. For a new docs site, simply adding JSON-LD and an llms.txt up front would shift this score significantly.
Case 3: Wikipedia — SEO article (107 / 200)
Wikipedia represents an encyclopedia template at scale. Authority and citation density are top-tier; the technical scoring picture is more mixed.
Where Wikipedia is strong
| Item | Result |
|---|---|
| PageSpeed | 97 / 100 |
| Core Web Vitals | LCP 1.7s / CLS 0.000 |
| Citation quality | 10 / 10 (23 authoritative refs, 33 outbound links) |
| Statistics | 8 / 8 (5 numeric mentions detected) |
| HTTPS / canonical / lang / no rogue noindex | all OK |
| AI crawler permissions | 12 / 12 |
PSI 97 with a 1.7s LCP is the fastest of the three, and citation quality maxes out — Wikipedia is, after all, a network of references.
Where Wikipedia drops points
| Item | Score | Evidence |
|---|---|---|
| title | 5 / 10 | "SEO - Wikipedia" (15 chars), recommended 30–60 |
| meta description | 0 / 10 | Not present |
| headings | 4 / 6 | h1 exists, but the scorer found no h2 |
| OGP | 2 / 6 | og:description and og:image missing |
| Image alt | 0 / 6 | 2 of 6 images had no alt |
| sitemap.xml | 0 / 6 | The article URL is not directly tied to a sitemap entry the scanner can resolve |
| llms.txt | 0 / 12 | Not present |
| JSON-LD | 6 / 10 | Article only |
| Clear answer paragraphs | 0 / 8 | Definition-style paragraphs not detected by the heuristic |
| Question-format headings | 0 / 8 | No h2/h3 captured |
| Freshness metadata | 0 / 8 | datePublished / dateModified not detected |
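The freshness item, as scored here, looks for datePublished / dateModified metadata, with `<time datetime>` as a weaker fallback. One way such a detector can be sketched — a heuristic only, not IndexReady's implementation:

```typescript
// Detect freshness signals: schema/meta date fields, or <time datetime> only.
function freshnessSignals(html: string): { metadata: boolean; timeTag: boolean } {
  const metadata =
    /"date(Published|Modified)"/.test(html) ||
    /<meta[^>]+property=["']article:(published|modified)_time["']/i.test(html);
  const timeTag = /<time[^>]+datetime=/i.test(html);
  return { metadata, timeTag };
}

// timeTag alone maps to partial credit; neither signal maps to 0 / 8 as above.
```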
What this case shows
Wikipedia carries enormous authority and citation weight, but the scorer flags a lot of items because the encyclopedia template prioritizes a different kind of readability. The takeaway is not "Wikipedia has bad SEO." The takeaway is that machine-readable signals and human authority are different axes, and IndexReady measures only the former. To score well in this tool you need the implementation work — meta description, structured data, llms.txt — even if your content is already top quality.
Patterns across the three sites
| Dimension | index-ready.jp | MDN | Wikipedia |
|---|---|---|---|
| PageSpeed | 60 (LCP 7.1s) | 65 (LCP 6.7s) | 97 (LCP 1.7s) |
| Structured data | Organization / WebSite / WebApplication / FAQPage | none | Article |
| llms.txt | present | none | none |
| OGP | complete | none | partial |
| Citation quality | 2 / 10 | 5 / 10 | 10 / 10 |
What we observed
- Speed and structure are independent investments. The fastest site has the weakest structured-data story; the most structured site has the slowest LCP. Doing both takes deliberate engineering.
- OGP gets dropped at the article level. MDN and Wikipedia both fall short here. OGP is something to design once at the template level, otherwise individual pages bleed coverage.
- llms.txt is still rare. Only one of the three has it. As the spec stabilizes, the absence will become more visible — today it's an edge.
- GEO has more upside than SEO for established sites. MDN and Wikipedia already have rich content; adding JSON-LD, llms.txt, and question headings would lift their GEO numbers without touching the body copy.
- PSI numbers depend on hosting and frontend strategy. Our own LCP suffered because of font loading and SSR cold paths on the VPS. Moving fonts to a CDN or restructuring the hero image would likely close the gap.
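For readers who have not seen one: llms.txt is a plain markdown file served at the site root. A minimal sketch following the proposed spec — H1 site name, blockquote summary, H2 sections with annotated links; the section and URL below are hypothetical placeholders:

```text
# Example Site
> One-line description of what the site offers.

## Docs
- [Getting started](https://example.com/docs/start): setup and first scan
```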
What the score does not see
IndexReady reads what it can fetch. The following are out of scope and never enter the score:
- Real E-E-A-T (the author's actual expertise, the factual accuracy of the article)
- Real placement in Google search results or AI Overview
- External signals like backlinks and brand authority
- Hard-to-quantify UX dimensions (readability, voice, design polish)
- Content that only appears after client-side JavaScript runs — IndexReady evaluates the server-rendered HTML
A high score does not guarantee traffic, and a lower score does not mean the site is bad. The score is a checklist for machine-readable implementation; outcomes depend on competition, content depth, and audience trust.
Summary
Three sites, three different shapes of result.
- The product homepage (index-ready.jp) clears the structural fundamentals on both sides; the weak spot is page performance.
- The technical documentation page (MDN) has solid SEO basics but has not invested in the GEO layer.
- The encyclopedia page (Wikipedia) is the fastest and the most cited, yet a template that emphasizes encyclopedic readability gives up a lot of machine-readable points.
Each site optimizes for what its audience actually needs. IndexReady is not a ranking oracle — it is a way to make the machine-readable part of your implementation visible. We will keep adding similar reports so the tool's evaluations stay tied to real, reproducible data instead of generic checklists.