March 20, 2026 · 12 min read

From 24 to 90: How We Fixed Our Own AI Visibility Score

We build AI visibility tools. So when we ran our own product site through our audit, we expected something decent. We scored 24 out of 100. This is that story.


Riley

Content Lead, LLM Ready

We build AI visibility tools.

So when we ran our own product site (llmready.work) through our audit, we expected something decent.

We scored 24 out of 100.

ChatGPT had no idea we existed. Claude couldn't describe what we do. Perplexity never mentioned us.

We were invisible.

This is that story. What we found, what we fixed, and the exact playbook we used to go from 24 to 90 in 28 days.

Starting Point: The Brutal Truth

Initial AI Visibility Audit (January 15, 2026):

  • Overall Score: 24/100
  • Discoverability: 15/100 (AI couldn't find us)
  • Recommendability: 22/100 (Weak trust signals)
  • Contextual Relevance: 35/100 (Unclear what we do)

Test Queries That Failed:

  • "Best AI visibility tools" → Not mentioned
  • "How to optimize for ChatGPT" → Not mentioned
  • "AI SEO audit tools" → Not mentioned
  • "LLM Ready" → Generic response, wrong context

We weren't just ranking low. We didn't exist in AI's knowledge graph.

The Diagnosis: What Was Broken

We dug into what the AI models were seeing (or not seeing). Here's what we found:

1. No llms.txt File

We had robots.txt for Google. But nothing telling AI models what we do, who we serve, or why we exist.

Impact: LLMs had to guess based on incomplete context.

2. Weak Schema Markup

We had basic Organization schema. But nothing about:
  • What problems we solve
  • Who we help
  • Our product features
  • Customer outcomes

Impact: AI couldn't categorize us correctly.

3. Generic Content

Our landing page was optimized for humans browsing. But AI couldn't extract clear answers to questions like:
  • "What does LLM Ready do?"
  • "Who is it for?"
  • "How is it different?"

Impact: We were unfindable for intent-based queries.

4. Zero Citation Signals

No case studies. No testimonials. No external validation AI could reference.

Impact: Even if AI found us, why would it recommend us?

The Fix: 4-Week Implementation Plan

We documented everything we did. Here's the week-by-week breakdown.

Week 1: Foundation Layer (llms.txt + Schema)

Day 1-2: Created llms.txt

Added /llms.txt to our root directory with a clear description of what we do, who we serve, our key features, and why we're different.

Impact: +12 points in Discoverability
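For illustration, a minimal llms.txt in the style we used, following the llmstxt.org convention of an H1 title, a blockquote summary, and linked sections. The page paths and descriptions below are placeholders, not our actual file:

```text
# LLM Ready

> LLM Ready audits how visible a business is to AI assistants (ChatGPT,
> Claude, Perplexity) and shows how to improve its discoverability,
> recommendability, and contextual relevance.

LLM Ready serves marketing agencies, SaaS companies, local businesses,
and e-commerce brands.

## Key pages

- [How it works](https://llmready.work/how-it-works): The audit process, step by step
- [Pricing](https://llmready.work/pricing): Plans and the free 60-second audit
- [Case studies](https://llmready.work/case-studies): Before/after visibility scores
```

The file lives at the site root (/llms.txt), next to robots.txt.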

Day 3-4: Upgraded Schema Markup

Added SoftwareApplication schema with:

  • Clear applicationCategory ("BusinessApplication")
  • Offers/pricing info
  • AggregateRating (once we had reviews)
  • Detailed description matching our llms.txt content

Impact: +8 points in Discoverability
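A sketch of what that markup can look like as JSON-LD (the description, price, and rating values here are illustrative placeholders, not our production markup; as noted above, only add aggregateRating once you actually have reviews):

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "LLM Ready",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "description": "AI visibility audit tool that shows what ChatGPT, Claude, and Perplexity know about your business and how to improve it.",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD",
    "description": "Free 60-second audit"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.9",
    "ratingCount": "38"
  }
}
```

Embed it in a `<script type="application/ld+json">` tag in the page head, and keep the description in sync with llms.txt so AI models see one consistent story.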

Day 5-7: Content Audit

Rewrote our homepage and key pages to include natural language answers to common questions:

  • "What is LLM Ready?" → Clear one-sentence answer in the first paragraph
  • "How does it work?" → Simple 3-step process
  • "Who should use this?" → Specific use cases with outcomes
  • "Why does AI visibility matter?" → Data-backed explanation
Impact: +15 points in Contextual Relevance

Week 1 Score: 24 → 49


Week 2: Trust Signals + Citation Layer

Day 8-10: Case Studies

Created three detailed case studies:

  • SaaS company: +34% demo requests
  • Local HVAC: +28% service calls
  • E-commerce brand: +41% conversion rate

Each included:

  • Specific before/after scores
  • Timeline (30-90 days)
  • Exact tactics used
  • Measurable outcomes

Impact: +10 points in Recommendability

Day 11-12: Customer Testimonials

Collected testimonials with structured data:

  • Customer name + company
  • Specific outcome achieved
  • Rating (5/5)
  • Date

Added Review schema markup to make these machine-readable.

Impact: +7 points in Recommendability
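As a sketch, a single testimonial marked up with Review schema might look like this in JSON-LD (the name, company, date, and review text are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "SoftwareApplication",
    "name": "LLM Ready"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "worksFor": {
      "@type": "Organization",
      "name": "Example Agency"
    }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "5",
    "bestRating": "5"
  },
  "datePublished": "2026-01-24",
  "reviewBody": "Our agency's AI visibility score went from 31 to 78 in six weeks."
}
```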

Day 13-14: External Validation

Got mentioned in:

  • 2 industry newsletters
  • 1 podcast interview
  • 3 LinkedIn posts from users

Added "As Featured In" section with proper citation links.

Impact: +6 points in Recommendability

Week 2 Score: 49 → 72


Week 3: Content Depth + FAQ Structure

Day 15-17: FAQ Content

Added comprehensive FAQ section answering:

  • "How is this different from SEO?"
  • "Which AI models does this work for?"
  • "How long does optimization take?"
  • "What's a good AI visibility score?"
  • "Can I do this myself or do I need an agency?"

Used FAQPage schema markup to make it AI-parseable.

Impact: +8 points in Contextual Relevance
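A trimmed FAQPage example in JSON-LD, using two of the questions above (the answer text is paraphrased for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How is this different from SEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "SEO optimizes for search engine rankings. AI visibility optimizes for being found, understood, and recommended by AI assistants like ChatGPT, Claude, and Perplexity."
      }
    },
    {
      "@type": "Question",
      "name": "Which AI models does this work for?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "ChatGPT, Claude, Perplexity, and Google Gemini."
      }
    }
  ]
}
```

The answer text in the markup should match the visible FAQ copy on the page.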

Day 18-21: Blog Content

Published 3 blog posts:

  • "Why Your SEO Clients Are Invisible to ChatGPT"
  • "AI Search Is Eating Google: What Agencies Need to Know"
  • "The Ultimate Guide to llms.txt Files"

Each optimized for natural language queries, not keywords.

Impact: +5 points in Contextual Relevance

Week 3 Score: 72 → 85


Week 4: Refinement + Testing

Day 22-25: Competitive Differentiation

Added clear positioning content:

  • Comparison table (us vs generic SEO tools)
  • "Why choose LLM Ready" section
  • Specific use case pages (agencies, SaaS, local, e-commerce)

Impact: +3 points across categories

Day 26-28: AI Testing + Iteration

Tested recommendations across:

  • ChatGPT (GPT-4)
  • Claude (3.5 Sonnet)
  • Perplexity
  • Google Gemini

Queries tested:

  • "Best AI visibility audit tool"
  • "How to optimize for ChatGPT recommendations"
  • "AI SEO tools for agencies"
  • "llms.txt generator"

Result: Now mentioned in 68% of relevant queries

Impact: +2 points (consistency across platforms)
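The 68% figure is a mention rate: the share of (platform, query) responses that name the brand. A minimal Python sketch of that scoring step, assuming you've already collected the responses by hand or via each platform's API (the sample responses below are made up for illustration):

```python
import re


def mentions_brand(response: str, brand: str) -> bool:
    """True if the brand name appears in the response, case-insensitively."""
    return re.search(re.escape(brand), response, re.IGNORECASE) is not None


def mention_rate(responses: list[str], brand: str) -> float:
    """Fraction of responses that mention the brand (0.0 to 1.0)."""
    if not responses:
        return 0.0
    hits = sum(mentions_brand(r, brand) for r in responses)
    return hits / len(responses)


if __name__ == "__main__":
    # One response per (platform, query) pair; these are fabricated examples.
    responses = [
        "Popular options include Tool A, Tool B, and LLM Ready.",
        "For AI visibility audits, consider LLM Ready or similar tools.",
        "Some well-known SEO tools are Tool C and Tool D.",  # no mention
    ]
    print(f"Mention rate: {mention_rate(responses, 'LLM Ready'):.0%}")
```

Re-running the same query set each month gives a comparable number to track over time.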

Final Score: 85 → 90


The Results: What Actually Changed

Before (Score: 24)

  • AI Mentions: 0% of test queries
  • ChatGPT Knowledge: "I don't have specific information about LLM Ready"
  • Recommendation Rate: Never recommended
  • Organic AI-referred traffic: ~0 visits/week

After (Score: 90)

  • AI Mentions: 68% of relevant test queries
  • ChatGPT Knowledge: Accurate description of product, use cases, and value prop
  • Recommendation Rate: Mentioned alongside established SEO tools
  • Organic AI-referred traffic: ~140 visits/week (and growing)

Business Impact (First 30 Days Post-Optimization)

  • Free audit signups: +156%
  • Demo requests: +89%
  • Inbound agency inquiries: +340% (from 5 to 22)
  • Organic mentions in communities: 3x increase

The Exact Playbook (You Can Copy This)

Phase 1: Audit (Day 1)

  1. Run your site through an AI visibility audit
  2. Test 5-10 relevant queries in ChatGPT
  3. Check competitor mentions vs yours
  4. Get baseline score

Phase 2: Quick Wins (Week 1)

  1. Create llms.txt file with clear business description
  2. Update schema markup (Organization, Product/Service, FAQ)
  3. Rewrite homepage to answer "what/who/why/how" clearly
  4. Add FAQ section with natural language Q&A

Phase 3: Trust Layer (Week 2)

  1. Document case studies with specific outcomes
  2. Collect testimonials with measurable results
  3. Add review schema to make testimonials parseable
  4. Get external mentions (newsletters, posts, citations)

Phase 4: Content Depth (Week 3)

  1. Write blog posts answering buyer questions
  2. Create use case pages for different customer types
  3. Build resource content (guides, tools, templates)
  4. Structure everything with proper schema

Phase 5: Test & Iterate (Week 4)

  1. Test across platforms (ChatGPT, Claude, Perplexity)
  2. Measure recommendation rate for key queries
  3. Compare to competitors in AI results
  4. Refine based on gaps

Common Mistakes We Made (So You Don't Have To)

Mistake #1: Optimizing for Keywords

We initially wrote content targeting "AI SEO" and "LLM optimization." But AI doesn't rank keywords—it recommends solutions to problems.

Fix: Rewrote content to answer specific questions buyers ask.

Mistake #2: Generic Schema Markup

Basic Organization schema isn't enough. AI needs context: what you do, who it's for, and why you're credible.

Fix: Added detailed SoftwareApplication schema with use cases and outcomes.

Mistake #3: No llms.txt File

We assumed AI would just "figure it out" from our website content. It didn't.

Fix: Created structured llms.txt file with explicit business description.

Mistake #4: Ignoring Citation Signals

We had great content but zero external validation. AI had no reason to trust us.

Fix: Got mentioned in newsletters, podcasts, and social posts. Added proper citation schema.

What's Next: Staying at 90+

AI visibility isn't "set it and forget it." Here's our maintenance plan:

Monthly:

  • Test key queries across all major AI platforms
  • Monitor competitor mentions vs ours
  • Update case studies with fresh outcomes
  • Add new testimonials with schema markup

Quarterly:

  • Refresh llms.txt with any positioning changes
  • Audit schema markup for accuracy
  • Create new content addressing emerging buyer questions
  • Check for broken citation links

When Launching New Features:

  • Update llms.txt immediately
  • Add feature-specific schema
  • Create dedicated use case content
  • Test AI understanding of new features

Your Turn: Start With the Audit

We went from 24 to 90 in four weeks.

You can replicate this exact playbook.

But first: know where you stand.

  • See your current score (Discoverability, Recommendability, Relevance)
  • Test how AI describes your business
  • Compare your visibility to competitors
  • Get specific recommendations

Takes 60 seconds. No signup required.

Because you can't fix what you haven't measured.


This case study documents our real implementation from January 15 - February 12, 2026. All scores, tactics, and results are from our actual audit data. We're sharing this publicly because we believe AI visibility should be a known playbook, not a secret sauce.

Ready to Fix Your AI Visibility?

Run a free 60-second audit and see exactly what ChatGPT, Claude, and Perplexity know about your business.

Get Your Free Audit