How to Get Cited by ChatGPT, Claude, and Perplexity

April 21, 2026 · OnePoint Solutions · AEO, answer engine optimization, ChatGPT, Claude, Perplexity, AI search, small business, citations

There’s a reason a small subset of businesses keep showing up by name when people ask AI assistants for recommendations, and most don’t. The AIs aren’t picking randomly. Each one has a specific way of sourcing answers, a specific set of preferences, and a specific bar a website has to clear to get cited.

Understanding those differences is the difference between being invisible and being a default answer. Here’s the practical breakdown for the three platforms that matter most right now: ChatGPT, Claude, and Perplexity. Plus how to find out if you’re already being cited.

How Each AI Sources Its Answers

The three platforms look similar from the outside, but their citation behaviour is meaningfully different.

ChatGPT (OpenAI)

ChatGPT has two distinct modes: a base model that answers from training data, and a search mode (sometimes called “browse” or “search the web”) that actively retrieves and cites real web pages.

When a user asks a current or local question — “best plumber in Calgary,” “how much does a wedding cost in 2026” — ChatGPT increasingly defaults to search mode. It hands the query to Bing, retrieves the top results, and writes an answer that cites a handful of sources.

The implication: if you want to be cited by ChatGPT, you need to rank well in Bing. Not Google. Bing. Most small businesses don’t think about Bing rankings at all, which means most aren’t optimized for the search index ChatGPT actually uses.

OpenAI actually runs more than one crawler: GPTBot gathers content for model training, while OAI-SearchBot indexes pages for search answers (and ChatGPT-User fetches pages on a user’s behalf). If your robots.txt blocks any of them, you’re depending entirely on Bing’s index to surface you.

Claude (Anthropic)

Claude also has a search mode, and like ChatGPT it relies on a search backend rather than its own index. Claude’s web search is architected to favour authoritative sources, schema-rich pages, and content that directly answers the asked question.

Claude is unusually good at extracting facts from FAQ schema and from llms.txt files. In our testing, sites with well-written llms.txt files get cited substantially more often by Claude than by ChatGPT — Claude actively reads and references them.
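What an llms.txt file looks like in practice depends on your business, but the proposed format is simple markdown: an H1 with the site name, a blockquote summary, and H2 sections of annotated links. A minimal sketch for a hypothetical plumbing company (name and URLs are placeholders):

```markdown
# Acme Plumbing
> Residential and commercial plumbing in Calgary, AB. Emergency call-outs,
> repiping, and drain cleaning since 2009.

## Services
- [Emergency repairs](https://example.com/emergency): 24/7 service across Calgary
- [Drain cleaning](https://example.com/drains): hydro-jetting and camera inspection

## About
- [Service area and pricing](https://example.com/pricing): flat-rate quotes, no travel fees
```

The file lives at your site root (`/llms.txt`) so crawlers can find it without any links pointing to it.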

Claude’s crawler is ClaudeBot. The bot respects robots.txt strictly, so make sure no blanket disallow rule is blocking it — if one is, you’ll need an explicit allow for ClaudeBot.

Perplexity

Perplexity is the most search-native of the three. Every answer is built around citations by design — there’s no mode where it just talks from training data. Every claim has a source link.

Perplexity’s index is its own, built from a custom crawler (PerplexityBot). It pulls heavily from sites with strong technical SEO, clean schema markup, and well-structured content. Perplexity also weights freshness more than the other two — recent content often outranks older content on the same topic.

Because Perplexity citations are visible to users (not summarized away), it’s also the easiest platform to get measurable, demonstrable citation traffic from. People click the cited sources.

What Each AI Prefers in Content

The three platforms agree on the basics — fast, mobile-friendly, well-structured pages with clean HTML — but each has its own quirks.

ChatGPT strongly prefers content that ranks on Bing. That means the standard SEO playbook still applies: title tags that match query intent, headings that signal topic, internal linking, backlinks from authoritative domains. ChatGPT’s citations skew toward the same kinds of sites that win at general search.

Claude rewards explicit structure. Schema markup (especially FAQPage, Service, Offer), clear H1/H2 hierarchy, and well-written llms.txt files punch above their weight. Claude is also more willing than ChatGPT to cite smaller, niche sites if their content is well-structured and on-topic.

Perplexity is the most directly responsive to recency and to specific, factual content. Lists, tables, comparison content, and dated articles get cited more than evergreen prose. Perplexity also favours sites with clear authorship attribution — a named author on a post helps.

Across all three: long, vague, marketing-heavy pages don’t get cited. Short, specific, fact-rich pages do.

Technical Requirements Per Tool

If you want to maximize citation likelihood, here are the platform-specific must-haves:

For ChatGPT

  1. Allow GPTBot (and OpenAI’s other crawlers) in robots.txt.
  2. Get indexed and ranking in Bing — register with Bing Webmaster Tools and submit your sitemap.
  3. Cover the standard SEO basics: intent-matching title tags, clear headings, internal links, authoritative backlinks.

For Claude

  1. Allow ClaudeBot in robots.txt.
  2. Publish a well-written llms.txt at your site root.
  3. Add FAQPage and related schema markup with a clean H1/H2 hierarchy.

For Perplexity

  1. Allow PerplexityBot in robots.txt.
  2. Keep content fresh and dated — recency is weighted heavily.
  3. Use clean schema markup and named author attribution on posts.

For all three

  1. Fast, mobile-friendly pages with clean, well-structured HTML.
  2. Short, specific, fact-rich content that directly answers questions.
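The robots.txt side of this is quick. A sketch of the rules to append, assuming you want all the crawlers named above allowed (an empty `Disallow:` under a user-agent means that agent may crawl everything; OAI-SearchBot is OpenAI’s search-indexing crawler):

```text
# Allow AI crawlers -- append to robots.txt at your site root
User-agent: GPTBot
Disallow:

User-agent: OAI-SearchBot
Disallow:

User-agent: ClaudeBot
Disallow:

User-agent: PerplexityBot
Disallow:
```

If your robots.txt already has per-agent rules, check that none of these bots falls under an existing `Disallow: /`.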

Content Patterns That Get Cited

Across the three platforms, certain content patterns dominate citations.

Direct-answer pages. Pages titled “How much does X cost?” or “What is Y?” that answer the question in the first paragraph. Don’t bury the lede.

Comparison content. “X vs Y” articles, “Best X for Y use case” guides. These match common query patterns directly.

FAQ-format pages. Q&A content with proper FAQPage schema. The cleanest, most-citable format.

Listicles with explanation. “Five things to look for in a [your service] provider” — useful, scannable, and cite-friendly.

Local guides. “How to choose a [service] in [your city]” — directly matches local intent queries.

Dated, current-information pages. “[Industry] trends for 2026,” “What changed in [topic] this year.” Perplexity especially loves these.

What gets ignored: marketing-heavy “about us” pages, generic services pages with no specifics, blog posts that don’t answer a question.

How to Test If You’re Already Being Cited

This is the test most small businesses skip, and it’s the most diagnostic check you can run.

Step 1: Direct query. Open ChatGPT (with web search enabled), Claude, and Perplexity. Ask each one: “What does [your business name] do?”

If the answer is accurate, specific, and includes facts about your services or location, you’re being cited. If the answer is generic, missing, or wrong, you’re not.

Step 2: Category query. Ask: “Best [your service] in [your city].” Does your business name appear in the answer?

Step 3: Long-tail query. Ask: “Where can I get [specific service] in [your city]?” or “[your service] near [neighbourhood].” These specific queries are the most diagnostic — they’re what real customers ask.

Step 4: Competitor check. Ask: “Who are the top [your category] in [your city]?” If your competitors come up and you don’t, you have a clear gap.
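If you run this audit regularly, it helps to generate the four test prompts consistently. A minimal sketch — the business details are placeholders to substitute with your own:

```python
# Build the four citation-test prompts (direct, category, long-tail, competitor)
# for a repeatable AI-citation audit. All inputs are placeholders.

def citation_test_prompts(name: str, service: str, city: str, neighbourhood: str) -> list[str]:
    """Return the four test queries in audit order."""
    return [
        f"What does {name} do?",                             # Step 1: direct query
        f"Best {service} in {city}",                         # Step 2: category query
        f"Where can I get {service} near {neighbourhood}?",  # Step 3: long-tail query
        f"Who are the top {service} providers in {city}?",   # Step 4: competitor check
    ]

prompts = citation_test_prompts("Acme Plumbing", "plumber", "Calgary", "Kensington")
for p in prompts:
    print(p)
```

Paste each prompt into ChatGPT (with web search on), Claude, and Perplexity, and log which platforms name you.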

Most small businesses we audit fail every test. About a quarter pass the first test (direct name query) but fail category queries. Maybe 5% are actively cited on category and long-tail queries. The 5% are the ones that have actively done Answer Engine Optimization work.

What to Do If You’re Not Being Cited

The fix is rarely “more content.” The fix is usually:

  1. Add explicit robots.txt rules allowing the AI crawlers (15 minutes).
  2. Publish an llms.txt file at your site root (30 minutes).
  3. Add LocalBusiness, Service, Offer, and FAQPage schema (1-2 hours).
  4. Rewrite your homepage to lead with a clear identity sentence.
  5. Add comparison or buyer-guide content to your blog.

That stack alone moves most sites from invisible to citable within 2-6 weeks — the time it takes for crawlers to re-index and for AI training data to refresh.
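The schema step is usually the least familiar one. A sketch of combined LocalBusiness and FAQPage markup in a single JSON-LD script tag — every value here is a placeholder for a hypothetical business:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "LocalBusiness",
      "name": "Acme Plumbing",
      "url": "https://example.com",
      "telephone": "+1-403-555-0100",
      "address": {
        "@type": "PostalAddress",
        "addressLocality": "Calgary",
        "addressRegion": "AB",
        "addressCountry": "CA"
      }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Do you offer emergency plumbing service in Calgary?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. We offer 24/7 emergency call-outs across Calgary with flat-rate quotes."
          }
        }
      ]
    }
  ]
}
</script>
```

Drop it in the page `<head>`, then validate it with Google’s Rich Results Test or the Schema.org validator before you ship it.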

Why This Will Matter More, Not Less

A reasonable share of search behaviour has already shifted from Google’s blue links to AI answers. The shift will continue. Voice assistants, integrated AI in OS-level search, and AI-first browsers all amplify the trend.

The businesses that build their AEO infrastructure now compound advantage every month. The ones that wait pay more later, in catch-up work and in lost leads to competitors who started earlier.

If you want help making sure your business is being cited — and not your competitor — that’s exactly what we do at OnePoint Solutions. Start with our AEO Audit, or reach out and we’ll take a look at your site.