Chris Olah is one of the most unusual figures in AI research: a Thiel Fellow who never completed a university degree, yet became one of the field’s most respected researchers. He pioneered AI interpretability research — the science of understanding what’s actually happening inside neural networks — and now continues that work at Anthropic, the company he co-founded. Forbes estimates his net worth at approximately $1.2 billion.
Background: Thiel Fellowship and Unconventional Path
Olah received a Thiel Fellowship — the $100,000 grant from Peter Thiel’s foundation that pays promising young people to skip or leave college and pursue their projects. The fellowship is notoriously selective and has been awarded to several founders and researchers who went on to have outsized impact. In Olah’s case, it enabled him to pursue AI research full-time before the field had matured into its current form.
He has no university degree of any kind — a remarkable fact in a field where PhDs are nearly universal among top researchers. His credentials come entirely from his published work, which speaks for itself.
Founding Distill: A New Kind of AI Publication
Olah co-founded Distill, an online journal dedicated to clear, visual, interactive explanations of machine learning research. Distill pioneered the idea that AI research could be communicated through interactive visualizations and careful writing — not just equations in PDFs. The journal won a Science Communication Award and influenced how a generation of researchers think about explaining their work.
Pioneering Interpretability Research
Olah’s most important scientific contribution is the development of neural network interpretability as a rigorous research area. Before his work, AI models were widely treated as inscrutable black boxes: you could measure their outputs, but understanding why they produced those outputs was thought to be essentially impossible.
Working across Google Brain, OpenAI, and now Anthropic, Olah developed techniques for understanding what individual neurons and circuits inside neural networks are doing — what features they detect, how they interact, and how they contribute to model behavior. This work has direct implications for AI safety: if you can understand what’s happening inside a model, you have a better chance of identifying and fixing problematic behaviors.
His research on “circuits” — the functional modules within neural networks — and on “superposition” — how models pack multiple concepts into single neurons — has opened entirely new lines of inquiry in the field.
Career Path: Google Brain → OpenAI → Anthropic
Olah’s research career moved through the major AI labs of the past decade: Google Brain, then OpenAI, then to Anthropic as a co-founder. At each stop, he continued his interpretability work, building on previous findings and training a generation of collaborators in the techniques he developed.
At Anthropic: Leading Interpretability Research
At Anthropic, Olah leads the interpretability research team — one of the company’s highest-priority research areas and a direct expression of Anthropic’s safety mission. The goal is to build the scientific foundation for understanding frontier AI models well enough to verify their alignment with human values, not just measure their outputs.
Net Worth
Forbes estimated Olah’s net worth at approximately $1.2 billion as of 2026. The figure reflects his co-founder equity stake in Anthropic and the enormous growth in the company’s valuation since 2021.
Frequently Asked Questions
Does Chris Olah have a university degree?
No. Chris Olah is a Thiel Fellow who did not complete a university degree. He is one of the rare examples of a top AI researcher whose standing rests entirely on published research rather than academic credentials.
What is Chris Olah known for?
Olah is known for pioneering AI interpretability research — the scientific study of what’s happening inside neural networks. He co-founded the Distill journal and developed foundational techniques for understanding neural network circuits and features.
What is Chris Olah’s net worth?
Forbes estimated approximately $1.2 billion as of 2026, based on his co-founder equity stake in Anthropic.
Jared Kaplan is the Chief Science Officer of Anthropic and one of the most consequential AI researchers alive. His 2020 paper on neural scaling laws — co-authored with Sam McCandlish and others — changed how every major AI lab thinks about model development. He is a TIME100 AI honoree, has testified before the U.S. Senate, and Forbes estimates his net worth at $3.7 billion. Yet outside of AI research circles, his name remains largely unknown to the general public.
Academic Background
Kaplan holds a PhD in physics, having trained as a theoretical physicist before pivoting to AI. Like several Anthropic co-founders, his physics background proved directly applicable to machine learning — particularly in developing the mathematical frameworks for understanding how AI systems scale. Physics training emphasizes finding simple underlying laws that explain complex phenomena, which is exactly what scaling law research does.
The Discovery That Changed AI: Scaling Laws
In January 2020, Kaplan and colleagues at OpenAI published “Scaling Laws for Neural Language Models” — a paper that demonstrated something remarkable: AI model performance improves in a smooth, predictable way as you increase model size, training data, and compute budget. The relationship follows a power law, meaning you can forecast how capable a model will be before training it, from the model size, dataset size, and compute budget alone.
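Stated compactly: with N the parameter count, D the dataset size, and C_min the minimum compute needed to reach a given loss, the paper reports that test loss L falls as a power law in each resource. The exponents below are the approximate fitted values from the paper, and N_c, D_c, C_c are fitted constants:

```latex
% Approximate power-law scaling of test loss, Kaplan et al. (2020).
% Each law holds when the other two resources are not the bottleneck.
\begin{align}
  L(N) &= (N_c / N)^{\alpha_N},               & \alpha_N &\approx 0.076 \\
  L(D) &= (D_c / D)^{\alpha_D},               & \alpha_D &\approx 0.095 \\
  L(C_{\min}) &= (C_c / C_{\min})^{\alpha_C}, & \alpha_C &\approx 0.050
\end{align}
```

Small exponents mean returns diminish slowly and predictably: each additional order of magnitude of compute buys a roughly constant fractional reduction in loss.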
This was not merely an academic finding. It gave AI labs a roadmap: if you want a more capable model, you know roughly how much more investment is required. It directly enabled the aggressive scaling strategies that produced GPT-4, Claude 3, and every frontier model since. The paper has been cited tens of thousands of times and is considered foundational to the modern AI race.
Co-Founding Anthropic
Kaplan was among the seven OpenAI researchers who left in 2021 to found Anthropic. His technical authority — particularly in understanding what training configurations produce which capabilities — made him a natural fit as Chief Science Officer, the role he holds today.
Recognition and Public Profile
Kaplan was named to TIME’s 100 Most Influential People in AI, one of a handful of researchers recognized for foundational contributions rather than executive roles. He has testified before the U.S. Senate on AI safety and capabilities — bringing the technical perspective of a researcher who understands, at a mathematical level, how AI systems grow in power.
Net Worth
Forbes estimated Kaplan’s net worth at approximately $3.7 billion as of early 2026, reflecting his co-founder equity in Anthropic at the company’s current valuation. If Anthropic proceeds with its targeted IPO in late 2026, this figure could change substantially.
Frequently Asked Questions
What is Jared Kaplan known for?
Jared Kaplan is best known for co-discovering AI scaling laws — the mathematical relationships that predict how AI model performance improves with more compute, data, and parameters. His 2020 paper “Scaling Laws for Neural Language Models” is foundational to modern AI development.
What is Jared Kaplan’s role at Anthropic?
Kaplan is the Chief Science Officer of Anthropic, responsible for the company’s scientific research direction and the technical foundations of Claude’s development.
What is Jared Kaplan’s net worth?
Forbes estimated Jared Kaplan’s net worth at approximately $3.7 billion as of early 2026, based on his co-founder equity stake in Anthropic.
These terms get used interchangeably. They’re not the same thing. Here’s the actual distinction between each one, where the lines get genuinely blurry, and which category fits what you’re actually trying to build.
Chatbots
A chatbot is a software interface designed to simulate conversation. The defining characteristic: it’s stateless and reactive. You send a message; it responds; the exchange is complete. Each interaction is largely independent.
Traditional chatbots (pre-LLM) operated on decision trees — “if the user says X, respond with Y.” Modern LLM-powered chatbots use language models to generate responses, which makes them dramatically more capable and flexible — but the fundamental architecture is the same: you ask, it answers, you ask again.
What chatbots are good at: answering questions, providing information, routing conversations, handling defined service scenarios with natural language flexibility. What they’re not: action-takers. A chatbot can tell you how to cancel your subscription. An agent can cancel it.
Automations
Automations are rule-based workflows that execute when triggered. Zapier, Make, and similar tools are the canonical examples. When event A happens, do B, then C, then D.
The key characteristic: the path is predefined. Every step is specified by the person who built the automation. If an unexpected situation arises that the automation wasn’t built for, it either fails or skips the step. There’s no reasoning about what to do — there’s only executing the specified path or not.
Automations are highly reliable for well-defined, stable processes. They break when edge cases arise that weren’t anticipated. They scale perfectly for the exact task they were built for; they don’t generalize.
APIs
An API (Application Programming Interface) is a communication contract — a defined way for software systems to talk to each other. APIs are infrastructure, not agents or automations. They’re the mechanism through which agents and automations take action in external systems.
When an AI agent “uses Slack,” it’s calling Slack’s API. When an automation “posts to Twitter,” it’s calling Twitter’s API. The API is the door; agents and automations are the things that open it.
Conflating APIs with agents is a category error. An API is a tool, not a behavior pattern.
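To make that concrete, here is the entire “door” in isolation: a single HTTP call that posts one Slack message via chat.postMessage. The token and channel values are placeholders (check Slack’s current docs for specifics). Notice there is no reasoning anywhere in it; whether and why to post is decided by whatever calls this code, be it an automation (fixed rule) or an agent (reasoning layer).

```python
import os
import requests

# One raw API call: the "door" itself. Nothing here decides *whether*
# or *why* to post -- that judgment lives in the caller.
resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
    json={"channel": "#alerts", "text": "Deploy finished."},  # placeholders
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("ok"))  # Slack returns {"ok": true, ...} on success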
AI Agents
An AI agent takes a goal and figures out how to accomplish it, using tools available to it, handling unexpected situations along the way, without a human specifying each step.
The distinguishing characteristics versus the above:
vs. Chatbots: Agents take action in the world; chatbots respond to messages. An agent can book the flight, not just tell you how to book it.
vs. Automations: Agents reason about what to do next; automations execute predefined paths. When an unexpected situation arises, an agent adapts; an automation fails or skips.
vs. APIs: APIs are tools an agent uses; they’re not the agent itself. The agent is the reasoning layer that decides which API to call and what to do with the result.
Where the Lines Actually Blur
In practice, real systems often combine these categories:
LLM-powered chatbots with tool access: A customer service chatbot that can look up your order status, initiate a return, and send a confirmation email is starting to look like an agent — it’s taking actions, not just responding. The boundary between “advanced chatbot” and “limited agent” is genuinely fuzzy.
Automations with AI decision steps: A Zapier workflow with an OpenAI or Claude step in the middle isn’t purely rule-based anymore — the AI step can produce variable outputs that affect what the automation does next. This is a hybrid: mostly automation, partly agentic.
Agents with constrained scopes: An agent restricted to a single tool and a narrow task class starts to look like a sophisticated automation. The more constrained the scope, the more the distinction collapses in practice.
The useful question isn’t “what category is this?” but “is this system reasoning about what to do, or executing a predefined path?” That’s the actual distinction that matters for how you build, monitor, and trust it.
Why the Distinction Matters Operationally
Reliability profile: Automations fail predictably — when an edge case hits a path that wasn’t built. Agents fail unpredictably — when their reasoning goes wrong in a way you didn’t anticipate. Different failure modes require different monitoring approaches.
Maintenance overhead: Automations require explicit updates when processes change. Agents adapt to process changes automatically — but may adapt in unexpected ways that need to be caught and corrected.
Auditability: Automations are fully auditable — you can read the workflow and know exactly what it does. Agents are less auditable — you can inspect their actions, but not fully predict them in advance. For compliance-sensitive contexts, this matters significantly.
Build cost: Automations are faster to build for well-defined, stable processes. Agents are faster to deploy when the process is complex, variable, or not fully specified — because you’re specifying a goal rather than a procedure.
Not the version where AI agents are going to replace all human jobs by 2030. The actual version, right now, based on what’s deployed in production.
The Actual Definition
What an AI agent is
Software that takes a goal, breaks it into steps, uses tools to execute those steps, handles errors along the way, and keeps working without you directing every action. The distinguishing characteristic is autonomous multi-step execution — not just answering a question, but completing a task.
The Key Distinction: One-Shot vs. Agentic
Most people’s experience with AI is one-shot: you type something, the AI responds, the exchange is complete. That’s a language model doing inference. An AI agent is different in one specific way: it takes actions, checks results, and takes more actions based on what it found — often dozens of steps — without you approving each one.
Example of one-shot AI: “Summarize this document.” You paste the document, the AI returns a summary. Done.
Example of an AI agent doing the same task: “Research this topic and produce a summary with verified sources.” The agent searches the web, reads multiple pages, identifies conflicts between sources, runs additional searches to resolve them, synthesizes findings, and returns a summary with citations — without you specifying each search query or each page to read. You gave it a goal; it handled the steps.
What Agents Can Actually Do
The tools an agent can use define its capability surface. Common tool categories in production agents:
Web search: Query search engines and retrieve current information
Code execution: Write and run code in a sandboxed environment, use results to inform next steps
File operations: Read, write, and modify files — documents, spreadsheets, data files
API calls: Interact with external services — CRMs, databases, project management tools, communication platforms
Browser control: Navigate web pages, fill forms, extract information
Memory: Store and retrieve information across steps within a session, sometimes across sessions
The combination of these tools is what makes agents capable of genuinely autonomous work. An agent that can search, write code, execute it, check the results, and write findings to a document can complete a research and analysis task that would otherwise require hours of human work — without you steering each step.
What “Autonomous” Actually Means in Practice
Autonomous doesn’t mean unsupervised indefinitely. Production agents are typically configured with:
Defined scope: The tools the agent can use, the systems it can access, the actions it’s allowed to take
Guardrails: Actions that require human confirmation before proceeding — making a payment, sending an email externally, modifying a production database
Reporting: Checkpoints where the agent surfaces what it’s done and asks whether to continue
Autonomy is a dial, not a switch. You set how much the agent handles independently versus checks in. Most production deployments start more supervised and reduce oversight as trust in the agent’s behavior is established.
Real Production Examples (Not Hypotheticals)
Concrete examples from confirmed public deployments as of April 2026:
Rakuten: Deployed five enterprise Claude agents in one week on Anthropic’s Managed Agents platform — handling tasks across their e-commerce operations including data processing, content tasks, and operational workflows
Notion: Background agents that autonomously update workspace pages, synthesize database content, and process meeting notes into structured summaries without manual triggers
Sentry: Agents integrated into developer workflows — monitoring error streams, triaging issues, and surfacing relevant context to engineers
Asana: Project management agents that update task statuses, synthesize project health, and move work items based on defined triggers
These are not pilots. These are production systems handling real operational load.
How They’re Built
An agent is built from three components:
A language model: The reasoning layer — the part that decides what to do next, interprets tool results, and determines when the task is complete
Tools: The action layer — APIs, code execution environments, file systems, or anything else the model can call to take action in the world
Orchestration: The loop that connects them — manages the sequence of model calls and tool executions, maintains state between steps, handles errors
Historically, builders had to construct the orchestration layer themselves — a significant engineering investment. Hosted platforms like Claude Managed Agents handle the orchestration layer, letting builders focus on defining the agent’s goals, tools, and guardrails rather than the mechanics of running the loop.
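For illustration, here is a minimal sketch of those three components wired together, using Anthropic’s Messages API as the reasoning layer. The tool, the task, and the model name are illustrative, and a production agent would add the guardrails described earlier; the point is the loop structure: model call, tool execution, result fed back, repeat until done.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The action layer: one illustrative tool the model may call.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Hypothetical tool body -- in a real agent this calls an external API.
    if name == "get_weather":
        return f"Overcast, 54F in {args['city']}"
    raise ValueError(f"unknown tool: {name}")

messages = [{"role": "user", "content": "Should I bike to work in Tacoma today?"}]

# The orchestration layer: loop until the model stops requesting tools.
while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # no more tool calls: the task is done
    # Echo the assistant turn back, then answer each tool request.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": block.id,
         "content": run_tool(block.name, block.input)}
        for block in response.content if block.type == "tool_use"
    ]})

print("".join(b.text for b in response.content if b.type == "text"))
```

Everything a hosted platform does for you lives in that while loop: sequencing, state between steps, and (in real deployments) retries, error handling, and confirmation gates.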
What Agents Are Not Good At (Yet)
Honest calibration on current limitations:
Long-horizon planning with many unknowns: Agents perform best on tasks with relatively defined scope. Open-ended exploratory work over many days with fundamentally uncertain requirements is still better handled by humans in the loop at each major decision point.
Tasks requiring physical world interaction: No production general-purpose physical agent exists. Software agents operating through APIs and interfaces are the current state.
Tasks where errors are catastrophic: Agents make mistakes. For any irreversible, high-stakes action — financial transactions, production data modifications, external communications to important relationships — human confirmation steps should remain in the loop.
By Will Tygart · Practitioner-grade · From the workbench
Being cited by AI systems is not luck and it’s not purely a domain authority game. There are structural characteristics of content that make AI systems more or less likely to pull from it. Here’s what those characteristics are and how to build them in deliberately.
AI systems — whether Perplexity, ChatGPT with web search, or Google AI Overviews — are trying to answer a question. When they search the web and retrieve candidate content, they’re looking for the passage or page that most directly and reliably answers the query. The content that wins is the content that makes the answer easiest to extract.
This has direct structural implications. A 3,000-word narrative essay that eventually answers a question on page 2 loses to a 600-word page that answers the question in the first paragraph, provides supporting evidence, and includes a definition. Not because shorter is better, but because clarity of answer placement is better.
The Structural Characteristics That Drive Citation
1. Direct Answer in the First 100 Words
Every piece of content you want AI systems to cite should answer the primary question it’s targeting before the first scroll. AI retrieval systems don’t read like humans — they identify the most relevant passage, and that passage needs to contain the answer, not just lead toward it.
Test: take your target query and your first 100 words. Does the answer exist in those 100 words? If not, restructure until it does. The rest of the piece can develop nuance, context, and supporting evidence — but the answer must be front-loaded.
2. Explicit Q&A Formatting
Question-and-answer structure signals to AI systems that the content is explicitly organized around answering queries. H3 headers phrased as questions, followed by direct answers, are one of the most reliable patterns for citation capture.
This is why FAQ sections work — not because of FAQPage schema specifically, but because the underlying structure gives AI systems a clean extraction target. Schema reinforces it; the structure is the foundation.
3. Defined Terms and Named Concepts
Content that defines terms clearly — “X is Y” statements — becomes citable for queries looking for definitions. AI systems frequently answer “what is X” queries by pulling the clearest definition they can find. If your content doesn’t include a crisp definitional sentence, it’s not competing for definition queries even if you’ve written a thorough treatment of the topic.
Add definition boxes. State “AI citation rate is the percentage of sampled AI queries where your domain appears as a cited source.” Don’t bury the definition in the third paragraph of an explanation.
4. Specific, Verifiable Facts
AI systems weight specificity. “$0.08 per session-hour” gets cited. “A relatively modest fee” does not. “60 requests per minute for create endpoints” gets cited. “Limited rate limits apply” does not.
Replace hedged language with concrete numbers and specific claims wherever your content supports it. Don’t fabricate specificity — wrong specific numbers are worse than honest hedging. But wherever you have real, verifiable data, make it explicit and prominent.
5. Entity Clarity
Content that makes clear who is speaking, what organization they represent, and what their basis for authority is gets cited more reliably. This is the E-E-A-T signal applied to AI citation: the system needs to assess whether this source is credible enough to cite.
Name the author. State the organization. Link to primary sources. Include dates on time-sensitive claims (“as of April 2026”). These signals tell the AI system this content has an accountable source, not anonymous text.
6. Freshness on Time-Sensitive Topics
For any topic where recency matters — product pricing, regulatory status, current events — AI systems heavily weight recently indexed, recently updated content. A page published April 2026 beats a page published January 2025 for queries about current status, even if the older page has higher domain authority.
Update time-sensitive content. Add “last updated” dates. Re-publish with fresh timestamps when the underlying facts change. Freshness signals are real citation drivers for volatile topic areas.
7. Speakable and Structured Data Markup
Speakable schema explicitly marks the passages in your content best suited for AI extraction. It’s a direct signal to AI retrieval systems: “this paragraph is the answer.” Combined with FAQPage schema, Article schema, and HowTo schema where relevant, structured markup makes your content more parseable.
Schema doesn’t replace the underlying structure — it reinforces it. A well-structured page with schema beats a poorly structured page with schema. But a well-structured page with schema beats a well-structured page without it.
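As a sketch of what Speakable markup looks like in practice, here it is generated as JSON-LD from Python. The headline and CSS selectors are hypothetical; point the selectors at your actual answer passages.

```python
import json

# Minimal Speakable markup (schema.org SpeakableSpecification).
# The cssSelector values are placeholders for your answer elements.
speakable_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What Is AI Citation Rate?",  # illustrative
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".definition-box", ".faq-answer"],
    },
}

# Embed the output in the page inside <script type="application/ld+json">.
print(json.dumps(speakable_markup, indent=2))
```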
8. Internal Link Architecture
AI systems that crawl the web assess topical depth partly through link structure. A page that sits within a tight cluster of related pages — all cross-linking around a topic — signals topical authority more strongly than an isolated page, even if the isolated page’s content is comparable.
Build the cluster. The hub-and-spoke architecture is as relevant for AI citation as it is for traditional SEO. Every spoke article should link to the hub; the hub should link to every spoke.
What Doesn’t Work
A few patterns that are intuitively appealing but don’t translate to citation lift:
More content for its own sake: 5,000 words of padded content is not more citable than 900 words of dense, accurate content. AI retrieval is looking for passage quality, not page length.
Keyword density: Traditional keyword repetition strategies don’t make content more citable. The query match is handled at retrieval; the citation decision is about answer quality, not keyword frequency.
Generic authority claims: “We’re the leading experts in X” is not citable. A specific data point that demonstrates expertise is.
The Compound Effect
These characteristics compound. A page with a direct front-loaded answer, Q&A structure, defined terms, specific facts, clear entity signals, fresh timestamps, and schema markup sitting within a well-linked cluster is materially more citable than a page with only two or three of these characteristics. The full stack produces disproportionate results.
You want to monitor whether AI systems are citing your content. Here’s what tools actually exist for this, what they do, what they don’t do, and what we built ourselves when nothing on the market fit.
The Market as of April 2026
The AI citation monitoring category is real but nascent. Here’s an honest inventory:
Established SEO Platforms Adding AI Visibility Metrics
Several major SEO platforms have added “AI visibility” or “AI search” modules in the past 6–12 months. These generally track:
Whether your domain appears in AI Overviews for tracked keywords (via SERP scraping)
Brand mentions in AI-generated snippets
Comparative visibility versus competitors in AI search results
Ahrefs, Semrush, and Moz have all moved in this direction to varying degrees. Verify current feature availability — this has been an active development area and capabilities have changed rapidly.
Mention Monitoring Tools Expanding to AI
Brand mention tools like Brand24 and Mention have begun tracking AI-generated content that includes brand references. The challenge: they’re tracking brand name occurrences in crawled content, not necessarily AI citation events. Useful for brand visibility in AI-generated content that gets published, less useful for tracking in-session citations.
Purpose-Built AI Citation Tools (Emerging)
Several purpose-built tools targeting AI citation tracking specifically have launched or raised funding in early 2026. This category is moving fast. As of our last check:
Tools focused on tracking specific brand or entity mentions across AI platforms
API-first tools targeting developers who want to build citation monitoring into their own workflows
Dashboard tools with pre-built query sets for common industry categories
Treat any specific product recommendation here as a starting point for your own research — the category will look different in 6 months.
Google Search Console
The strongest existing tool, and it’s free. AI Overviews that cite your pages register as impressions and clicks in GSC under the relevant queries. This is first-party data from Google itself. Limitation: covers only Google AI Overviews, not Perplexity, ChatGPT, or other platforms.
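If you’d rather pull this data programmatically than click through the GSC interface, the Search Console API exposes the same query-level numbers. A minimal sketch, assuming a service account with read access to the property; the site URL and credentials path are placeholders:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Pull query-level impressions from Search Console. AI Overview citations
# show up inside these impression counts; GSC does not break them out as
# a separate dimension, so watch for new queries and impression jumps.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)
response = service.searchanalytics().query(
    siteUrl="sc-domain:example.com",  # placeholder property
    body={
        "startDate": "2026-03-01",
        "endDate": "2026-04-01",
        "dimensions": ["query"],
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["impressions"], row["clicks"])
```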
What We Built
When no existing tool covered the specific workflows we needed, we built our own. The stack:
Perplexity API Query Runner
A Cloud Run service that runs a predefined query set against Perplexity’s API on a weekly schedule. It parses the citations field from each response, checks for domain appearances, and writes results to a BigQuery table. Total engineering time: roughly one day. Ongoing cost: minimal (Cloud Run idle cost + Perplexity API usage).
The output: a weekly BigQuery record per query showing which domains Perplexity cited, with timestamps. Trend queries show citation rate over time by query cluster.
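For reference, a minimal sketch of that runner trimmed to its core. The endpoint and model name follow Perplexity’s OpenAI-compatible chat API, and the citations field name matches recent API versions; verify both against current docs. The query list and target domain are placeholders.

```python
import os
import requests
from urllib.parse import urlparse

API_KEY = os.environ["PERPLEXITY_API_KEY"]
QUERIES = ["what is ai citation rate", "how to track ai citations"]  # placeholders
TARGET_DOMAIN = "example.com"  # placeholder

def run_query(query: str) -> dict:
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "sonar", "messages": [{"role": "user", "content": query}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

for query in QUERIES:
    data = run_query(query)
    # Perplexity returns cited source URLs alongside the completion;
    # the field has been `citations` in recent API versions.
    cited = [urlparse(url).netloc for url in data.get("citations", [])]
    hit = any(TARGET_DOMAIN in domain for domain in cited)
    print(f"{query!r}: cited={hit} sources={cited}")
```

The production service writes each result as a BigQuery row instead of printing; the sketch keeps it self-contained.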
GSC AI Overview Monitor
Not a custom build — just systematic review of GSC data. We check weekly which queries are generating AI Overview impressions for our tracked sites. The signal: if a page is generating AI Overview impressions on new queries, that’s a citation event.
Manual ChatGPT Sampling
For highest-priority queries, manual weekly sampling of ChatGPT with web search enabled. We log results to a shared spreadsheet. Less scalable than the API approach, but ChatGPT’s web search activation is inconsistent enough that API automation adds complexity without proportional reliability gain.
What Doesn’t Exist (That Would Be Useful)
The tool gaps that we still feel:
Cross-platform citation dashboard: A single view showing citation rate across Perplexity, ChatGPT, Gemini, and AI Overviews for the same query set. Nobody has built this cleanly yet.
Historical citation rate database: Knowing your citation rate is useful. Knowing whether it improved after you published a new piece of content is more useful. The temporal correlation is hard to establish with spot-check sampling.
Competitor citation tracking at scale: Easy to check manually for specific queries; hard to monitor systematically across a large competitor set and query space.
These gaps exist because the category is new, not because the problems are technically hard. Expect the tool landscape to fill in significantly over the next 12 months.
AI citation rate is a metric that doesn’t have a standard definition yet, which means everyone using the term might mean something slightly different. Here’s what it is, how to calculate it, and what it actually measures — and doesn’t.
Definition
AI Citation Rate
The percentage of sampled AI queries where a specific domain or URL appears as a cited source in the AI system’s response.
Formula: (Queries where your domain appeared as a source) ÷ (Total queries sampled) × 100
A Concrete Example
You run 50 queries in Perplexity across your core topic cluster. Your domain appears as a cited source in 12 of those responses. Your AI citation rate for that query set on that platform: 12/50 = 24%.
That’s the basic calculation. The complexity is in what you define as your query set, which platforms you sample, and what counts as a “citation.”
What Counts as a Citation
Not all AI source mentions are equal. Some distinctions worth tracking separately:
Direct URL citation: The AI explicitly lists your URL as a source. Highest confidence — trackable programmatically via API.
Domain mention: Your domain name appears in the response text but not necessarily as a formal source citation.
Brand mention: Your brand name appears in the response. May or may not correlate with your web content being the source.
Implied citation: Content clearly derived from your page but no explicit attribution. Only detectable through content fingerprinting — difficult at scale.
For tracking purposes, direct URL citation is the most reliable signal. Brand mentions are noisier but still worth tracking for brand visibility purposes.
How to Calculate It
Step 1: Define Your Query Set
Select 20–100 queries where you want to appear. Good sources for your query set:
Your highest-impression GSC queries (you rank for these — do AI systems cite you?)
Queries where you’ve published dedicated content
Queries from your keyword research that match your expertise
Questions your clients or prospects actually ask
Step 2: Sample Across Platforms
Run each query in Perplexity (most trackable — consistent citation format), ChatGPT with web search enabled, and Google AI Overviews (via organic search). Track results separately by platform — citation rates vary significantly between platforms for the same query set.
Step 3: Log Results
For each query on each platform, record:
Whether your domain appeared as a citation (binary: yes/no)
Position if ranked (first citation, third citation, etc.)
Date of query
Step 4: Calculate Rate
Aggregate by time period (weekly or monthly). Calculate separately by platform and by topic cluster — aggregate rate across all platforms and queries hides the variation that’s actually useful.
Step 5: Establish Baseline, Then Track Change
Your first 4–6 weeks of data sets your baseline. After that, track directional change — is the rate improving, declining, or stable? Correlate changes with content updates, new publications, and competitor activity.
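As a toy illustration of Steps 3 and 4, here is the aggregation over a handful of logged samples. The field names mirror the log fields above and are otherwise arbitrary:

```python
from collections import defaultdict

# Each record mirrors Step 3's log: query, platform, and a binary citation flag.
samples = [
    {"query": "what is aeo", "platform": "perplexity", "cited": True},
    {"query": "what is aeo", "platform": "chatgpt", "cited": False},
    {"query": "ai citation rate", "platform": "perplexity", "cited": True},
    {"query": "ai citation rate", "platform": "chatgpt", "cited": True},
]

totals, hits = defaultdict(int), defaultdict(int)
for s in samples:
    totals[s["platform"]] += 1
    hits[s["platform"]] += int(s["cited"])

# Citation rate per platform: (queries cited) / (queries sampled) * 100
for platform in totals:
    rate = 100 * hits[platform] / totals[platform]
    print(f"{platform}: {rate:.0f}% ({hits[platform]}/{totals[platform]})")
```

The same grouping applied by topic cluster instead of platform gives the per-cluster rates that Step 4 recommends tracking.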
What Citation Rate Actually Measures (And Doesn’t)
AI citation rate is a proxy for content authority signal in AI systems — not a direct ranking factor you can optimize mechanically. It reflects:
Whether your content is being indexed and surfaced by AI systems for your target queries
Whether your content structure and freshness match what AI systems prefer to cite
Relative authority versus competitors for the same query space
It doesn’t measure:
Whether AI systems are using your content without citation (training data influence)
User behavior after AI responses (do they click through to your site?)
Revenue impact of being cited (cited ≠ converting)
Benchmarks and Context
Because this metric is new, industry benchmarks don’t exist yet. What matters is your own trend line, not comparison to a published standard. A 20% citation rate in a highly competitive topic cluster might represent strong performance; 20% in a niche you should dominate might indicate underperformance. Context is everything.
Filed by Will Tygart • Tacoma, WA • Industry Bulletin
With 93% of AI Mode searches ending in zero clicks, the question isn’t whether you rank on Google — it’s whether AI systems consider your content authoritative enough to cite. This interactive tool scores your content across 8 dimensions that LLMs evaluate when deciding what to reference.
We built this based on our research into what makes content citable by Claude, ChatGPT, Gemini, and Perplexity. The factors aren’t what most people expect — it’s not just about keywords or length. It’s about information density, entity clarity, factual specificity, and structural machine-readability.
Take the assessment below to find out if your content is visible to the machines that are increasingly replacing traditional search.
Is AI Citing Your Content? AEO Citation Likelihood Analyzer
Filed by Will Tygart • Tacoma, WA • Industry Bulletin
Most local businesses compete on “best plumber in Austin” or “water damage restoration near me.” But answer engines reward a different kind of content. They want specific, quotable answers to questions that people actually ask. That’s where local AEO wins.
The Local AEO Opportunity
Perplexity and Claude don’t just rank businesses by distance and reviews. They decide which sources to cite in their answers. If you’re the source Perplexity quotes when answering “how much does water damage restoration cost?”, you get visibility that paid search can’t buy.
And local AEO is less competitive than national. Everyone’s chasing national top 10 rankings. Almost nobody is optimizing for Perplexity citations in local verticals.
The Quotable Answer Strategy
AEO content needs to be quotable. That means:
Specific answers (not vague generalities)
Numbers and timeframes (“typically 3-7 days”)
Price ranges (“$2,000-$5,000 for standard water damage”)
Process steps (“Step 1: assessment, Step 2: mitigation…”)
Local context (“in North Texas, humidity speeds drying”)
Generic content doesn’t get quoted. Specific, local, answerable content does.
Content Types That Win in Local AEO
Service Cost Guide: “Water Damage Restoration Cost in Austin: What to Expect in 2026”
Actual price ranges in Austin (vs. national average)
Breakdown of what factors affect cost
Comparison of premium vs. budget options
Timeline impact on pricing
Result: Ranks in Perplexity for “water damage restoration cost Austin” queries
Process Timeline: “Water Damage Restoration Timeline: Days 1-7, Weeks 2-3, Month 1”
Specific steps at specific timeframes
Local humidity/climate impact
What happens at each stage
When to expect mold concerns
Result: Quoted when people ask “how long does water restoration take”
Problem-Specific Guides: “Hardwood Floor Water Damage: Restoration vs. Replacement Decision”
When to restore vs. replace
Cost comparison
Timeline for each option
Success rates
Result: Quoted when people research hardwood floor damage specifically
Local Comparison Content: “Water Damage Restoration in Austin vs. Dallas: Regional Differences”
Climate differences (humidity, soil)
Cost differences
Timeline differences
Regional techniques
Result: Ranks for “restoration Austin vs. Dallas” type queries (people considering both areas)
The Internal Linking Strategy
Each content piece links to service pages and other authority content, creating a web:
Cost guide → Process timeline → Hardwood floor guide → Commercial damage guide → Service page
This signals to Google and Perplexity: “this is an authority cluster on water damage.”
The Review Generation Loop
AEO content also drives reviews. When a prospect reads your detailed cost breakdown or timeline, they’re more informed. Informed customers become satisfied customers who leave better reviews. Those reviews feed back into Perplexity rankings.
The SEO Bonus
Content optimized for AEO also ranks well in Google. In fact, the AEO content pieces often outrank the local Google Business Profile for specific queries. You’re getting:
Google rankings (organic traffic)
Perplexity citations (AI engine traffic)
LinkedIn potential (if you share the content as thought leadership)
Social proof (highly cited content builds reputation)
Real Results
A local restoration client published:
“Water Damage Restoration Timeline” (2,500 words, specific local context)
“Cost Guide for Water Damage in Austin” (detailed breakdown)
“How We Assess Your Home for Water Damage” (process guide)
Results after 3 months:
Perplexity citations: 40+ per month
Google organic traffic: 2,200 monthly visitors
Phone calls from people who found the guides: 15-20/month
Average deal value: $4,500 (informed customers are better-qualified leads)
Why Competitors Aren’t Doing This
It takes 40-60 hours per content piece (slower than quick blog posts)
It requires local expertise (can’t easily be outsourced)
It doesn’t show results in analytics for 2-3 months
It requires understanding AEO principles (most agencies focus on SEO)
Most content agencies haven’t heard of AEO yet
The Competitive Window
We’re in a narrow window right now (2026) where local AEO is underdeveloped. In 12-18 months, everyone will be doing it. If you start now with detailed, quotable, local-specific content, you’ll be entrenched before competition arrives.
How to Start
1. Pick your top 3 search queries (“water damage cost,” “timeline,” “hardwood floors”)
2. Write 2,500+ word guides that are specifically local and quotable
3. Add FAQPage schema markup so Perplexity can pull Q&A pairs (a minimal example follows this list)
4. Internal-link across your pieces
5. Wait 3-4 weeks for Perplexity to crawl and cite
6. Iterate based on which pieces get cited most
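Here is the minimal FAQPage sketch promised in step 3, generated as JSON-LD from Python. The question and answer text are illustrative, echoing the price range used earlier in this piece:

```python
import json

# Minimal FAQPage markup (schema.org). Use the exact Q&A pairs
# from your published guide -- this pair is illustrative.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How much does water damage restoration cost in Austin?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Typically $2,000-$5,000 for standard water damage, "
                    "depending on square footage and water category.",
        },
    }],
}

# Embed the output in the page inside <script type="application/ld+json">.
print(json.dumps(faq_markup, indent=2))
```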
The Takeaway
Local businesses can compete on AEO with a fraction of the budget that national companies spend on paid search. But you need specific, quotable, locally relevant content. Generic blog posts won’t get you there. Deep, detailed, answerable guides will.