Part 2 of 2. In the first post I showed that Claude, ChatGPT, Perplexity, Copilot, Gemini, NotebookLM, and Kagi collectively sent tygartmedia.com at least 94 new readers in 29 days — and that Claude alone is our #4 traffic source. That is the headline. What follows is the interesting part: when you filter the landing-page report one AI model at a time, the three major assistants cite completely different kinds of pages, and the pattern is actionable.
Claude cites a small number of pages, a lot of times
Claude.ai sent 79 sessions across 63 users to 16 distinct pages. Two pages ate more than half of it:
| # | Page | Sessions | % of Claude traffic | Avg Time |
|---|---|---|---|---|
| 1 | /claude-student-discount | 22 | 27.8% | 35s |
| 2 | /anthropic-console | 21 | 26.6% | 11s |
| 3 | (not set) | 13 | 16.5% | 5s |
| 4 | /claude-edu | 4 | 5.1% | 6s |
| 5 | /claude-pro-vs-chatgpt-plus | 4 | 5.1% | 7s |
| 6 | /claude-code-on-vertex-ai-gcp | 3 | 3.8% | 3s |
| 7 | /claude-desktop | 2 | 2.5% | 40s |
| 8 | /how-to-install-claude-code | 2 | 2.5% | 2s |
| 9 | /claude-4-deprecation | 1 | 1.3% | 1m 07s |
| 10 | /claude-managed-agents-pricing-cost-analysis | 1 | 1.3% | 1m 38s |
The two biggest pages, /claude-student-discount and /anthropic-console, account for 54.4% of all Claude-referred traffic to the site (43 of 79 sessions). Those are extremely specific query shapes — “how do students get Claude Pro free” and “how do I access the Anthropic Console” — and Claude has apparently decided our pages are the canonical answer for both.
The engagement twist is worth staring at. The two biggest Claude-referred pages have the worst time-on-page: 35 seconds and 11 seconds. The two pages that got a single Claude visit each — /claude-managed-agents-pricing-cost-analysis and /claude-4-deprecation — got 1 minute 38 seconds and 1 minute 7 seconds of real read time. The pattern is clean. When Claude can extract the answer directly into its chat window, users click through briefly to verify and leave. When the answer is deeper than Claude can summarize, readers stay to actually read. Both behaviors are valuable and both are measurable.
ChatGPT cites broadly, favors “X vs Y” content, and (oddly) sends geographic traffic
ChatGPT’s footprint is shaped differently: 16 sessions across 14 users to 13 distinct pages — almost every page received exactly one visit, which is the signature of a model citing a wide range of sources once each rather than reaching for a favorite.
| Page | Sessions | Avg Time |
|---|---|---|
| /claude-student-discount | 3 | 15s |
| /claude-computer-use-tutorial | 1 | 2m 07s |
| /grok-vs-claude | 1 | 15s |
| /opus-4-7-vs-gpt-5-4-vs-gemini-3-1-pro | 1 | 0s |
| /claude-pro-vs-chatgpt-plus | (cross-model) | — |
| /claude-for-nonprofits | 1 | 30s |
| /everett-waterfront-visitor-guide… | 1 | 0s |
| /hood-canal-shellfish-season-2026… | 1 | 0s |
| /rakuten-claude-managed-agents-enterprise-deployment | 1 | 0s |
Two patterns in that list. First, ChatGPT appears to cite us disproportionately for model comparisons — grok-vs-claude, opus-4-7-vs-gpt-5-4-vs-gemini-3-1-pro, and the cross-model claude-pro-vs-chatgpt-plus page. Second, and stranger, ChatGPT sent visits to two hyperlocal Pacific Northwest pages: an Everett waterfront guide and a Hood Canal shellfish season page. That is ChatGPT using our site as a reference source for geographic queries, which is not a pattern any other model shows.
The hidden gem: /claude-computer-use-tutorial received one ChatGPT referral and that referral stayed for 2 minutes 7 seconds. ChatGPT appears willing to cite long-form technical tutorials in a way Claude does not.
Perplexity treats us like a research database
Perplexity sent 12 sessions across 10 users to 9 pages — the most evenly distributed of the three and the only model that cites people, founders, and company-history content.
| Page | Sessions | Avg Time |
|---|---|---|
| /anthropic-founders-2 | 2 | 17s |
| /claude-code-on-vertex-ai-gcp | 2 | 54s |
| /claude-student-discount | 2 | 0s |
| /claude-desktop | 1 | 4s |
| /claude-team-plan | 1 | 0s |
| /how-to-install-claude-code | 1 | 0s |
| /restoration-team-training-claude-cowork | 1 | 0s |
Perplexity is the only model that sent visits to /anthropic-founders-2, which implies Perplexity is fielding a different query shape — something closer to “who founded Anthropic” than “how do I use Claude.” Perplexity is also the only model that surfaced the very niche B2B page /restoration-team-training-claude-cowork. That is a long-tail, vertical-specific query, and Perplexity cited us as the source. That is exactly the behavior you would hope for from a research-flavored assistant.
The three models have completely different citation personalities
Once you lay the three patterns side by side, the strategy falls out of the page.
- Claude.ai favors short, factual, access-related pages. Product info, pricing, how-to-access. If you want more Claude citations, write more narrow “how do I do this one specific thing” pages.
- ChatGPT favors comparisons and long-tail references. X vs Y, alternatives, and — unexpectedly — some geographic content. If you want more ChatGPT citations, write more “X vs Y” posts with tight comparison tables.
- Perplexity favors people, history, and niche research. Founders, company background, domain-specific tutorials. If you want more Perplexity citations, write more research-flavored background pieces.
This is the single most practical insight in the data set. Most people talk about “AI SEO” as if it is one thing. It is three things, at minimum, and the content shape that wins one model will not automatically win the other two.
The crown jewel: one page, 17% of all AI-referred traffic
The clearest cross-model winner on the site is /claude-student-discount. Claude sent 22 sessions. ChatGPT sent 3. Perplexity sent 2. Combined that is 27 sessions — roughly 17% of all AI-referred traffic we received in 29 days, from a single URL. No other page on the site is cited by all three major LLMs in meaningful volume.
There is a playbook inside that one data point. The page works because the query “how do I get Claude for free as a student” is an extremely high-frequency question across every chat surface, and the page happens to be structured the way LLMs like to cite: a short, direct answer near the top, specific eligibility rules in a scannable block, and no wall of context before the reader gets to the fact. That structural recipe — front-load the answer, make the facts liftable, keep the page narrow — is repeatable.
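That recipe can be sketched as a page skeleton. The headings and placeholder copy below are illustrative, not lifted from the live page:

```markdown
# How do students get Claude Pro for free?

**Quick answer:** Two or three sentences that state the answer directly,
with no preamble. This is the block an assistant can lift verbatim.

## Who is eligible
- One eligibility rule per bullet, stated as a checkable fact

## How to claim it
1. Short imperative step
2. Short imperative step

## FAQ
### Question-shaped subheading
One-paragraph direct answer.
```

One topic, one canonical answer, the facts liftable above the fold.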
The bigger finding: 90% of our Claude content is invisible to AI
tygartmedia.com has more than 250 Claude-related articles. Exactly 25 of them show up in the AI-referral data set at all. The 90% that do not get cited are not low-quality — several of them have strong engagement from regular search traffic:
- /claude-managed-agents-complete-pricing-guide-2026 — 17 sessions at ~1 minute from search, zero AI citations
- /notion-knowledge-base-for-claude — 10 sessions at 1m 23s, uncited
- /claude-rate-limits — classic FAQ shape, 6 sessions, not cited
- /claude-md-playbook — 1 session at 2m 33s, zero AI pickup
- The full /claude-cowork-* family of 12+ pages, almost entirely invisible to every model
The difference between an AI-cited page and an AI-invisible page is rarely the quality of the content. It is the shape. Pages that get cited have an early summary, short headings, bulleted facts, and a quotable direct-answer sentence. Pages that do not get cited tend to open with context, build up to the answer, and bury the quotable line in paragraph 9.
The content-cluster scorecard
| Cluster | Approx. Pages | Approx. Sessions | Engagement | AI Citations |
|---|---|---|---|---|
| Claude pricing & access | ~10 | ~160 | Mixed | High |
| Claude managed agents | ~12 | ~130 | Strong (25s–1m) | Low |
| Claude Code | ~8 | ~60 | High (18s–3m) | Moderate |
| Model comparisons (X vs Y) | ~10 | ~45 | Very high (1–7 min) | Moderate |
| Anthropic people/company | ~8 | ~30 | Medium | Moderate |
| Claude how-to / tutorials | ~20 | ~50 | Medium | Low |
| Claude Cowork family | ~15 | ~40 | Very low (0–10s) | Almost none |
Two clusters deserve action. The Claude Cowork family is a content swamp — 15 pages, low traffic, no AI citations, and 0–10 second engagement on the traffic that does land. That cluster should be consolidated into two or three flagship posts and the rest redirected. The model comparisons cluster is the opposite: low volume but 1–7 minutes of engagement and cross-model citations. One well-researched comparison post outperforms ten mediocre explainers on every metric that matters here.
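The consolidation half of that is mechanical once the flagship posts are chosen. A hedged .htaccess sketch — /claude-cowork-guide is a placeholder flagship slug, not a page that exists on the site:

```apache
# 301 the retired /claude-cowork-* pages to a consolidated flagship post.
# The negative lookahead exempts the flagship itself from the redirect.
RedirectMatch 301 "^/claude-cowork-(?!guide$)" "/claude-cowork-guide"
```

Whatever the redirect mechanism (server config, plugin, CDN rule), the point is the same: one permanent redirect per retired page, pointing at the surviving flagship.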
The playbook, in one list
- Write more narrow single-answer pages. Candidates I would ship next: /claude-web-search, /claude-api-keys, /claude-max-plan-vs-pro, /how-to-cancel-claude, /claude-mobile-app, /claude-desktop-vs-web, /claude-subscription-refund. Each is ~600 words, answer-first, scannable. That is the shape Claude cites.
- Add a Quick Answer block to the top of every long-form piece. Two or three sentences. Quotable. That alone moves a real share of our invisible content into AI-citation range.
- Invest in comparison posts for ChatGPT pickup. We already know ChatGPT cites our existing X-vs-Y content. Ship more of them, with tight tables.
- Write more founder/history/background pieces for Perplexity pickup. Research-flavored. Dates, names, primary sources.
- Consolidate the Cowork cluster. Two or three flagship pages, everything else redirected.
- Ship a permanent AI-Referral dashboard in GA4. Segment on all seven assistant domains. Watch it weekly. This is now a first-class channel.
Frequently asked questions
What kinds of pages does Claude.ai cite most often?
Based on the tygartmedia.com data, Claude.ai disproportionately cites short, factual, access-related pages — product info, pricing, how-to-access, and eligibility details. On our site, two pages (/claude-student-discount and /anthropic-console) accounted for 54.4% of all Claude-referred traffic in a 29-day window.
What kinds of pages does ChatGPT cite most often?
ChatGPT’s citation pattern favors comparison and long-tail reference pages — “X vs Y” posts like Grok vs Claude, model-to-model comparisons, and, surprisingly, some geographic and local content. ChatGPT tends to cite many pages once each rather than concentrating on a small set.
What kinds of pages does Perplexity cite most often?
Perplexity cites research-flavored content — founders and company history, domain-specific tutorials, and niche B2B pages. It is the only major AI assistant that sent traffic to our Anthropic founders page and to a vertical-specific training page in our data set.
Why does the same page get different citation volume from different AI models?
Because each assistant is answering a slightly different distribution of queries. Claude is most often used for “how do I use this product” questions and favors narrow how-to pages. ChatGPT receives more comparison and alternative-seeking queries. Perplexity skews toward research and background questions. A page that is the best answer for one query type will not automatically be the best answer for another.
How do I structure a page to get cited by AI assistants?
Lead with a direct, quotable answer in the first paragraph. Use short scannable headings. Keep facts in bulleted or tabular form. Include an explicit FAQ block with question-shaped subheadings. Keep the page narrow — one topic, one canonical answer — rather than a sprawling multi-topic explainer.
The bigger picture
The meta-insight worth sitting with: we are currently being cited inside Claude’s internal answer graph for “Claude student discount” because a human sat down and wrote a clear, narrow page about it. That is almost the entire game for publishers for the next three years. Most of the web has not noticed yet. We noticed, and now we have a measurement stack to act on what we noticed.
If you are a publisher, the thing to do this week is boring and powerful: segment your GA4 on the seven AI-assistant domains from Part 1, sort your landing pages by AI-referral volume, and look at the pages that are winning. They will have a shape. Copy it.
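If your analytics tool can export the landing-page report as CSV, the segment-and-sort step is a few lines of Python. This is a sketch, not GA4's actual API: the column names (`source`, `landing_page`, `sessions`) are assumptions about the export, and the domain list is my reading of the seven assistants named in Part 1 — ChatGPT's referral domain in particular may differ in your data.

```python
# Sketch: flag AI-assistant referrals in an exported landing-page report,
# then rank landing pages by combined AI-referred sessions.
import csv
from collections import Counter
from io import StringIO

# Assumed referral domains for the seven assistants from Part 1.
AI_DOMAINS = {
    "claude.ai", "chatgpt.com", "perplexity.ai", "copilot.microsoft.com",
    "gemini.google.com", "notebooklm.google.com", "kagi.com",
}

def is_ai_referral(source: str) -> bool:
    """True if the session source is an AI-assistant domain or a subdomain of one."""
    source = source.lower()
    return any(source == d or source.endswith("." + d) for d in AI_DOMAINS)

def rank_ai_landing_pages(rows):
    """rows: dicts with 'source', 'landing_page', 'sessions'. Returns pages ranked desc."""
    totals = Counter()
    for row in rows:
        if is_ai_referral(row["source"]):
            totals[row["landing_page"]] += int(row["sessions"])
    return totals.most_common()

# Toy export standing in for a real landing-page report.
sample = """source,landing_page,sessions
claude.ai,/claude-student-discount,22
claude.ai,/anthropic-console,21
chatgpt.com,/claude-student-discount,3
perplexity.ai,/claude-student-discount,2
google.com,/claude-student-discount,40
"""
ranked = rank_ai_landing_pages(csv.DictReader(StringIO(sample)))
print(ranked[0])  # → ('/claude-student-discount', 27)
```

Note that ordinary search referrals (the `google.com` row) are excluded, which is the whole point: the ranking shows only what the assistants themselves are citing.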
— If you missed it, Part 1 is here.