Chris Olah is one of the most unusual figures in AI research: a Thiel Fellow who never completed a university degree, yet became one of the field’s most respected researchers. He pioneered AI interpretability research — the science of understanding what’s actually happening inside neural networks — and now continues that work at Anthropic, the company he co-founded. Forbes estimates his net worth at approximately $1.2 billion.
Background: Thiel Fellowship and Unconventional Path
Olah received a Thiel Fellowship — the $100,000 grant from Peter Thiel’s foundation that pays promising young people to skip or leave college and pursue their projects. The fellowship is notoriously selective and has been awarded to several founders and researchers who went on to have outsized impact. In Olah’s case, it enabled him to pursue AI research full-time before the field had matured into its current form.
He has no university degree of any kind — a remarkable fact in a field where PhDs are nearly universal among top researchers. His credentials come entirely from his published work, which speaks for itself.
Founding Distill: A New Kind of AI Publication
Olah co-founded Distill, an online journal dedicated to clear, visual, interactive explanations of machine learning research. Distill pioneered the idea that AI research could be communicated through interactive visualizations and careful writing — not just equations in PDFs. The journal won a Science Communication Award and influenced how a generation of researchers think about explaining their work.
Pioneering Interpretability Research
Olah’s most important scientific contribution is the development of neural network interpretability as a rigorous research area. Before his work, AI models were widely treated as inscrutable black boxes: you could measure their outputs, but understanding why they produced those outputs was thought to be essentially impossible.
Working across Google Brain, OpenAI, and now Anthropic, Olah developed techniques for understanding what individual neurons and circuits inside neural networks are doing — what features they detect, how they interact, and how they contribute to model behavior. This work has direct implications for AI safety: if you can understand what’s happening inside a model, you have a better chance of identifying and fixing problematic behaviors.
His research on “circuits” — the functional modules within neural networks — and on “superposition” — how models pack multiple concepts into single neurons — has opened entirely new lines of inquiry in the field.
Career Path: Google Brain → OpenAI → Anthropic
Olah’s research career moved through the major AI labs of the past decade: Google Brain, then OpenAI, and finally Anthropic as a co-founder. At each stop, he continued his interpretability work, building on previous findings and training a generation of collaborators in the techniques he developed.
At Anthropic: Leading Interpretability Research
At Anthropic, Olah leads the interpretability research team — one of the company’s highest-priority research areas and a direct expression of Anthropic’s safety mission. The goal is to build the scientific foundation for understanding frontier AI models well enough to verify their alignment with human values, not just measure their outputs.
Net Worth
Forbes estimated Olah’s net worth at approximately $1.2 billion as of 2026. The figure reflects both his co-founder equity stake in Anthropic and the enormous growth in the company’s valuation since 2021.
Frequently Asked Questions
Does Chris Olah have a university degree?
No. Chris Olah is a Thiel Fellow who did not complete a university degree. He is one of the rare top AI researchers whose standing rests entirely on published research rather than academic credentials.
What is Chris Olah known for?
Olah is known for pioneering AI interpretability research — the scientific study of what’s happening inside neural networks. He co-founded the Distill journal and developed foundational techniques for understanding neural network circuits and features.
What is Chris Olah’s net worth?
Forbes estimated approximately $1.2 billion as of 2026, based on his co-founder equity stake in Anthropic.
Jared Kaplan is the Chief Science Officer of Anthropic and one of the most consequential AI researchers alive. His 2020 paper on neural scaling laws — co-authored with Sam McCandlish and others — changed how every major AI lab thinks about model development. He is a TIME100 AI honoree, has testified before the U.S. Senate, and Forbes estimates his net worth at $3.7 billion. Yet outside of AI research circles, his name remains largely unknown to the general public.
Academic Background
Kaplan holds a PhD in physics, having trained as a theoretical physicist before pivoting to AI. Like several Anthropic co-founders, his physics background proved directly applicable to machine learning — particularly in developing the mathematical frameworks for understanding how AI systems scale. Physics training emphasizes finding simple underlying laws that explain complex phenomena, which is exactly what scaling law research does.
The Discovery That Changed AI: Scaling Laws
In January 2020, Kaplan and colleagues at OpenAI published “Scaling Laws for Neural Language Models” — a paper that demonstrated something remarkable: AI model performance improves in a smooth, predictable way as you increase model size, training data, and compute budget. The relationship follows a power law, meaning you can forecast how capable a model will be before training it, simply by knowing how much compute you’re using.
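The power-law form described above can be sketched in a few lines of Python. The constants used here are the paper’s approximate fitted values for the parameter-count law; treat them as illustrative rather than exact.

```python
# Parameter-count scaling law from "Scaling Laws for Neural Language Models":
# L(N) = (N_c / N) ** alpha_N, with N_c ~ 8.8e13 and alpha_N ~ 0.076
# (approximate fitted values reported in the paper; shown for illustration).

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted cross-entropy loss for a model with n_params non-embedding parameters."""
    return (n_c / n_params) ** alpha

# Because the law is a power law, every doubling of model size shrinks the
# predicted loss by the same constant factor, 2 ** -alpha, regardless of N:
ratio = predicted_loss(2e9) / predicted_loss(1e9)
print(f"loss ratio per doubling: {ratio:.4f}")  # constant across scales
```

That constancy is the practical point: a lab could budget a training run and forecast the resulting loss before spending any compute.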
This was not merely an academic finding. It gave AI labs a roadmap: if you want a more capable model, you know roughly how much more investment is required. It directly enabled the aggressive scaling strategies that produced GPT-4, Claude 3, and every frontier model since. The paper has been cited tens of thousands of times and is considered foundational to the modern AI race.
Co-Founding Anthropic
Kaplan was among the seven OpenAI researchers who left in 2021 to found Anthropic. His technical authority — particularly in understanding what training configurations produce which capabilities — made him a natural fit as Chief Science Officer, the role he holds today.
Recognition and Public Profile
Kaplan was named to TIME’s 100 Most Influential People in AI, one of a handful of researchers recognized for foundational contributions rather than executive roles. He has testified before the U.S. Senate on AI safety and capabilities — bringing the technical perspective of a researcher who understands, at a mathematical level, how AI systems grow in power.
Net Worth
Forbes estimated Kaplan’s net worth at approximately $3.7 billion as of early 2026, reflecting his co-founder equity in Anthropic at the company’s current valuation. If Anthropic proceeds with its targeted IPO in late 2026, this figure could change substantially.
Frequently Asked Questions
What is Jared Kaplan known for?
Jared Kaplan is best known for co-discovering AI scaling laws — the mathematical relationships that predict how AI model performance improves with more compute, data, and parameters. His 2020 paper “Scaling Laws for Neural Language Models” is foundational to modern AI development.
What is Jared Kaplan’s role at Anthropic?
Kaplan is the Chief Science Officer of Anthropic, responsible for the company’s scientific research direction and the technical foundations of Claude’s development.
What is Jared Kaplan’s net worth?
Forbes estimated Jared Kaplan’s net worth at approximately $3.7 billion as of early 2026, based on his co-founder equity stake in Anthropic.
Benjamin Mann is a co-founder of Anthropic and co-head of Anthropic Labs, the research division responsible for Claude’s most advanced capabilities. His path to one of the most consequential AI roles in the world ran through Columbia University, Google, and OpenAI — and yet, as of 2026, virtually no public biography of him exists. This profile fills that gap.
Education: Columbia University
Benjamin Mann studied computer science at Columbia University in New York City, graduating with a strong foundation in systems and algorithms. Columbia’s CS program has produced a notable number of AI researchers and startup founders, and Mann followed that tradition directly into product engineering and research roles.
At Google: Waze Carpool
After Columbia, Mann worked at Google as a senior engineer, where he contributed to Waze Carpool — Google’s carpooling feature built on top of the Waze navigation platform. The work gave him experience operating at massive scale and shipping consumer-facing products with millions of users. It also represented a departure from pure research: Mann has always moved between applied engineering and fundamental AI work.
At OpenAI: Architecting GPT-3
Mann joined OpenAI and became one of the core engineers behind GPT-3 — the 175-billion-parameter language model that launched the modern AI era when it was released in 2020. While Tom Brown served as lead engineer, Mann was a key contributor to the architecture and training infrastructure that made GPT-3 possible. He is listed as a co-author on the landmark paper “Language Models are Few-Shot Learners.”
Co-Founding Anthropic
In 2021, Mann joined Dario Amodei, Daniela Amodei, and several other OpenAI researchers in founding Anthropic. The co-founders shared a commitment to building AI that is safe, interpretable, and beneficial — and a belief that a dedicated safety-focused lab was necessary to pursue that goal seriously.
Role at Anthropic: Co-Leading Anthropic Labs
Mann co-leads Anthropic Labs alongside Mike Krieger, the Instagram co-founder who joined Anthropic as Chief Product Officer in 2024. Anthropic Labs serves as the research and experimentation arm of the company — the team responsible for exploring Claude’s frontier capabilities, running novel experiments, and developing the next generation of features before they ship to users.
The pairing of Mann (deep AI research background) with Krieger (consumer product expertise at scale) reflects Anthropic’s increasing emphasis on making frontier AI research accessible and useful to everyday users, not just researchers and developers.
Public Profile and Media
Mann appeared on Lenny’s Podcast in July 2025, one of the rare public interviews he has given. The episode generated significant interest in the AI research community, touching on Anthropic’s product philosophy, the future of AI assistants, and the practical challenges of building systems that are both powerful and safe. Despite this, he remains one of the least-profiled founders of a major AI company.
Frequently Asked Questions
What is Benjamin Mann’s role at Anthropic?
Benjamin Mann co-leads Anthropic Labs alongside Mike Krieger. Anthropic Labs is the research and experimentation division responsible for Claude’s frontier capabilities.
Where did Benjamin Mann work before Anthropic?
Mann worked at Google (on Waze Carpool) and OpenAI (as a core engineer on GPT-3) before co-founding Anthropic in 2021.
Did Benjamin Mann work on GPT-3?
Yes. Mann was a key architect and contributor to GPT-3 at OpenAI, and is a co-author on the landmark paper “Language Models are Few-Shot Learners.”
Sam McCandlish is the Chief Technology Officer and Chief Architect of Anthropic, the AI safety company behind Claude. Before helping build one of the most important AI companies in the world, he was a theoretical physicist studying complex systems. His journey from physics to AI is one of the more unusual and compelling founding stories in Silicon Valley — and as of 2026, no dedicated biography of him exists anywhere online.
Academic Background: Theoretical Physics
McCandlish earned his PhD in theoretical physics from Stanford University, where he specialized in the mathematics of complex systems — how large numbers of interacting components give rise to emergent behaviors. After Stanford, he completed a postdoctoral fellowship at Boston University, continuing his work in theoretical physics before pivoting to machine learning research.
The leap from physics to AI is less dramatic than it appears. Theoretical physicists are trained in the same mathematical frameworks — statistical mechanics, dynamical systems, information theory — that underlie modern machine learning. Many of the most important AI researchers of the past decade came from physics backgrounds.
At OpenAI: Discovering Scaling Laws
McCandlish joined OpenAI as a researcher and quickly became interested in a fundamental question: how does AI model performance scale with compute, data, and parameters? The answer would have enormous practical implications for how AI companies allocate research budgets and design training runs.
Working alongside Jared Kaplan (now Anthropic’s Chief Science Officer) and others, McCandlish co-authored the 2020 paper “Scaling Laws for Neural Language Models” — arguably the most practically important paper published in AI in the last decade. The paper demonstrated that AI performance improves predictably and smoothly as models get larger, datasets get bigger, and compute budgets increase. This insight transformed how AI labs plan and prioritize research.
Co-Founding Anthropic
In 2021, McCandlish joined his fellow OpenAI researchers — including Dario Amodei, Daniela Amodei, Jared Kaplan, Chris Olah, Tom Brown, and Jack Clark — in founding Anthropic. The group shared concerns about the safety implications of increasingly powerful AI systems and believed that a dedicated safety-focused lab was needed.
Role at Anthropic: CTO and Chief Architect
As CTO and Chief Architect, McCandlish is responsible for Anthropic’s technical direction — the architecture decisions, training methodologies, and infrastructure choices that determine what Claude can do and how efficiently it can be trained. His physics background gives him an unusual ability to reason about scaling and complexity at the systems level.
Net Worth and Equity
Forbes has estimated McCandlish’s net worth at approximately $3.7 billion as of early 2026, reflecting his co-founder equity stake in Anthropic at its current valuation. As Anthropic moves toward a potential IPO (targeting 2026), those figures could shift substantially.
Frequently Asked Questions
What is Sam McCandlish’s background?
Sam McCandlish has a PhD in theoretical physics from Stanford University and completed a postdoctoral fellowship at Boston University before pivoting to AI research.
What is Sam McCandlish’s role at Anthropic?
McCandlish is the Chief Technology Officer (CTO) and Chief Architect of Anthropic, responsible for the company’s technical direction and AI architecture decisions.
What research is Sam McCandlish known for?
McCandlish co-authored the landmark 2020 paper “Scaling Laws for Neural Language Models,” which demonstrated that AI performance improves predictably with scale and transformed how AI labs plan research.
Tom Brown is one of seven co-founders of Anthropic and the engineer most responsible for making GPT-3 a reality. His trajectory — MIT graduate, YC founder, OpenAI research lead, Anthropic co-founder — traces the arc of the modern AI industry itself. Yet as of 2026, no Wikipedia page exists for him, and no dedicated biography has been published anywhere on the internet. This profile aims to change that.
Early Life and Education
Tom Brown earned a Master of Engineering from the Massachusetts Institute of Technology, studying at the intersection of computer science and brain/cognitive sciences. This dual focus — computational systems and human cognition — would later prove formative in his approach to large language model design.
Before OpenAI: Co-Founding Grouper
Before entering the AI research world full-time, Brown co-founded Grouper, a social networking startup that went through Y Combinator (YC). Grouper connected strangers for group social outings — an early experiment in algorithmically mediated human connection. The startup experience gave Brown practical exposure to building products at speed, a skill that would prove valuable in AI research environments.
At OpenAI: Leading GPT-3 Engineering
Brown joined OpenAI as a research scientist and quickly became central to the organization’s most ambitious project: building a language model large enough to display emergent, general-purpose capabilities. He served as the lead engineer on GPT-3, the 175-billion-parameter model that, when released in 2020, fundamentally changed the world’s understanding of what AI could do.
GPT-3 was the first AI model to reliably produce human-quality prose, write working code, translate languages, and answer questions — all from a single model, with no task-specific training. The technical paper describing GPT-3, “Language Models are Few-Shot Learners,” listed Brown as the lead author. It has been cited over 60,000 times and remains one of the most influential papers in the history of machine learning.
Leaving OpenAI: The Anthropic Founding
In 2021, Brown was among seven senior OpenAI researchers who left to co-found Anthropic alongside Dario Amodei (CEO), Daniela Amodei (President), Jared Kaplan, Chris Olah, Sam McCandlish, and Jack Clark. The departure was motivated in part by disagreements about how quickly OpenAI was commercializing its technology relative to its safety research — concerns that have only grown more prominent as the AI industry has accelerated.
Anthropic was incorporated as a public benefit corporation (PBC), a legal structure that formally embeds the mission of responsible AI development into the company’s governing documents.
Role at Anthropic: Head of Core Resources
At Anthropic, Brown leads Core Resources — the team responsible for the fundamental infrastructure, compute, and technical operations that make Claude’s training possible. In an AI company, compute is the most critical resource: access to sufficient GPU clusters determines what models can be trained and how quickly. Brown’s role sits at the intersection of infrastructure engineering and research operations.
Anthropic’s Growth and Valuation
Since its founding, Anthropic has raised billions from investors including Google, Amazon, Spark Capital, and others, reaching a valuation of approximately $61 billion as of early 2026. Claude — Anthropic’s AI assistant — has become one of the most widely used AI tools in the world, particularly among developers and enterprise users. As a co-founder, Brown holds a meaningful equity stake in the company.
Frequently Asked Questions
Where did Tom Brown go to school?
Tom Brown earned an M.Eng from MIT in computer science and brain/cognitive sciences.
What is Tom Brown’s role at Anthropic?
Tom Brown leads Core Resources at Anthropic — the team responsible for compute infrastructure and technical operations supporting Claude’s training.
Did Tom Brown work at OpenAI?
Yes. Brown was a research scientist at OpenAI and served as the lead engineer on GPT-3, the 175B-parameter model released in 2020. He is the lead author on the foundational GPT-3 paper “Language Models are Few-Shot Learners.”
Why did Tom Brown leave OpenAI?
Brown, along with six other OpenAI researchers, co-founded Anthropic in 2021 due to concerns about the pace of AI commercialization relative to safety research.
Claude AI is a family of large language models built by Anthropic, a San Francisco-based AI safety company. In 2026, Claude competes directly with ChatGPT, Gemini, and Grok — and in many professional use cases, it outperforms all of them. This guide covers what Claude is, how it works, what it costs, and how to start using it today.
What Is Claude AI?
Claude is an AI assistant developed by Anthropic, a company founded in 2021 by Dario Amodei, Daniela Amodei, and other former OpenAI researchers. The name “Claude” is widely understood as a nod to Claude Shannon, the father of information theory.
Unlike some AI tools built primarily for speed or image generation, Claude was designed from the ground up with safety and helpfulness as co-equal priorities. Anthropic uses a technique called Constitutional AI — a method of training models to follow a set of principles rather than just optimize for user approval. The result is an assistant that tends to be more careful, more honest, and less likely to hallucinate than its competitors.
As of April 2026, Claude is available through:
Claude.ai — the web and mobile interface (free and paid plans)
Claude desktop app — native Mac and Windows applications
Claude API — for developers building AI-powered applications
Claude Code — a terminal-native AI coding tool
Enterprise deployments — via Anthropic’s enterprise and team offerings
Which Claude Models Exist in 2026?
Anthropic currently offers three tiers of Claude models, each optimized for different use cases:
| Model | Best For | Context Window | Notable Benchmark |
| --- | --- | --- | --- |
| Claude Opus 4.6 | Complex reasoning, research, coding | 200K tokens | 80.8% SWE-bench, 91.3% GPQA Diamond |
| Claude Sonnet 4.6 | Everyday tasks, balanced performance | 200K tokens | Best speed-to-intelligence ratio |
| Claude Haiku 4.5 | Fast, lightweight tasks | 200K tokens | Fastest response time |
All models support a 200,000-token context window by default — roughly 150,000 words, or an entire novel. Enterprise customers can access up to 500,000 tokens, and Claude Code extends to 1 million tokens for large codebase analysis.
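The token-to-word arithmetic behind that claim is simple. The 0.75 words-per-token figure below is a common rule of thumb for English text, not an official Anthropic constant.

```python
# Rough conversion between tokens and English words (~0.75 words per token,
# a rule-of-thumb ratio; actual tokenization varies with the text).
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Rule-of-thumb word count for a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(approx_words(200_000))    # standard window: about 150,000 words
print(approx_words(500_000))    # enterprise window
print(approx_words(1_000_000))  # Claude Code's extended window
```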
How Does Claude AI Work?
Claude is a large language model (LLM) — a type of neural network trained on vast amounts of text data to predict and generate human-like responses. What distinguishes Claude from other LLMs is Anthropic’s emphasis on alignment and safety during training.
Claude uses two key training innovations:
Constitutional AI (CAI): Instead of relying solely on human feedback to shape model behavior, Anthropic trains Claude to evaluate its own outputs against a set of written principles. This makes Claude more consistent in avoiding harmful outputs, even in edge cases human reviewers might not anticipate.
RLHF (Reinforcement Learning from Human Feedback): Human trainers rate Claude’s responses, and those ratings guide the model toward more helpful, accurate, and appropriate answers over time.
The combination produces a model that tends to acknowledge uncertainty, push back on false premises, and decline harmful requests more gracefully than many competitors.
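The critique-and-revise loop at the heart of Constitutional AI can be sketched as a toy Python loop. Every function below is a stand-in for an actual LLM call, and the one-line constitution is invented for illustration; in real CAI the revised outputs then become training data for the next model.

```python
# Toy sketch of the Constitutional AI critique-and-revise loop.
# All "model" calls are stubbed with strings; real CAI uses an LLM at each step.
CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def draft(prompt: str) -> str:
    # Stand-in for sampling an initial model response.
    return f"DRAFT: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in for asking the model whether the response violates the principle.
    return f"Does this follow '{principle}'? -> {response}"

def revise(response: str, critique_text: str) -> str:
    # Stand-in for asking the model to rewrite in light of the critique.
    return response.replace("DRAFT", "REVISED")

def constitutional_pass(prompt: str) -> str:
    response = draft(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    return response  # in training, revised outputs become new training data

print(constitutional_pass("Explain photosynthesis"))
```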
What Can Claude AI Do?
Claude’s capabilities in 2026 span well beyond simple chatting. Here’s what it handles well:
Writing and Editing
Claude excels at long-form content: blog posts, essays, reports, marketing copy, email sequences, legal documents, and fiction. Its writing is notably less robotic than many AI tools, partly because it’s trained to match tone and style from context clues.
Coding and Software Development
Claude Code — Anthropic’s terminal-native coding tool — has become one of the most popular AI coding environments among professional developers. It can write, debug, refactor, and explain code across virtually all major programming languages, and it understands large codebases through its million-token context window.
Research and Analysis
Claude reads and synthesizes PDFs, research papers, financial reports, and legal filings. With 200K tokens of context, it can process an entire book-length document and answer specific questions about it.
Data Analysis
Claude can read CSV files, interpret charts, write Python or SQL to analyze datasets, and explain findings in plain language — making it useful for anyone who works with data but isn’t a dedicated data scientist.
Multimodal Inputs
Claude accepts text, images, PDFs, and documents as inputs. It can describe images, extract text from screenshots, and analyze visual data — though it cannot generate images itself (for image generation, tools like Midjourney or DALL-E are required).
Claude AI Pricing: Free vs. Paid Plans in 2026
Anthropic offers four main tiers for individual users:
| Plan | Price | What You Get | Best For |
| --- | --- | --- | --- |
| Free | $0/month | Limited daily messages, Claude Sonnet access | Casual or occasional use |
| Claude Pro | $20/month | 5x more usage, priority access, Projects | Regular users, professionals |
| Claude Max 5x | $100/month | 5x Pro usage, Claude Code access, extended thinking | Power users, developers |
| Claude Max 20x | $200/month | 20x Pro usage, highest priority | Heavy professional use |
Enterprise plans are available with custom pricing, SSO, admin controls, extended context (up to 500K tokens), and zero-data-retention options for sensitive industries.
Claude vs. ChatGPT: What’s the Difference?
This is the question most people ask when they first hear about Claude. The honest answer: they’re both capable, and the best choice depends on your use case.
| Factor | Claude | ChatGPT |
| --- | --- | --- |
| Best at | Long documents, nuanced writing, coding | General tasks, image generation, plugins |
| Context window | 200K tokens (standard) | 128K tokens (GPT-4o) |
| Image generation | No (analysis only) | Yes (DALL-E integration) |
| Safety emphasis | Very high (Constitutional AI) | High |
| Code quality | Among the best (SWE-bench leader) | Strong |
| Price | $20-$200/month | $20/month (Plus), $200/month (Pro) |
For most professional writing, legal/financial analysis, and software development tasks, Claude holds a measurable edge. For tasks requiring image generation or deep integration with third-party plugins, ChatGPT’s ecosystem is broader.
How to Get Started with Claude AI
Getting started takes about two minutes:
1. Go to claude.ai and create a free account with your email or Google login.
2. Start a new conversation. Type or paste your first prompt.
3. If you need to analyze a document, click the paperclip icon to upload PDFs, images, or files.
4. For power use, upgrade to Claude Pro for Projects — a feature that lets you create persistent knowledge bases that Claude remembers across conversations.
5. If you’re a developer, visit console.anthropic.com to get your API key and explore the Claude API.
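For developers, a request to the Messages API is an HTTPS POST with a small JSON body. The sketch below only builds and prints that body; the model id is illustrative and changes between releases, and actually sending the request requires your API key in the x-api-key header plus an anthropic-version header.

```python
# Shape of a minimal Messages API request body (model id illustrative).
import json

request_body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "In one sentence, what is Claude?"}
    ],
}

# POST this as JSON to https://api.anthropic.com/v1/messages.
print(json.dumps(request_body, indent=2))
```

The official Python and TypeScript SDKs wrap this same request, so the field names here are the ones you will see in their `messages.create` calls.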
Claude AI: Key Limitations to Know
No tool is perfect. Here are Claude’s genuine limitations as of 2026:
No image generation: Claude cannot create images. For that, you need a dedicated tool like Midjourney, DALL-E, or Stable Diffusion.
Rate limits on free and Pro plans: Heavy users — especially on the Pro tier — regularly hit daily message limits. This is the most common complaint among power users. The Max plans ($100/$200/month) solve this for most use cases.
No real-time web access by default: Unless explicitly connected to a web search tool, Claude’s knowledge has a training cutoff. It cannot browse the web in real time by default on the consumer interface.
Occasional refusals: Claude’s safety training sometimes makes it overly cautious on topics that are legitimate but touch sensitive areas. This has improved substantially with each model generation.
Frequently Asked Questions About Claude AI
Is Claude AI free?
Yes — Claude has a free tier that gives you limited daily access to Claude Sonnet. The free tier is useful for casual use, but heavy users will quickly encounter rate limits. Paid plans start at $20/month.
Who made Claude AI?
Claude was created by Anthropic, an AI safety company founded in 2021. Anthropic was started by seven former OpenAI researchers, including CEO Dario Amodei and President Daniela Amodei.
Is Claude AI better than ChatGPT?
It depends on the task. Claude generally outperforms ChatGPT on coding benchmarks, long-document analysis, and nuanced writing. ChatGPT has a broader plugin ecosystem and native image generation. Many professionals use both.
Does Claude store my conversations?
By default, Anthropic may use conversations from consumer accounts to improve its models (you can opt out in settings). Business and API customers can access zero-data-retention options. Conversation data is retained for up to five years unless you delete it manually.
Can Claude generate images?
No. Claude can analyze and describe images, but it cannot generate them. For AI image creation, use Midjourney, DALL-E, or Adobe Firefly.
What is Claude’s context window?
Standard Claude models have a 200,000-token context window — roughly 150,000 words. Enterprise plans extend this to 500,000 tokens. Claude Code supports up to 1 million tokens for large codebase analysis.
How do I access Claude Code?
Claude Code is available as part of the Claude Max subscription ($100+/month) or via the Anthropic API. It runs as a terminal-native tool — install it with npm install -g @anthropic-ai/claude-code and authenticate with your API key.
This guide is updated regularly as Anthropic ships new models and features. Last updated: April 2026.