Category: Industry News & Commentary

Google drops an algorithm update. AI Overviews reshape local search. A new ad format launches on LinkedIn. When something happens that affects how restoration companies market themselves, we break it down — what changed, what it means, and what you should do about it. No recycled press releases, just sharp analysis from someone who actually runs these campaigns.

Industry News and Commentary covers Google algorithm updates, AI search developments, advertising platform changes, marketing technology announcements, regulatory shifts affecting digital marketing, and expert analysis of industry events as they impact restoration contractors, commercial services companies, and the broader property damage restoration ecosystem.

  • Two Fights, One Job: Why RH and GPP Belong in Your Documentation (Just Not Where You Think)

    Two Fights, One Job: Why RH and GPP Belong in Your Documentation (Just Not Where You Think)

    Andy McCabe published something sharp recently, and my first instinct was to push back.

    His post was direct: RH and GPP have nothing to do with your dehumidifier calculation. The ANSI/IICRC S500 doesn’t use them. TPAs are weaponizing them to deny equipment that’s legitimately justified by the actual standard. His argument is airtight, and I told him so in the comments — after I pushed back on one thing.

    Here’s the double take I had to do.

    What McCabe Got Right About Equipment Justification

The S500 Simple Method is not ambiguous. Dehumidifier calculations start with the cubic footage of affected air in each drying chamber, the class of water loss, and the type of equipment on the truck. A Class 2 loss with an LGR uses a factor of 50 to establish a minimum pint-per-day baseline. A Class 1 uses 100. A Class 3 uses 40. Desiccants are calculated differently, in air changes per hour.
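The Simple Method arithmetic above can be sketched in a few lines. This is an illustration only, using the LGR class factors as stated in this article; the chamber size and AHAM rating in the example are hypothetical, and your actual sizing should follow the published S500 tables.

```python
import math

# Class factors for LGR dehumidifiers per the S500 Simple Method,
# as described above (Class 1 = 100, Class 2 = 50, Class 3 = 40).
LGR_FACTORS = {1: 100, 2: 50, 3: 40}

def dehu_baseline(cubic_feet: float, water_class: int) -> float:
    """Minimum pints-per-day baseline: chamber cubic footage / class factor."""
    return cubic_feet / LGR_FACTORS[water_class]

def units_needed(cubic_feet: float, water_class: int, aham_rating: float) -> int:
    """Number of dehumidifiers required, rounded up to whole units."""
    return math.ceil(dehu_baseline(cubic_feet, water_class) / aham_rating)

# Hypothetical example: 12,000 cu ft chamber, Class 2 loss,
# LGR units rated at 130 AHAM pints/day.
baseline = dehu_baseline(12_000, 2)   # 240.0 pints/day
units = units_needed(12_000, 2, 130)  # 2 units
```

Note what the inputs are: cubic footage, class, equipment type. Nothing else.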

    What you will not find anywhere in that calculation: a field for relative humidity. Or grains per pound.

    When a TPA tells you they won’t approve a dehumidifier because RH isn’t at 70%, they’ve invented a threshold that doesn’t exist in any published standard. McCabe’s response to that Liberty Mutual TPA was exactly right: “What standard is that?” They pointed to their own internal guidelines. Not the S500. Not IICRC. Their guidelines.

    That’s the game — and leading your documentation with atmospheric readings as the justification for your equipment is handing them the tool they use to deny you.

    Stop justifying equipment with RH and GPP. The S500 math is your argument. Use it.

    What I Pushed Back On — and Then Reconsidered

    When I responded to McCabe’s post, I drew on years at Polygon/Munters doing large-loss drying — aircraft carrier decks, document archives, new high-rise commercial construction mid-build. In those environments, RH, GPP, and temperature weren’t optional reads. They were the difference between a completed job and a catastrophic materials failure.

    I’ve seen what happens when you dry too aggressively. And I’ve seen the liability that follows.

    The more I sat with it, the more I realized McCabe and I weren’t in conflict. We were talking about two completely different fights happening on the same job.

    The Two-Track Documentation Standard

    Every water loss has two defensible positions that require documentation. Most contractors are only building one of them.

    Track 1: Equipment Justification (McCabe’s Lane)

    Show your dehu calculation per the S500 — cubic footage, drying class, equipment type, the published factor. Show your air mover count based on affected square footage and materials above dry standard. Show moisture readings proving materials haven’t yet reached the established dry standard.

    This documentation defends your equipment billing against TPA denials based on invented atmospheric thresholds. It’s the argument that holds up in a dispute because it’s grounded in a published ANSI standard — not your opinion, not the adjuster’s internal policy.

    Track 2: Materials Science Documentation (The Lane McCabe Didn’t Cover)

    Here’s where atmospheric readings earn their place in your job file — just not as equipment justification.

    Flooring manufacturers explicitly tie warranty coverage to ambient RH maintenance. Hurst Hardwoods voids their warranty if ambient RH drops below 35% during the life of the floor, citing cracking, delamination, and shrinkage as direct consequences of low humidity. Engineered hardwood manufacturers commonly require 30–50% RH maintenance and list surface checking from improper humidity as an explicit warranty exclusion. Even SERVPRO’s own published guidance notes that rapid drying can cause wood to split.

    This isn’t theoretical. When you dry too aggressively — pushing humidity below manufacturer-specified ranges, running heat drying beyond material tolerances, pulling GPP down faster than the materials can handle — you can void the warranty on floors, adhesives, and engineered wood products that weren’t even damaged by the water event itself.

    Now the homeowner has a materials failure claim three months after you packed out. And the carrier has a documented argument that the damage was caused by the restoration, not the loss.

    Your atmospheric logs are your proof that you didn’t do that.

    What This Looks Like in Practice

    The documentation standard that protects you on both tracks looks like this:

    For equipment: S500 dehu calculation showing class, cubic footage, equipment type, and the published factor. Air mover count tied to affected square footage and material readings above dry standard. Nothing about RH or GPP as justification.

    For materials: Continuous atmospheric logs showing that ambient RH stayed within the manufacturer-specified range for every material type on-site throughout the dry. Temperature logs showing you didn’t apply excessive heat. A record that proves you dried professionally, not just fast.

    One set of data protects you from equipment denials. The other protects you from being blamed for the cracked hardwood, delaminated adhesives, and voided warranties that surface after you’re gone.

    The Bottom Line

    Andy McCabe is doing important work calling out the TPA game of inventing atmospheric thresholds to deny legitimately justified equipment. Every restoration contractor should read his post and internalize the S500 math.

    But don’t stop taking atmospheric readings. Stop leading with them as equipment justification — and start filing them as materials science documentation that proves the quality of your work.

    Two fights. Two documentation tracks. Both matter.

    Find more from Andy McCabe at WaterDamageProfit.com.

    Frequently Asked Questions

    Do RH and GPP belong in a dehu calculation?

    No. Per the ANSI/IICRC S500, dehumidifier calculations use cubic footage of affected air, drying class, and equipment type. RH and GPP are not inputs in the S500 Simple Method and should not be used to justify equipment placement.

    Why should restoration contractors still log RH and GPP?

    Atmospheric readings serve as materials science documentation — proof that drying conditions stayed within manufacturer-specified humidity ranges to protect warranty coverage on hardwood floors, adhesives, and engineered wood products. They protect against post-job liability claims, not equipment denials.

    Can aggressive drying void a flooring warranty?

    Yes. Multiple hardwood flooring manufacturers explicitly void warranties when ambient RH drops below 35%, citing cracking, delamination, and shrinkage as direct results. Drying below those thresholds can create a liability exposure on materials that were undamaged by the original water event.

    What is the S500 Simple Method for dehu calculations?

    The ANSI/IICRC S500 Simple Method calculates minimum dehumidifier capacity by dividing the cubic footage of the drying chamber by a factor based on equipment type and drying class. Class 1 uses a factor of 100, Class 2 uses 50, and Class 3 uses 40 for LGR units.

    What should restoration contractors say when a TPA denies equipment based on RH?

    Ask them to cite the published standard their threshold comes from. If they reference an internal guideline rather than the ANSI/IICRC S500, that threshold has no technical standing. Present your S500-based calculation as the documented industry standard for equipment justification.

  • Claude 4 Release Date & Deprecation: What’s Changing June 2026

    Claude 4 Release Date & Deprecation: What’s Changing June 2026

    Claude AI · Fitted Claude

    Anthropic hasn’t announced a specific “Claude 4” as a distinct release — the current model generation is the Claude 4.x series, with Claude Opus 4.6 and Claude Sonnet 4.6 as the current flagship models. If you’re searching for Claude 4, you’re likely looking for the current generation. Here’s exactly what’s live, what the naming means, and what to watch for next.

    Current status (April 2026): The Claude 4.x model family is live. Claude Opus 4.6 (claude-opus-4-6) and Claude Sonnet 4.6 (claude-sonnet-4-6) are Anthropic’s current production models. These are the “Claude 4” generation.

    The Current Claude 4.x Lineup

Model | API String | Status | Position
Claude Opus 4.6 | claude-opus-4-6 | ✅ Live | Flagship / maximum capability
Claude Sonnet 4.6 | claude-sonnet-4-6 | ✅ Live | Production default / balanced
Claude Haiku 4.5 | claude-haiku-4-5-20251001 | ✅ Live | Speed / cost efficiency

    Claude Model Naming: How It Works

    Anthropic uses a generation.version naming convention. The “4” in Claude 4.6 denotes the fourth major model generation. The “.6” is a version within that generation — a meaningful update that improves on the generation’s base capabilities without being an entirely new architecture.

    This is why there’s no single “Claude 4 release date” to point to — the Claude 4.x family has been rolling out incrementally, with different model tiers (Haiku, Sonnet, Opus) shipping at different points within the generation. The generation is live; you’re using it now if you’re on current Claude models.

    Claude 4 vs Claude 3: What Changed

    The jump from Claude 3.x to Claude 4.x brought improvements across reasoning, coding accuracy, instruction-following, and agentic capability. Claude 3.5 Sonnet — released in mid-2024 — was the model that first clearly demonstrated Claude could compete with and often exceed GPT-4o on most professional benchmarks. The 4.x series extended those gains.

    The most notable improvements in the 4.x generation: stronger performance on multi-step reasoning, better coherence in long agentic sessions, and improved accuracy on coding tasks including the SWE-bench benchmark for real-world software engineering.

    What Comes After Claude 4.x

Anthropic hasn’t announced a Claude 5 release date or feature set. Based on the pace of releases (a new major generation every several months, point releases more often), the next major generation will likely arrive within the year. When it does, the pattern will probably hold: the new mid-tier model (Sonnet) will likely outperform the current top-tier (Opus) on most tasks, at a fraction of the cost.

    For anticipation content on the next Sonnet release, see Claude Sonnet 5: What We Know. For the current model API strings and specs, see Claude API Model Strings — Complete Reference.

    Frequently Asked Questions

    When does Claude 4 come out?

    Claude 4 is already out — the current model generation is Claude 4.x. Claude Opus 4.6 and Claude Sonnet 4.6 are live and in production as of April 2026. There’s no separate “Claude 4” launch pending; you’re on it.

    What is Claude 4?

    Claude 4 refers to Anthropic’s fourth major model generation — currently the Claude 4.x series including Opus 4.6, Sonnet 4.6, and Haiku 4.5. The generation brought improvements in reasoning, coding, instruction-following, and agentic performance over Claude 3.

    Is Claude 4 better than Claude 3?

    Yes, across most benchmarks and practical tasks. The Claude 4.x generation improves on Claude 3 in reasoning accuracy, coding performance, long-context coherence, and agentic capability. Claude 3.5 Sonnet — the bridge between generations — was the model that first demonstrated Claude could consistently outperform GPT-4o on professional tasks.

    Need this set up for your team?
    Talk to Will →

  • Anthropic vs OpenAI: What’s Different, What Matters, and Which to Use

    Anthropic vs OpenAI: What’s Different, What Matters, and Which to Use

    Tygart Media Strategy
Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

Anthropic and OpenAI are the two most consequential AI labs in the world right now, and they’re building from fundamentally different starting points. Both produce frontier AI models, with Claude and ChatGPT as their respective flagship consumer products. But their philosophies, ownership structures, and approaches to AI development diverge in ways that matter for anyone paying attention to where AI is going.

    Short version: OpenAI is larger, older, and has more products. Anthropic is smaller, younger, and more focused on safety as a core design methodology. Both are capable of frontier AI — the difference shows in philosophy and approach more than in raw capability benchmarks.

    Anthropic vs. OpenAI: Side-by-Side

Factor | Anthropic | OpenAI
Founded | 2021 | 2015
Flagship model | Claude | GPT / ChatGPT
Legal structure | Public Benefit Corporation | For-profit (converted from nonprofit)
Key investors | Google, Amazon | Microsoft, various VC
Safety methodology | Constitutional AI | RLHF + policy layers
Consumer product | Claude.ai | ChatGPT
Image generation | Via API (Vertex AI) | DALL-E built in
Agentic coding tool | Claude Code | Codex / Operator
Tool/integration standard | MCP (open standard) | Function calling / plugins
    Not sure which to use?

    We’ll help you pick the right stack — and set it up.

    Tygart Media evaluates your workflow and configures the right AI tools for your team. No guesswork, no wasted subscriptions.

    The Founding Story: Why Anthropic Split From OpenAI

    Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and several colleagues who had been senior researchers at OpenAI. The departure was driven by disagreements about safety priorities and the pace of commercial development. The founders believed that as AI systems became more capable, the risk of harm grew in ways that required dedicated research and more cautious deployment — not just policy layers added after the fact.

    That founding philosophy is baked into how Anthropic builds Claude. Constitutional AI — Anthropic’s training methodology — teaches Claude to evaluate its own outputs against a set of principles rather than optimizing purely for human approval. The result is a model more likely to push back, express uncertainty, and decline harmful requests even under pressure.

    What Each Company Does Better

    Anthropic’s strengths: Safety methodology, writing quality, instruction-following precision, long-context coherence, and Claude Code for agentic development. The public benefit corporation structure gives leadership more control over deployment decisions than investor pressure would otherwise allow.

    OpenAI’s strengths: Broader product ecosystem, DALL-E image generation built into ChatGPT, more established enterprise relationships, larger user base, and more third-party integrations built on their API over a longer period. GPT-4o is competitive with Claude on most benchmarks.

    The Safety Philosophy Difference

    This is the substantive philosophical divide. Both companies have safety teams and publish research. But Anthropic was founded specifically on the thesis that safety research needs to be a primary design input — not a compliance function. Constitutional AI is an attempt to operationalize that at the training level.

    OpenAI’s approach has historically been more RLHF-forward (reinforcement learning from human feedback) with safety addressed through usage policies and model behavior guidelines. The debate between these approaches is genuinely unresolved in the AI research community — neither has proven definitively superior for long-term safety outcomes.

    For Users: Does the Philosophy Difference Matter?

    Day to day, most users experience the difference as: Claude is more likely to push back, more honest about uncertainty, and more consistent in following complex instructions. ChatGPT has more features in the consumer product — image generation, a wider integration ecosystem — and is more likely to give you what you asked for even if what you asked for is slightly wrong.

    For enterprises evaluating which API to build on: both are capable, both have enterprise tiers, and the choice often comes down to which performs better on your specific workload. For safety-sensitive applications or regulated industries, Anthropic’s explicit safety focus and public benefit structure are meaningful differentiators.

    For the Claude vs. ChatGPT product comparison, see Claude vs ChatGPT: The Honest 2026 Comparison.

    Frequently Asked Questions

    What is the difference between Anthropic and OpenAI?

    Both are frontier AI labs — Anthropic makes Claude, OpenAI makes ChatGPT/GPT. Anthropic was founded by former OpenAI researchers who prioritized safety as a core design methodology. It’s structured as a public benefit corporation. OpenAI is older, larger, and has a broader product ecosystem including image generation and a longer history of enterprise integrations.

    Is Anthropic better than OpenAI?

    Neither is definitively better — they’re different. Claude (Anthropic) tends to win on writing quality, instruction-following, and safety calibration. ChatGPT (OpenAI) wins on ecosystem breadth, image generation, and third-party integrations. The better choice depends on your specific use case.

    Why did Anthropic founders leave OpenAI?

    The Anthropic founders — including Dario and Daniela Amodei — left OpenAI over disagreements about safety priorities and the pace of commercial deployment. They believed AI safety needed to be a primary research focus built into model training, not an add-on. That conviction became Anthropic’s founding mission and Constitutional AI methodology.

  • Claude vs ChatGPT Reddit: What Users Actually Say in 2026

    Claude vs ChatGPT Reddit: What Users Actually Say in 2026

    Claude AI · Fitted Claude

    If you’ve spent any time on Reddit trying to figure out whether Claude or ChatGPT is actually better, you’ve seen the debate play out across r/ChatGPT, r/ClaudeAI, r/artificial, and r/MachineLearning. Here’s what Reddit actually says — the real consensus that emerges from people using both tools daily, not marketing copy.

    Reddit’s general consensus: Claude wins for writing quality, nuanced reasoning, and following complex instructions. ChatGPT wins for integrations, image generation, and ecosystem breadth. Power users often keep both. The Claude subreddit skews toward people who’ve already switched; ChatGPT subreddits have more defenders of the status quo.

    What Reddit Says Claude Does Better

    “Claude doesn’t sound like an AI”

    This is the most consistent thread in Claude discussions on Reddit. Users repeatedly describe Claude’s writing as more natural, less formulaic, less likely to fall into the bullet-point-heavy structure that ChatGPT defaults to. Threads asking “which is better for writing?” heavily favor Claude. The specific complaints about ChatGPT — sycophantic openers, generic structure, “certainly!” affirmations — get cited constantly as reasons people switched.

    Instruction-following and context retention

    Multi-part prompts with specific constraints are a recurring Reddit test. Users report Claude holds requirements more consistently through long responses — if you say “don’t use bullet points” or “write in first person” at the start, Claude is less likely to drift mid-response. ChatGPT gets called out frequently for “forgetting” constraints partway through.

    Honesty about uncertainty

    Reddit threads about AI hallucination tend to frame ChatGPT as more confidently wrong and Claude as more willing to express uncertainty. This matters for research and factual tasks — Claude saying “I’m not certain about this” is more useful than ChatGPT making something up with conviction.

    Long documents and large context

    Users uploading long PDFs, code files, or research papers consistently report better results from Claude. Claude’s 200K context window and coherence across long inputs gets cited as a practical advantage for document-heavy work.

    What Reddit Says ChatGPT Does Better

    Image generation

    DALL-E integration is the most cited ChatGPT advantage. Reddit users who need image generation in their workflow find it more convenient to stay in ChatGPT than to use a separate tool. Claude doesn’t generate images natively in the web interface, which is a real gap for this use case.

    Plugin and integration ecosystem

    ChatGPT’s broader plugin and connection ecosystem gets cited often by users who rely on specific third-party integrations. Although Claude’s MCP integrations are expanding rapidly, ChatGPT has more established connections across consumer apps.

    Code interpreter for data analysis

    ChatGPT’s ability to run Python in-chat, generate charts, and work interactively with data files is repeatedly cited as a concrete advantage. Reddit users doing exploratory data analysis prefer ChatGPT’s sandbox for this specific workflow.

    The Honest Reddit Meta-Conclusion

    The most upvoted takes on Reddit tend to be: use Claude as your primary tool if you do writing, analysis, or complex reasoning work. Keep ChatGPT for image generation and integrations. The “I switched to Claude and never looked back” posts get more engagement than the reverse — but the “I use both and they serve different purposes” takes are probably the most accurate.

    For a structured comparison rather than crowd sentiment, see Claude vs ChatGPT: The Honest 2026 Comparison and Is Claude Better Than ChatGPT?

    Frequently Asked Questions

    What does Reddit say about Claude vs ChatGPT?

    Reddit’s general consensus favors Claude for writing quality, instruction-following, and nuanced reasoning, while ChatGPT wins for image generation and integrations. Power users typically keep both. The Claude subreddit (r/ClaudeAI) skews heavily toward satisfied switchers.

    Is Claude more popular than ChatGPT on Reddit?

    ChatGPT has a larger subreddit by subscriber count. Claude’s subreddit (r/ClaudeAI) is smaller but highly engaged and skews toward daily professional users. The cross-subreddit sentiment on comparison threads consistently shows Claude gaining ground in preference, particularly for writing tasks.

    Why do Reddit users prefer Claude for writing?

    The most cited reasons: Claude produces more natural prose that doesn’t immediately read as AI-generated, it follows style instructions more precisely, and it’s less likely to default to formulaic structures. Reddit users specifically criticize ChatGPT’s tendency toward sycophantic openers and excessive bullet points — Claude avoids both more reliably.

    Need this set up for your team?
    Talk to Will →

  • What UCP Teaches Us About RCP: How Open Protocols Create Industry Movements

    What UCP Teaches Us About RCP: How Open Protocols Create Industry Movements

    Tygart Media Strategy
Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    When Google launched the Universal Commerce Protocol at NRF in January 2026, the announcement was framed as an e-commerce story. Shopify, Walmart, Target, Visa — merchants and payment processors getting their systems ready for AI agents that shop, compare, and execute purchases without human intervention. That framing is correct but incomplete. UCP is not just a commerce standard. It is a template for how open protocols create movements.

The Restoration Carbon Protocol is a different kind of standard in a completely different industry. But when you understand what UCP actually does architecturally — and why it succeeded where dozens of previous e-commerce APIs failed — you start to see exactly how RCP gets from a 31-article framework on tygartmedia.com to an adopted industry-wide standard that BOMA, IFMA, and institutional ESG reporters actually depend on.

    The mechanism is the same. The domain is different. And there is a version two of RCP that plugs directly into the UCP trust architecture — if the restoration industry moves in the next 18 months.


    What UCP Actually Does That Previous Commerce APIs Didn’t

    The history of e-commerce is littered with failed attempts at standardization. Every major platform — Amazon, eBay, Shopify, Magento — built its own API. Merchants implemented each one separately. Integrators spent years building custom connectors. The problem was not technical. The problem was trust and authentication. Every API required a bilateral relationship: the merchant trusted this specific buyer’s agent, that agent trusted this specific merchant’s data. Scaling to the open web required n² trust relationships. It never worked.

    UCP solved this with a different architecture. Instead of bilateral trust, it established a protocol layer — a shared standard that any compliant agent and any compliant merchant can speak without a pre-existing relationship. An AI agent that implements UCP can query any UCP-compliant catalog, check any UCP-compliant inventory, and execute against any UCP-compliant checkout — not because it has a relationship with that merchant, but because both parties speak the same authenticated protocol.

    The authentication is the product. UCP’s standardized interface means that a merchant’s decision to implement the protocol is simultaneously a decision to trust any UCP-authenticated agent. The trust is embedded in the standard, not in the bilateral relationship.

    Google’s Agent Payments Protocol (AP2), which sits alongside UCP, formalized this with “mandates” — digitally signed statements that define exactly what an agent is authorized to do and spend. The mandate is the credential. Any merchant who accepts UCP mandates accepts a verifiable statement of agent authorization without knowing anything specific about the agent that issued it.

    That architecture — open protocol, embedded authentication, mandate-based trust — is exactly what the restoration industry needs for Scope 3 emissions data. And RCP v1.0 has already built the content layer. The question for v2 is whether to build the authentication layer.


    The RCP Authentication Problem (That UCP Already Solved)

    RCP v1.0 produces per-job emissions records — JSON-structured Job Carbon Reports that restoration contractors deliver to commercial property clients for their GRESB, SBTi, and SB 253 reporting. The framework is solid. The methodology is sourced and auditable. The schema is machine-readable.
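To make the "machine-readable JSON record" concrete, here is a minimal sketch of what a per-job record in that shape could look like. The field names, job identifier, and figures below are hypothetical illustrations of the structure the article describes, not the actual RCP-JCR-1.0 schema.

```python
import json

# Illustrative only: field names and values are invented for this sketch;
# the real RCP Job Carbon Report schema may differ.
job_carbon_report = {
    "schema": "RCP-JCR-1.0",
    "job_id": "2026-0412-WTR-0087",      # hypothetical identifier
    "job_type": "water_mitigation",
    "reporting_scope": "scope3",          # feeds GRESB / SBTi / SB 253 workflows
    "emissions_kg_co2e": {
        "equipment_power": 412.6,         # dehus, air movers, etc.
        "vehicle_transport": 96.3,
        "material_disposal": 58.1,
    },
    "emission_factor_vintage": "2026",
    "total_kg_co2e": 567.0,
}

print(json.dumps(job_carbon_report, indent=2))
```

The point of a structure like this is that a property manager's ESG platform can ingest it without a human re-keying numbers into a spreadsheet.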

    But right now, there is no authentication layer. A property manager who receives an RCP Job Carbon Report from a contractor has no way to verify that the contractor actually follows the methodology, uses the current emission factors, or has gone through any validation process. They have to trust the contractor’s word — which is exactly the problem that makes Scope 3 data from supply chains unreliable for ESG auditors.

    This is the bilateral trust problem all over again. The property manager trusts this specific contractor’s data. That contractor trusts this specific property manager’s reporting process. It does not scale to a portfolio of 200 contractors across 800 properties.

    UCP solved the equivalent problem in commerce. The RCP organization — whoever formally governs the standard — can solve the same problem in ESG supply chain reporting with an analogous architecture.


    What RCP Certification Could Look Like in a UCP-Style Architecture

    Imagine a restoration contractor completes an RCP certification process. They demonstrate that they collect the 12 required data points, apply the current emission factors, produce Job Carbon Reports in the RCP-JCR-1.0 schema, and maintain source documents for seven years. The RCP organization validates this and issues a cryptographically signed certification credential — an RCP Mandate.

    The RCP Mandate is the contractor’s credential. It is not issued to a specific property manager. It is not dependent on a bilateral relationship. It is a verifiable statement, signed by the RCP authority, that this contractor’s emissions data meets the methodology standard. Any property manager, ESG platform, or auditor who accepts RCP Mandates can trust the data from any RCP-certified contractor — not because they know that contractor, but because the standard’s authentication is embedded in the credential.

    This is precisely how UCP mandates work in commerce. The signed statement creates protocol-level trust that does not require a pre-existing relationship.

    The downstream effects are the same as in commerce:

    • For contractors: RCP certification becomes a competitive signal that travels with the data. An RCP Mandate delivered with a Job Carbon Report tells the property manager’s ESG team: this data does not need to be validated separately. It has already been validated by a recognized standard.
    • For property managers: They can accept RCP-certified contractor data directly into their ESG reporting workflows without manual review. The certification is the audit trail. Measurabl, Yardi Elevate, and Deepki — the ESG data management platforms most of them use — can be built to accept RCP Mandate credentials alongside RCP JSON records and flag them automatically as verified-methodology data.
    • For ESG auditors: A property portfolio where all restoration contractor data comes from RCP-certified vendors is auditable without going back to each contractor. The mandate chain is the evidence. Limited assurance under CSRD or SB 253 becomes a single check — are these vendors RCP-certified? — rather than a vendor-by-vendor methodology review.
    • For the industry: Certification creates a selection mechanism. Property managers who require RCP-certified vendors in their preferred contractor agreements are no longer asking for a one-off document. They are asking for protocol compliance — the same way a merchant asking for UCP compliance is not asking for a custom integration, they are asking for standards adoption.

    The Protocol Stack for RCP v2

    Following the UCP architecture model, a complete RCP v2 would have three layers — matching the commerce, payments, and infrastructure layers of the agentic commerce stack:

    Layer 1: The Data Layer (Already Built — RCP v1.0)

    The methodology, emission factors, JSON schema, five job type guides, audit readiness documentation, and public API. This is the equivalent of UCP’s catalog query and inventory check layer — the standardized interface for what data is produced and how it is structured. RCP v1.0 is complete at this layer.

    Layer 2: The Authentication Layer (RCP v2 Target)

    The certification program, the mandate credential, the verification mechanism. This is the equivalent of UCP’s trust and authentication architecture — the layer that makes data from one party trusted by another without a bilateral relationship. Key components:

    • RCP Contractor Certification: documented audit of data capture practices, schema compliance, emission factor vintage, and source document retention
    • RCP Mandate: cryptographically signed certification credential, issued per contractor, versioned to the RCP release used, with an expiration and renewal cycle
    • Mandate verification endpoint: a public API (building on the existing tygart/v1/rcp namespace) where any platform can POST a mandate token and receive a verified/not-verified response with credential metadata
    • Certified contractor registry: a public directory of RCP-certified organizations, queryable by name, state, and certification status
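
    To make the mandate credential concrete, here is a minimal Python sketch of what issuing and verifying one could look like. Everything here is an illustrative assumption — the field names, the token format, and the HMAC-over-JSON signing scheme are not part of the published RCP v1.0 materials, which define a data schema rather than a credential format.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical: a stand-in for the registry's private signing key.
REGISTRY_KEY = b"demo-registry-signing-key"

def issue_mandate(contractor: str, rcp_version: str, expires: str) -> str:
    """Issue a signed mandate token as base64(payload).hex(signature)."""
    payload = json.dumps(
        {"contractor": contractor, "rcp_version": rcp_version, "expires": expires},
        sort_keys=True,
    ).encode()
    sig = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_mandate(token: str) -> dict:
    """Roughly what a verification endpoint would do with a POSTed token."""
    body, _, sig = token.rpartition(".")
    payload = base64.urlsafe_b64decode(body.encode())
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return {"verified": False}
    return {"verified": True, **json.loads(payload)}

token = issue_mandate("Acme Restoration LLC", "1.0", "2027-04-01")
print(verify_mandate(token)["verified"])               # True
print(verify_mandate(token + "tampered")["verified"])  # False
```

    A production credential would more likely use public-key signatures (so platforms can verify without a shared secret), but the shape of the check — decode, verify signature, return credential metadata — is the same.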

    Layer 3: The Infrastructure Layer (RCP v2 Target)

    The machine-to-machine data exchange infrastructure — the equivalent of MCP and A2A in the agentic commerce stack. A contractor’s job management system (Encircle, PSA, Dash, Xcelerate) that natively implements RCP can transmit certified Job Carbon Reports directly to a property manager’s ESG platform without human intermediation. The report travels with the mandate credential. The platform verifies the credential, ingests the data, and flags it as RCP-verified — automatically. No email, no manual upload, no data entry.
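
    The ingest side of that flow can be sketched in a few lines. The report field names ("job_id", "kg_co2e", "mandate") and the verifier interface are hypothetical — a sketch of the shape of the exchange, not a defined RCP v2 API.

```python
# Hypothetical sketch of what an ESG platform does with a pushed report:
# verify the attached mandate, ingest the data, flag its provenance.
def ingest_report(report: dict, verify) -> dict:
    result = verify(report["mandate"])  # e.g. a call to the mandate endpoint
    return {
        "job_id": report["job_id"],
        "kg_co2e": report["kg_co2e"],
        "source": "RCP-verified" if result["verified"] else "unverified",
    }

report = {"job_id": "WTR-1042", "kg_co2e": 412.7, "mandate": "…token…"}
row = ingest_report(report, verify=lambda t: {"verified": True})  # stubbed verifier
print(row["source"])  # RCP-verified
```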

    This is what makes it a movement rather than a document standard. The data flows automatically between authenticated parties. The human steps are eliminated. The protocol becomes infrastructure.


    Why Open Protocol Architecture Enables Movements

    UCP didn’t succeed because Google built good documentation. It succeeded because Google made it open — any merchant can implement it, any agent can speak it, no license fee, no bilateral negotiation, no approval required. Shopify and a regional boutique retailer are equal participants in the UCP ecosystem because the protocol is the credential, not the relationship with Google.

    That openness is what creates network effects. Every new UCP-compliant merchant makes the protocol more valuable for every agent. Every new UCP-compliant agent makes the protocol more valuable for every merchant. The standard grows because participation is self-reinforcing.

    RCP v1.0 is already open. The framework is CC BY 4.0 — free to use, implement, and build upon. The API is public. The emission factors are published with sources. Any restoration company can implement it today without permission.

    What RCP v2 adds is the authentication layer that makes open participation verifiable. The difference between “any company claims to follow RCP” and “any company can prove they follow RCP” is the difference between a document standard and a protocol. And the difference between a protocol and a movement is whether the infrastructure layer — the machine-to-machine data exchange — gets built.

    The agentic commerce stack took 18 months from UCP’s launch to meaningful adoption in production commerce systems. The RCP timeline is not 18 months from today — it’s 18 months from the moment RIA, IICRC, or a major industry insurer formally endorses the standard. That endorsement is the equivalent of Shopify and Walmart signing on to UCP at NRF. It’s the signal that tells the rest of the ecosystem: this is the standard, build to it.


    The Restoration Industry’s Unique Position

    BOMA and IFMA are working the problem from the property owner side — how do we get our vendor supply chains to report Scope 3 data? They don’t have the answer because the answer requires contractor-side infrastructure that commercial real estate organizations cannot build. They can mandate data. They cannot build the methodology.

    The restoration industry can. The 12 data points are already defined. The five job type methodologies are already published. The JSON schema is live. The API is running. The audit readiness guide exists. The only missing component is the formal certification program and the mandate credential that makes all of it protocol-grade rather than document-grade.

    This is what positions restoration as the leading industry in commercial property Scope 3 compliance — not just a participant but the infrastructure provider. The industry that built the standard that the property management industry depends on. That is a fundamentally different value proposition than “we report our emissions.”

    The parallel to UCP is exact: Google didn’t just participate in e-commerce. They built the protocol layer that made agentic commerce possible at scale. The restoration industry, through RCP, can build the protocol layer that makes supply chain Scope 3 compliance possible at scale for commercial real estate. And unlike Google, the restoration industry doesn’t need to be invited to the table. The table was already set at tygartmedia.com/rcp.


    What RIA Savannah Should Start

    The conversation at RIA Savannah on April 27 isn’t about persuading the industry to care about carbon. It’s about presenting the infrastructure that already exists and asking whether the industry wants to formally govern it. The RCP v1.0 framework, the public API, the certification roadmap — these are things that exist today. The question for RIA leadership is whether they want the restoration industry to own the protocol layer for commercial property Scope 3 compliance, or whether they want to watch a property management trade association or a Canadian software company build something proprietary in their place.

    The window is real. ESG data platforms are making vendor integration decisions now. Property managers are establishing preferred contractor Scope 3 requirements now. California SB 253’s Scope 3 deadline is 2027. GRESB assessments with contractor data coverage scoring are active this year. The infrastructure moment is not coming. It is here.

    A movement needs three things: an open standard, an authentication layer, and a network effect. RCP v1.0 is the standard. The authentication layer is the RCP v2 roadmap. The network effect starts the moment an industry organization formally endorses the protocol and restoration contractors have a reason to get certified rather than merely compliant.

    That is what UCP teaches us about RCP. The protocol is not the product. The authenticated, machine-readable, verifiable data infrastructure that emerges from the protocol is the product. And the industry that builds that infrastructure owns the category.

  • Claude AI Pricing: Every Plan and API Rate (April 2026)

    Claude AI Pricing: Every Plan and API Rate (April 2026)

    Claude AI · Fitted Claude

    Anthropic’s pricing structure has more tiers, models, and billing modes than most people realize — and it changes with every major model release. This is the complete, updated breakdown of every Claude plan in April 2026: personal tiers, API pricing by model, Claude Code, Enterprise, and the student and team options most guides miss.

    The short version: Free (limited daily use) → Pro $20/mo (daily driver) → Max $100/mo (power users) → Team $30/user/mo (small teams) → API (pay per token, billed via Anthropic Console) → Enterprise (custom). Claude Code has its own Pro and Max tiers. Most people need Pro or the API — not both.

    Every Claude Plan at a Glance

    Plan | Price | Best for | Models included
    Free | $0 | Casual / occasional use | Sonnet (limited)
    Pro | $20/mo | Individual daily use | Haiku, Sonnet, Opus
    Max | $100/mo | Heavy individual use | All models, 5× Pro limits
    Team | $30/user/mo | Small teams (5+ users) | All models, shared billing
    Enterprise | Custom | Large orgs, compliance needs | All models + SSO, audit logs
    API | Per token | Developers building on Claude | All models, programmatic access
    Claude Code Pro | $100/mo | Developer agentic coding | All models + Code agent
    Claude Code Max | $200/mo | Heavy agentic coding | All models, 5× Code Pro limits

    Claude Pro: $20/Month — The Standard Tier

    Claude Pro is the tier the majority of regular users land on, and it’s priced identically to ChatGPT Plus. At $20/month you get:

    • Access to all current models — Haiku (fast/cheap), Sonnet (balanced), and Opus (most powerful)
    • Roughly 5× the daily usage of the free tier
    • Priority access during peak hours so you’re not sitting in a queue
    • Full Projects functionality for organizing work by client or topic
    • Extended context windows for long document work

    For most knowledge workers — writers, analysts, consultants, marketers — Pro is where the cost/value ratio peaks. The step up to Max only makes sense if you’re consistently pushing through Pro’s limits, which takes genuinely heavy use.

    Claude Max: $100/Month — For Power Users

    Max gives you 5× Pro’s usage limits. The math is straightforward: if Pro gets you through a full workday without hitting limits, Max gets you through five of those days on the same reset cycle. The target user is someone running extended agentic sessions, doing deep multi-document research, or using Claude as infrastructure rather than a tool.

    Max is not the right upgrade if you’re hitting Pro limits occasionally. It’s the right upgrade if you’re hitting them daily and it’s affecting your work.

    Claude Team: $30/User/Month — The Collaboration Tier

    Team sits between Pro and Enterprise and is designed for groups of five or more people who want shared billing, slightly higher usage limits than Pro, and the ability to collaborate on Projects. At $30/user/month it’s a meaningful premium over Pro but substantially cheaper than enterprise contracts.

    The Team plan also includes longer context windows and the ability to share Projects across team members — which is the primary reason to choose it over just buying everyone a Pro subscription independently.

    Claude Enterprise: Custom Pricing

    Enterprise is for organizations with compliance requirements, single sign-on needs, audit logging, data residency controls, or volume large enough that custom pricing makes financial sense. Anthropic doesn’t publish Enterprise pricing — you contact their sales team.

    The meaningful additions over Team: SSO/SAML integration, admin controls and usage reporting, data handling agreements for regulated industries, and the ability to set organization-wide guardrails on model behavior. If your legal team has opinions about where AI-generated data lives, Enterprise is the tier that answers those questions.

    Claude API Pricing: By Model (April 2026)

    API pricing is billed per token — the unit of text Claude processes. One token is roughly four characters or about three-quarters of a word. Pricing is set separately for input tokens (what you send) and output tokens (what Claude returns), with output typically costing more.

    Model | Input (per M tokens) | Output (per M tokens) | Best for
    Claude Haiku | ~$1.00 | ~$5.00 | High-volume, fast tasks
    Claude Sonnet | ~$3.00 | ~$15.00 | Balanced quality/cost
    Claude Opus | ~$5.00 | ~$25.00 | Complex reasoning, quality-critical

    These are approximate figures — Anthropic updates API pricing with each model generation and publishes exact current rates on their pricing page. The Batch API offers roughly 50% off listed rates for non-time-sensitive workloads, which is significant for anyone running content or data pipelines.
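
    As a quick sketch of how per-token rates translate into dollars, here is the arithmetic for a sample workload. The rates are the approximate figures from the table above (illustrative, not authoritative — check Anthropic’s pricing page for current numbers).

```python
# Rough cost estimator: (input $/M tokens, output $/M tokens) per model.
# Rates are illustrative approximations from the table above.
RATES = {
    "haiku": (1.00, 5.00),
    "opus": (5.00, 25.00),
}

def estimate_cost(model, input_tokens, output_tokens, batch=False):
    in_rate, out_rate = RATES[model]
    cost = input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
    return cost * 0.5 if batch else cost  # Batch API: roughly 50% off

# 1,000 Haiku calls, each ~2K tokens in / ~500 tokens out:
print(round(estimate_cost("haiku", 2_000_000, 500_000), 2))              # 4.5
print(round(estimate_cost("haiku", 2_000_000, 500_000, batch=True), 2))  # 2.25
```

    The point of the exercise: at Haiku rates a thousand-call pipeline costs a few dollars, and batching halves that — which is why the Batch API matters for content and data pipelines.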

    Claude Code Pricing: The Agentic Developer Tier

    Claude Code is Anthropic’s dedicated agentic coding tool — a command-line agent that can read files, write code, run tests, and work autonomously on a real codebase. It’s a different product category from the web interface and has its own pricing structure.

    • Claude Code (included with Pro/Max) — limited access, sufficient for occasional coding sessions
    • Claude Code Pro ($100/mo) — full access for developers using it as a primary coding environment
    • Claude Code Max ($200/mo) — for teams or individuals running heavy autonomous coding workloads

    The question of whether Claude Code Pro is worth $100/month depends entirely on how much of your daily work it replaces. For a developer who would otherwise spend several hours on tasks Claude Code handles autonomously, the math works quickly. For occasional use, the included access with a standard Pro or Max subscription is sufficient.
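
    The break-even math is simple enough to write down. The hourly rate here is a placeholder, not a figure from this article:

```python
# How many billable hours per month Claude Code Pro has to save to pay
# for itself. The $75/hr rate is a placeholder assumption.
def breakeven_hours(monthly_price: float, hourly_rate: float) -> float:
    return monthly_price / hourly_rate

print(round(breakeven_hours(100, 75), 1))  # 1.3 hours/month at $75/hr
```

    At typical developer rates, the subscription pays for itself if it saves an hour or two a month — which is why the decision hinges on daily use, not the sticker price.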

    Claude Pricing vs ChatGPT Plus: The Direct Comparison

    Tier | Claude | ChatGPT
    Standard paid | Pro $20/mo | Plus $20/mo
    Power user | Max $100/mo | No direct equivalent
    Team | $30/user/mo | $30/user/mo
    Developer agentic coding | Code Pro $100/mo | No direct equivalent
    Image generation | Not included | DALL-E included
    API cheapest model | Haiku ~$1.00/M | GPT-4o mini ~$0.15/M

    Is There a Student Discount?

    Anthropic has not launched a widely available student pricing tier as of April 2026. Some universities have enterprise agreements that include Claude access — worth checking with your institution’s IT or library resources before paying out of pocket. There is a Claude for Education initiative but it’s directed at institutions rather than individual students.

    The free tier remains the most reliable option for students who need Claude access without spending money. For students who use it intensively for research or writing, Pro at $20/month is the realistic next step.

    How Claude Billing Actually Works

    For web interface plans (Free, Pro, Max, Team): monthly subscription billed to a card, cancel anytime, no annual commitment required.

    For API: prepaid credits loaded into the Anthropic Console. You buy credits in advance and they draw down as you use the API. There’s no surprise bill — when you run out of credits, API calls stop until you add more. Usage reporting is available in the Console so you can see exactly which models and how many tokens you’re consuming.

    Which Plan Is Right for You

    Choose Free if: you use AI occasionally, want to try Claude before committing, or use it as a secondary tool.

    Choose Pro if: Claude is part of your daily workflow — writing, analysis, research, content, strategy. This is the right tier for most professionals.

    Choose Max if: you’re consistently hitting Pro limits mid-day and it’s affecting your output.

    Choose Team if: you need shared billing and Projects across 5+ people.

    Choose API if: you’re a developer building applications with Claude, running automated pipelines, or integrating Claude into your own tools.

    Choose Claude Code Pro if: you’re a developer who wants Claude to work autonomously in your codebase — not just answer questions about code.

    Frequently Asked Questions

    How much does Claude cost per month?

    Claude is free with daily limits — see exactly what the free tier includes. Claude Pro is $20/month. Claude Max is $100/month. Claude Team is $30 per user per month. Claude Code Pro is $100/month and Claude Code Max is $200/month. API pricing is separate and billed per token.

    What is Claude Max and is it worth it?

    Claude Max is $100/month and gives 5× the usage limits of Pro. It’s worth it if you regularly hit Pro limits during heavy work sessions. If you’re not pushing through Pro limits consistently, Max isn’t necessary.

    How much does the Claude API cost?

    Claude API pricing varies by model. Haiku (fastest, cheapest) runs approximately $1.00 per million input tokens. Sonnet (balanced) runs approximately $3.00 per million input tokens. Opus (most powerful) runs approximately $5.00 per million input tokens. Output tokens cost more than input. The Batch API offers approximately 50% off for non-time-sensitive jobs.

    What is Claude Team and how is it different from Pro?

    Claude Team is $30/user/month (minimum 5 users) and adds shared Projects, centralized billing, and slightly higher usage limits compared to individual Pro subscriptions. It’s designed for small teams collaborating on Claude-powered work rather than buying separate Pro accounts.

    Is Claude cheaper than ChatGPT?

    At the base paid tier, both Claude Pro and ChatGPT Plus are $20/month — identical pricing. Claude has a $100/month Max tier with no direct ChatGPT equivalent. On the API, ChatGPT’s cheapest models (GPT-4o mini) are less expensive per token than Claude Haiku, but the models serve different use cases. For most professionals comparing the two, the subscription pricing is a tie.

    Need this set up for your team?
    Talk to Will →

  • Claude vs ChatGPT: The Honest 2026 Comparison

    Claude vs ChatGPT: The Honest 2026 Comparison

    Claude AI · Fitted Claude

    Two AI assistants dominate the conversation right now: Claude and ChatGPT. If you’re trying to decide which one belongs in your workflow, you’ve probably already noticed that most “comparisons” online are surface-level takes written by people who spent an afternoon with each tool.

    This isn’t that. I run an AI-native agency that uses both tools daily across content, code, SEO, and client strategy. Here’s what actually separates them in 2026 — and when each one wins.

    Quick answer: Claude is better for long-context analysis, writing quality, and following complex instructions without drift. ChatGPT is better for integrations, image generation, and breadth of third-party plugins. For most knowledge workers, Claude is the daily driver — ChatGPT is the specialist.

    The Fast Verdict: Category by Category

    Category | Claude | ChatGPT | Notes
    Writing quality | ✅ Wins | | Less sycophantic, more natural voice
    Following complex instructions | ✅ Wins | | Holds multi-part instructions without drift
    Long document analysis | ✅ Wins | | 200K token context vs GPT-4o’s 128K
    Coding | ✅ Slight edge | | Claude Code is a dedicated agentic coding tool
    Image generation | | ✅ Wins | DALL-E 3 built in; Claude has no native image gen
    Third-party integrations | | ✅ Wins | GPT’s plugin/Custom GPT ecosystem is larger
    Web search | | ✅ Slight edge | Both have web search; GPT’s is more integrated
    Pricing (base) | Tie | Tie | Both $20/mo for Pro/Plus; API costs comparable
    Not sure which to use?

    We’ll help you pick the right stack — and set it up.

    Tygart Media evaluates your workflow and configures the right AI tools for your team. No guesswork, no wasted subscriptions.

    Writing Quality: Why Claude Has a Distinct Edge

    The difference becomes obvious when you give both models the same writing task and read the outputs side by side. ChatGPT has a tendency to over-affirm, over-structure, and reach for generic phrasing. Ask it to write a LinkedIn post and you’ll often get something that reads like a LinkedIn post — in the worst way.

    Claude’s outputs read closer to how a thoughtful human actually writes. Sentences vary. Paragraphs breathe. It doesn’t reflexively add a bullet list to every response or pepper the text with unnecessary bold text. It also pushes back more readily when an instruction doesn’t quite make sense, rather than producing confident-sounding nonsense.

    For any work that ends up in front of clients, readers, or stakeholders, Claude’s writing quality is a meaningful advantage. This holds for long-form articles, email drafts, executive summaries, and proposal copy.

    Context Window: The Practical Difference

    Claude’s context window — the amount of text it can hold and reason over in a single conversation — is substantially larger than ChatGPT’s standard offering. Claude Sonnet and Opus both support up to 200,000 tokens. GPT-4o tops out at 128,000 tokens.

    In practice, this matters for:

    • Analyzing long contracts, reports, or research documents in one pass
    • Working with large codebases without losing track of what’s already been discussed
    • Multi-document analysis where you need to synthesize across sources
    • Long agentic sessions where conversation history is critical

    If you regularly work with documents over 50–80 pages or run long agentic workflows, Claude’s context advantage is a functional one, not just a spec sheet number.
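
    The page math behind those token figures is worth making explicit. The conversion ratios here are common rules of thumb (roughly 0.75 words per token, ~500 words per page), not exact values:

```python
# Rough conversion from context-window tokens to document pages.
# Ratios are rules of thumb, not exact: ~0.75 words/token, ~500 words/page.
TOKENS_PER_WORD = 4 / 3
WORDS_PER_PAGE = 500

def pages_that_fit(context_tokens: int) -> int:
    return int(context_tokens / TOKENS_PER_WORD / WORDS_PER_PAGE)

print(pages_that_fit(200_000))  # 300 -- Claude's window
print(pages_that_fit(128_000))  # 192 -- GPT-4o's window
```

    Both windows hold far more than an 80-page report, but the gap matters once you stack multiple documents plus a long conversation history into the same session.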

    Instruction Following: Where Claude Consistently Outperforms

    Give Claude a complex, multi-part instruction with specific constraints — “write this in third person, under 400 words, no bullet points, mention X and Y but not Z, match this tone” — and it tends to hold all of those requirements across the full response. ChatGPT frequently drifts, especially on longer outputs.

    This matters most for:

    • Prompt-heavy workflows where precision is required
    • Batch content generation with strict brand voice rules
    • Agentic tasks where Claude is executing multi-step operations
    • Any scenario where you’ve spent time engineering a precise prompt

    Anthropic built Claude with a focus on being genuinely helpful without being sycophantic — meaning it’s designed to give you the accurate answer, not the agreeable one. In practice, Claude is more likely to flag when something in your request is unclear or contradictory rather than guessing and producing something confidently wrong.

    Coding: Claude Code vs ChatGPT

    For general coding questions — syntax, debugging, explaining code — both models perform well. The meaningful differentiation is at the agentic level.

    Anthropic’s Claude Code is a dedicated command-line coding agent that can work autonomously on a codebase: reading files, writing code, running tests, and iterating. It’s a different category of tool than ChatGPT’s code interpreter, which executes code in a sandboxed environment but doesn’t have the same level of agentic control over a real development environment.

    For developers running AI-assisted workflows on actual projects, Claude Code is the more serious tool in 2026. For casual code help or one-off scripts, the gap is smaller.

    Where ChatGPT Wins: Image Generation and Ecosystem

    ChatGPT has a clear advantage in two areas that matter to a lot of users.

    Image generation: DALL-E 3 is built directly into ChatGPT Plus. You can go from text to image in one conversation. Claude has no native image generation capability — you’d need to use a separate tool like Midjourney, Adobe Firefly, or Imagen on Google Cloud.

    Third-party integrations: OpenAI’s plugin ecosystem and Custom GPTs have more breadth than Claude’s integrations. If you rely on specific third-party tools (Zapier, specific APIs, custom workflows), there’s more infrastructure already built around ChatGPT.

    If image creation is a daily part of your workflow, or you’re heavily invested in a ChatGPT-centric tool stack, these advantages are real.

    Claude vs ChatGPT for Coding Specifically

    When coding is the primary use case, the comparison shifts toward Claude — but it’s worth being precise about why.

    For writing clean, well-commented code from scratch, Claude tends to produce cleaner output with better reasoning explanations. It’s less likely to hallucinate function signatures or library methods. For debugging, Claude’s ability to hold large code files in context without losing track is a functional advantage.

    ChatGPT’s code interpreter (now called Advanced Data Analysis) is strong for data science workflows — running actual Python in a sandbox, generating visualizations, processing files. If your coding work is primarily data analysis and you want execution in the same tool, ChatGPT has the edge there.

    Claude vs ChatGPT for Writing Specifically

    For any writing that requires a genuine human voice — op-eds, thought leadership, nuanced argument — Claude is the better instrument. Its outputs require less editing to remove the robotic, list-heavy, over-hedged quality that plagues a lot of AI-generated content.

    For template-heavy writing — product descriptions, SEO-optimized articles at scale, standardized reports — the gap is smaller and comes down to your specific prompting setup.

    What Reddit Actually Says

    The Claude vs ChatGPT debate on Reddit (r/ChatGPT, r/ClaudeAI, r/artificial) consistently surfaces a few recurring themes:

    • Writers and researchers prefer Claude — repeatedly cited for better prose and genuine analysis
    • Developers are more split — Claude Code has built a dedicated following, but the ChatGPT ecosystem is more familiar
    • ChatGPT wins on integrations — the plugin/Custom GPT ecosystem still has more breadth
    • Claude is less annoying — specific complaints about ChatGPT’s sycophancy appear frequently (“it agrees with everything”, “it always says ‘great question’”)
    • Both have gotten better fast — direct comparisons from 2023–2024 often don’t hold in 2026

    Pricing: What You Actually Pay

    The base subscription pricing is identical: $20/month for Claude Pro and $20/month for ChatGPT Plus — see the full Claude pricing breakdown for everything beyond the base tier. If you’re wondering what the free tier actually includes before committing, see what Claude’s free tier gets you in 2026. Both include web search, file uploads, and access to advanced models.

    Where it diverges:

    • Claude Max ($100/mo) — for power users who need 5× the usage of Pro
    • ChatGPT doesn’t have a direct equivalent tier between Plus and Enterprise
    • API pricing — comparable but varies by model; Anthropic’s pricing is token-based and published transparently
    • Claude Code — has its own pricing structure for the agentic coding tool

    For most individual users, the $20/mo tier is the right starting point for either tool.

    Which One Is Actually Better in 2026?

    The honest answer: Claude is better for the work that benefits most from language quality, reasoning depth, and instruction precision. ChatGPT is better for the work that benefits from breadth of integrations and built-in image generation.

    For a solo operator, consultant, or knowledge worker whose primary outputs are written analysis, content, and strategy: Claude is the better daily driver. The writing is cleaner, the reasoning is more reliable, and the context window is more practical for serious document work.

    For a team already embedded in the OpenAI ecosystem — with Custom GPTs, plugins, and Zapier workflows built around ChatGPT — switching has real friction that may not be worth it unless writing quality is a high-priority problem.

    Before committing, check the Claude model comparison to understand which tier makes sense for your work, and the Claude prompt library to get the most out of whichever you choose. The most pragmatic setup for serious users: Claude for thinking and writing, with access to ChatGPT for when you need DALL-E or a specific integration it covers. At $20/month each, running both is a reasonable choice if the work justifies it.

    Frequently Asked Questions

    Is Claude better than ChatGPT?

    For writing quality, complex instruction following, and long-document analysis, Claude outperforms ChatGPT in most head-to-head tests. ChatGPT has the advantage in image generation and third-party integrations. The right answer depends on your primary use case.

    Can I use both Claude and ChatGPT?

    Yes, and many power users do. Both have $20/month Pro tiers. Running both gives you Claude’s writing and reasoning strength alongside ChatGPT’s DALL-E image generation and broader plugin ecosystem.

    Which is better for coding — Claude or ChatGPT?

    Claude has a slight edge for writing clean code and agentic coding workflows via Claude Code. ChatGPT’s Advanced Data Analysis (code interpreter) is better for data science work where you need code execution in a sandboxed environment. For general coding help, both are strong.

    Which AI is better for writing?

    Claude consistently produces better writing — less generic, less sycophantic, and closer to a natural human voice. Writers, editors, and content strategists repeatedly report that Claude’s outputs require less editing and drift less from the intended tone.

    Is Claude free to use?

    Claude has a free tier with limited daily usage. Claude Pro is $20/month and provides significantly more capacity. Claude Max at $100/month is for heavy users. API access is billed separately by token usage.

    Need this set up for your team?
    Talk to Will →

  • Why AI Agents Are Different From Chatbots, Automations, and APIs

    Why AI Agents Are Different From Chatbots, Automations, and APIs

    These terms get used interchangeably. They’re not the same thing. Here’s the actual distinction between each one, where the lines get genuinely blurry, and which category fits what you’re actually trying to build.

    Chatbots

    A chatbot is a software interface designed to simulate conversation. The defining characteristic: it’s stateless and reactive. You send a message; it responds; the exchange is complete. Each interaction is largely independent.

    Traditional chatbots (pre-LLM) operated on decision trees — “if the user says X, respond with Y.” Modern LLM-powered chatbots use language models to generate responses, which makes them dramatically more capable and flexible — but the fundamental architecture is the same: you ask, it answers, you ask again.

    What chatbots are good at: answering questions, providing information, routing conversations, handling defined service scenarios with natural language flexibility. What they’re not: action-takers. A chatbot can tell you how to cancel your subscription. An agent can cancel it.

    Automations

    Automations are rule-based workflows that execute when triggered. Zapier, Make, and similar tools are the canonical examples. When event A happens, do B, then C, then D.

    The key characteristic: the path is predefined. Every step is specified by the person who built the automation. If an unexpected situation arises that the automation wasn’t built for, it either fails or skips the step. There’s no reasoning about what to do — there’s only executing the specified path or not.

    Automations are highly reliable for well-defined, stable processes. They break when edge cases arise that weren’t anticipated. They scale perfectly for the exact task they were built for; they don’t generalize.

    APIs

    An API (Application Programming Interface) is a communication contract — a defined way for software systems to talk to each other. APIs are infrastructure, not agents or automations. They’re the mechanism through which agents and automations take action in external systems.

    When an AI agent “uses Slack,” it’s calling Slack’s API. When an automation “posts to Twitter,” it’s calling Twitter’s API. The API is the door; agents and automations are the things that open it.

    Conflating APIs with agents is a category error. An API is a tool, not a behavior pattern.

    AI Agents

    An AI agent takes a goal and figures out how to accomplish it, using tools available to it, handling unexpected situations along the way, without a human specifying each step.

    The distinguishing characteristics versus the above:

    • vs. Chatbots: Agents take action in the world; chatbots respond to messages. An agent can book the flight, not just tell you how to book it.
    • vs. Automations: Agents reason about what to do next; automations execute predefined paths. When an unexpected situation arises, an agent adapts; an automation fails or skips.
    • vs. APIs: APIs are tools an agent uses; they’re not the agent itself. The agent is the reasoning layer that decides which API to call and what to do with the result.
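
    The contrast in the bullets above can be sketched in a few lines of Python. Everything here is a toy: the tools are stubs, and decide() is a hard-coded policy standing in for the model’s reasoning step.

```python
# Toy tools -- stand-ins for real API calls (the "doors" agents open).
def lookup_status(order_id): return "shipped"
def send_email(order_id, status): return f"emailed: {order_id} is {status}"

def automation(order_id):
    # Automation: every step specified in advance; no reasoning about order.
    status = lookup_status(order_id)
    send_email(order_id, status)
    return status

def agent(goal, tools, decide, max_steps=5):
    # Agent: a goal-directed loop where decide() (an LLM in practice)
    # chooses the next tool call based on the goal and results so far.
    history = []
    for _ in range(max_steps):
        action = decide(goal, history)
        if action is None:          # decide() judges the goal met
            return history
        name, arg = action
        history.append((name, tools[name](arg)))
    return history

tools = {"lookup": lookup_status, "email": lambda o: send_email(o, "shipped")}

def decide(goal, history):
    # Hard-coded "policy" for the demo; a real agent reasons here.
    if not history: return ("lookup", goal)
    if len(history) == 1: return ("email", goal)
    return None

print(agent("ORD-7", tools, decide))
# [('lookup', 'shipped'), ('email', 'emailed: ORD-7 is shipped')]
```

    The automation’s path is fixed in its source code; the agent’s path exists only in the loop, which is exactly why agents adapt where automations fail or skip.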

    Where the Lines Actually Blur

    In practice, real systems often combine these categories:

    LLM-powered chatbots with tool access: A customer service chatbot that can look up your order status, initiate a return, and send a confirmation email is starting to look like an agent — it’s taking actions, not just responding. The boundary between “advanced chatbot” and “limited agent” is genuinely fuzzy.

    Automations with AI decision steps: A Zapier workflow with an OpenAI or Claude step in the middle isn’t purely rule-based anymore — the AI step can produce variable outputs that affect what the automation does next. This is a hybrid: mostly automation, partly agentic.

    Agents with constrained scopes: An agent restricted to a single tool and a narrow task class starts to look like a sophisticated automation. The more constrained the scope, the more the distinction collapses in practice.

    The useful question isn’t “what category is this?” but “is this system reasoning about what to do, or executing a predefined path?” That’s the actual distinction that matters for how you build, monitor, and trust it.
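    That distinction can be made concrete with a toy sketch. Everything here is hypothetical (the order statuses, the action names, the stubbed "reasoning" step) — it only illustrates the shape of the difference, not any real product's code.

```python
def automation(order):
    # Predefined path: every branch was written in advance.
    if order["status"] == "delayed":
        return "send_delay_email"
    if order["status"] == "lost":
        return "issue_refund"
    return "no_action"  # unexpected states fall through silently

def agent_step(order, history):
    # Reasoning layer (stubbed here as a rule for illustration):
    # picks a next action even for states nobody enumerated ahead of time.
    if order["status"] not in ("delayed", "lost"):
        return "escalate_to_human"  # adapts instead of skipping
    return automation(order)
```

    The point of the sketch: the automation silently returns "no_action" on a state it was never built for, while the agent-style step produces a sensible response to the same surprise.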

    Why the Distinction Matters Operationally

    Reliability profile: Automations fail predictably — when an edge case hits a path that wasn’t built. Agents fail unpredictably — when their reasoning goes wrong in a way you didn’t anticipate. Different failure modes require different monitoring approaches.

    Maintenance overhead: Automations require explicit updates when processes change. Agents adapt to process changes automatically — but may adapt in unexpected ways that need to be caught and corrected.

    Auditability: Automations are fully auditable — you can read the workflow and know exactly what it does. Agents are less auditable — you can inspect their actions, but not fully predict them in advance. For compliance-sensitive contexts, this matters significantly.

    Build cost: Automations are faster to build for well-defined, stable processes. Agents are faster to deploy when the process is complex, variable, or not fully specified — because you’re specifying a goal rather than a procedure.

    For what agents can actually do in production: What AI Agents Actually Do. For a business owner’s introduction: AI Agents Explained for Business Owners. For hosted agent infrastructure: Claude Managed Agents FAQ.


    Hosted agent infrastructure pricing: Claude Managed Agents Pricing Reference.

  • What AI Agents Actually Do (Not the Hype Version)

    What AI Agents Actually Do (Not the Hype Version)

    Not the version where AI agents are going to replace all human jobs by 2030. The actual version, right now, based on what’s deployed in production.

    The Actual Definition

    What an AI agent is

    Software that takes a goal, breaks it into steps, uses tools to execute those steps, handles errors along the way, and keeps working without you directing every action. The distinguishing characteristic is autonomous multi-step execution — not just answering a question, but completing a task.

    The Key Distinction: One-Shot vs. Agentic

    Most people’s experience with AI is one-shot: you type something, the AI responds, the exchange is complete. That’s a language model doing inference. An AI agent is different in one specific way: it takes actions, checks results, and takes more actions based on what it found — often dozens of steps — without you approving each one.

    Example of one-shot AI: “Summarize this document.” You paste the document, the AI returns a summary. Done.

    Example of an AI agent doing the same task: “Research this topic and produce a summary with verified sources.” The agent searches the web, reads multiple pages, identifies conflicts between sources, runs additional searches to resolve them, synthesizes findings, and returns a summary with citations — without you specifying each search query or each page to read. You gave it a goal; it handled the steps.
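    The research example above has the shape of a loop: act, check the result, act again. A minimal sketch of that loop, with `search`, `read`, and `find_conflicts` as stand-in stubs (not a real API):

```python
def search(query):
    # Stub: pretend to query a search engine.
    return [f"page-about-{query}"]

def read(page):
    # Stub: pretend to read a page and extract a claim.
    return {"source": page, "claim": "X"}

def find_conflicts(notes):
    # Disagreement exists if sources make different claims.
    claims = {n["claim"] for n in notes}
    return len(claims) > 1

def run_research_agent(topic, max_rounds=3):
    notes, query = [], topic
    for _ in range(max_rounds):          # agent loop: act, check, act again
        for page in search(query):
            notes.append(read(page))
        if not find_conflicts(notes):    # check results before continuing
            break
        query = topic + " follow-up"     # run additional searches to resolve
    return {"summary": f"summary of {topic}",
            "sources": [n["source"] for n in notes]}
```

    No human chose the follow-up queries or the pages to read — the loop did, which is the "agentic" part.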

    What Agents Can Actually Do

    The tools an agent can use define its capability surface. Common tool categories in production agents:

    • Web search: Query search engines and retrieve current information
    • Code execution: Write and run code in a sandboxed environment, use results to inform next steps
    • File operations: Read, write, and modify files — documents, spreadsheets, data files
    • API calls: Interact with external services — CRMs, databases, project management tools, communication platforms
    • Browser control: Navigate web pages, fill forms, extract information
    • Memory: Store and retrieve information across steps within a session, sometimes across sessions

    The combination of these tools is what makes agents capable of genuinely autonomous work. An agent that can search, write code, execute it, check the results, and write findings to a document can complete a research and analysis task that would otherwise require hours of human work — without you steering each step.
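    One common way to think about the capability surface is as a registry of callables the agent is allowed to invoke — anything outside the registry is simply unreachable. A hypothetical sketch (tool names and behaviors are illustrative):

```python
# Hypothetical tool registry: the agent's capability surface is exactly
# the set of callables it is permitted to invoke.
TOOLS = {
    "web_search": lambda q: [f"result for {q}"],
    "write_file": lambda path, text: f"wrote {len(text)} bytes to {path}",
}

def call_tool(name, *args):
    # Requests for tools outside the surface are rejected, not improvised.
    if name not in TOOLS:
        raise ValueError(f"tool not in capability surface: {name}")
    return TOOLS[name](*args)
```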

    What “Autonomous” Actually Means in Practice

    Autonomous doesn’t mean unsupervised indefinitely. Production agents are typically configured with:

    • Defined scope: The tools the agent can use, the systems it can access, the actions it’s allowed to take
    • Guardrails: Actions that require human confirmation before proceeding — making a payment, sending an email externally, modifying a production database
    • Reporting: Checkpoints where the agent surfaces what it’s done and asks whether to continue

    Autonomy is a dial, not a switch. You set how much the agent handles independently versus checks in. Most production deployments start more supervised and reduce oversight as trust in the agent’s behavior is established.
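    The guardrail pattern described above can be sketched in a few lines. The action names and the `approve` hook are illustrative, not any platform's API:

```python
# Actions that never execute without a human confirmation step.
REQUIRES_CONFIRMATION = {"make_payment", "send_external_email", "modify_prod_db"}

def execute(action, approve=lambda a: False):
    # Default approve() denies, so risky actions are surfaced, not run.
    if action in REQUIRES_CONFIRMATION and not approve(action):
        return ("blocked", action)
    return ("executed", action)
```

    Turning the autonomy dial up amounts to shrinking the confirmation set or loosening the `approve` policy as trust is established.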

    Real Production Examples (Not Hypotheticals)

    Concrete examples from confirmed public deployments as of April 2026:

    • Rakuten: Deployed five enterprise Claude agents in one week on Anthropic’s Managed Agents platform — handling tasks across their e-commerce operations including data processing, content tasks, and operational workflows
    • Notion: Background agents that autonomously update workspace pages, synthesize database content, and process meeting notes into structured summaries without manual triggers
    • Sentry: Agents integrated into developer workflows — monitoring error streams, triaging issues, and surfacing relevant context to engineers
    • Asana: Project management agents that update task statuses, synthesize project health, and move work items based on defined triggers

    These are not pilots. These are production systems handling real operational load.

    How They’re Built

    An agent is built from three components:

    1. A language model: The reasoning layer — the part that decides what to do next, interprets tool results, and determines when the task is complete
    2. Tools: The action layer — APIs, code execution environments, file systems, or anything else the model can call to take action in the world
    3. Orchestration: The loop that connects them — manages the sequence of model calls and tool executions, maintains state between steps, handles errors
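    The three components compose into a loop. A minimal sketch, with `model` and `tools` as injectable stand-ins (all names hypothetical — real orchestration layers add checkpointing, sandboxing, and much more):

```python
def run_agent(model, tools, goal, max_steps=10):
    state = {"goal": goal, "observations": []}
    for _ in range(max_steps):
        step = model(state)                # 1. reasoning layer picks next action
        if step["action"] == "done":
            return step.get("result")      # model decided the task is complete
        try:
            out = tools[step["action"]](step.get("input"))  # 2. action layer
        except Exception as e:
            out = f"error: {e}"            # 3. orchestration handles errors
        state["observations"].append(out)  # state maintained between steps
    return None                            # gave up after max_steps
```

    The hosted-platform value proposition is essentially that this loop, hardened for production, is someone else's problem.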

    Historically, builders had to construct the orchestration layer themselves — a significant engineering investment. Hosted platforms like Claude Managed Agents handle the orchestration layer, letting builders focus on defining the agent’s goals, tools, and guardrails rather than the mechanics of running the loop.

    What Agents Are Not Good At (Yet)

    Honest calibration on current limitations:

    • Long-horizon planning with many unknowns: Agents perform best on tasks with relatively defined scope. Open-ended exploratory work over many days with fundamentally uncertain requirements is still better handled by humans in the loop at each major decision point.
    • Tasks requiring physical world interaction: No production general-purpose physical agent exists. Software agents operating through APIs and interfaces are the current state.
    • Tasks where errors are catastrophic: Agents make mistakes. For any irreversible, high-stakes action — financial transactions, production data modifications, external communications to important relationships — human confirmation steps should remain in the loop.

    For how hosted agent infrastructure works: Claude Managed Agents FAQ. For the difference between agents and chatbots: AI Agents vs. Chatbots, Automations, and APIs. For an SMB-focused explanation: AI Agents Explained for Business Owners.


    For pricing specifics on hosted agent infrastructure: Claude Managed Agents Complete Pricing Reference.

  • Claude Managed Agents vs. OpenAI Agents API — A Direct Comparison

    Claude Managed Agents vs. OpenAI Agents API — A Direct Comparison

    Tygart Media Strategy · Volume I, Issue 04 · Quarterly Position
    By Will Tygart

    You’re evaluating hosted agent infrastructure. Both Anthropic and OpenAI have one. Before you commit to either, here’s what’s actually different — not the marketing version, the architectural and pricing version.

    Bottom Line Up Front

    If your stack is Claude-native and you want to get to production fast without building orchestration infrastructure, Managed Agents is hard to beat. If you need multi-model flexibility or have OpenAI deeply embedded in your stack, the calculus changes. Lock-in is real on both sides.

    What Each Product Is

    Claude Managed Agents

    Anthropic’s hosted runtime for long-running Claude agent work. You define an agent (model, system prompt, tools, guardrails), configure a cloud environment, and launch sessions. Anthropic handles sandboxing, state management, checkpointing, tool orchestration, and error recovery. Launched April 8, 2026 in public beta.

    OpenAI Agents API

    OpenAI’s hosted agent infrastructure layer, launched earlier in 2026. Provides similar capabilities: hosted execution, tool integration, multi-agent coordination. Supports multiple OpenAI models (GPT-4o, o1, o3, etc.).

    Model Flexibility

    Managed Agents: Claude models only. Sonnet 4.6 and Opus 4.6 are the primary options for agent work. No multi-model mixing within the managed infrastructure.

    OpenAI Agents API: OpenAI models only, but a wider current model lineup (GPT-4o, o1, o3-mini depending on task). Like Managed Agents, it is locked to its own provider's models — not multi-model in the cross-provider sense.

    The practical implication: If your evaluation is “I want the best model for this specific task regardless of provider,” neither hosted solution gives you that. Both lock you to their provider’s models. The multi-model comparison matters for self-hosted frameworks (LangChain, etc.), not for managed hosted solutions.

    Pricing Structure

    Claude Managed Agents: Standard Claude token rates + $0.08/session-hour of active runtime. Idle time doesn’t bill. Code execution containers included in session runtime — not separately billed.

    OpenAI Agents API: Standard OpenAI token rates + usage-based tooling costs. Pricing structure varies by tool and model tier. Verify current rates at OpenAI’s pricing page — rates have changed multiple times as their agent products have evolved.

    Direct comparison difficulty: Without modeling the same specific workload against both providers’ current rates, headline comparisons mislead. Token rates differ by model, model capabilities differ, and “session runtime” isn’t a category OpenAI uses. Model the workload, not the headline number.
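    Workload modeling for the Managed Agents side is straightforward arithmetic. The $0.08/session-hour figure comes from the structure described above; the per-million-token rates in the example are placeholders, not published prices — substitute current rates from each provider's pricing page before drawing conclusions:

```python
def managed_agents_cost(input_mtok, output_mtok, active_hours,
                        in_rate, out_rate, session_rate=0.08):
    # Tokens billed at standard model rates (per million tokens),
    # plus $0.08 per session-hour of active runtime; idle time doesn't bill.
    return input_mtok * in_rate + output_mtok * out_rate + active_hours * session_rate

# Example: 20M input / 4M output tokens, 50 active session-hours,
# with HYPOTHETICAL per-million-token rates of $3.00 in / $15.00 out.
cost = managed_agents_cost(20, 4, 50, in_rate=3.00, out_rate=15.00)
```

    Building the equivalent model for the OpenAI side requires its own tooling-cost terms, which is exactly why headline comparisons mislead.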

    Infrastructure and Lock-In

    Both solutions create meaningful lock-in. This isn’t a criticism — it’s an honest description of the trade-off you’re making:

    Claude Managed Agents lock-in: Your agents run on Anthropic’s infrastructure with their tools, session format, sandboxing model, and checkpointing. Migrating to OpenAI’s Agents API or self-hosted infrastructure requires rearchitecting session management, tool integrations, and guardrail logic. One developer’s reaction at launch: “Once your agents run on their infra, switching cost goes through the roof.”

    OpenAI Agents API lock-in: Symmetric. Same dynamic in reverse. OpenAI’s session format, tool integration patterns, and infrastructure assumptions create equivalent switching costs to move to Anthropic’s platform.

    The honest framing: You’re not choosing “open” vs. “locked.” You’re choosing which provider’s lock-in you’re more comfortable with, given your existing infrastructure, model preferences, and vendor relationship.

    Data Sovereignty

    Both solutions run your data on provider-managed infrastructure. Neither currently offers native on-premise or multi-cloud deployment for the managed hosted layer. For companies with strict data sovereignty requirements, this is a parallel constraint on both platforms — not a differentiator.

    Production Track Record

    Claude Managed Agents: Launched April 8, 2026. Production users at launch: Notion, Asana, Rakuten (5 agents in one week), Sentry, Vibecode, Allianz. Anthropic’s agent developer segment run-rate exceeds $2.5 billion.

    OpenAI Agents API: Earlier launch gives more time in production, but the product has been revised significantly since initial release. Longer production history, but also more legacy architectural assumptions baked in.

    When to Choose Claude Managed Agents

    • Your stack is already Claude-native (you’re using Sonnet or Opus for most model calls)
    • You want to reach production without building orchestration infrastructure
    • Your tasks are long-running and asynchronous — the session-hour model fits naturally
    • The Notion, Asana, or Sentry integrations are relevant to your workflow
    • You want Anthropic’s specific safety and reliability guarantees

    When to Consider OpenAI’s Agents API Instead

    • Your stack is already heavily OpenAI-integrated (GPT-4o for primary model work, existing tool integrations)
    • You need access to reasoning models (o1, o3) for specific task types — Anthropic’s equivalent is Claude’s extended thinking, which has different characteristics
    • The specific tool integrations in OpenAI’s ecosystem are better matched to your stack
    • You want more production time at scale before committing to a platform

    When to Use Neither (Self-Hosted Frameworks)

    LangChain, LlamaIndex, and similar self-hosted frameworks remain viable — and better — when you genuinely need multi-model flexibility, on-premise execution, or tighter loop control than either hosted solution provides. The trade-off is engineering effort: months of infrastructure work that Managed Agents or OpenAI’s API eliminates.

    Complete pricing breakdown: Claude Managed Agents Pricing Reference. All Managed Agents questions: FAQ Hub. Enterprise deployment example: Rakuten: 5 Agents in One Week.