Category: Restoration Intelligence

The definitive resource for restoration company operators — business operations, marketing, estimating, AI, and growth strategy.

  • Water Damage Restoration: Scope 3 Emissions Mapping and Calculation Guide

    Water Damage Restoration: Scope 3 Emissions Mapping and Calculation Guide

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    This guide is the working document for calculating Scope 3 greenhouse gas emissions from water damage mitigation jobs under the Restoration Carbon Protocol. It contains the actual emission factors, the calculation methodology for each Scope 3 category, and a complete worked example from a real job type. A contractor who follows this guide will produce a per-job carbon figure that is defensible in a third-party ESG audit.

    Job Classification: Why It Matters Before You Calculate

    Your emissions total will vary by a factor of 10 or more depending on water category and drying class. Before calculating, classify the job correctly using IICRC S500 definitions:

    | Category | Source | Emissions Driver | Typical Total Range |
    | --- | --- | --- | --- |
    | Cat 1 / Class 1–2 | Clean supply water, limited area | Transportation dominant | 0.1–0.5 tCO2e |
    | Cat 2 / Any class | Gray water (washing machine, dishwasher, toilet overflow without feces) | Materials + transportation | 0.3–1.5 tCO2e |
    | Cat 3 / Any class | Black water (sewage, floodwater, standing water) | Hazmat disposal + transportation | 1.0–8.0 tCO2e |
    | Cat 3 / Class 3–4 | Black water, large affected area requiring demolition | All four categories significant | 3.0–12.0 tCO2e |

    Category 4: Transportation Emissions

    Transportation is typically the largest or second-largest emission source on water damage jobs. Calculate every vehicle separately.

    Emission Factors (EPA Mobile Combustion, 2024)

    | Vehicle Type | Fuel | kg CO2e per mile | Source |
    | --- | --- | --- | --- |
    | Passenger car / cargo van | Gasoline | 0.355 | EPA Table 2 |
    | Light-duty truck (crew cab, work van) | Gasoline | 0.503 | EPA Table 2 |
    | Light-duty truck | Diesel | 0.523 | EPA Table 2 |
    | Medium-duty truck (equipment trailer) | Diesel | 1.084 | EPA Table 2 |
    | Heavy-duty truck (dump truck, tanker) | Diesel | 1.612 | EPA Table 2 |
    | Heavy-duty truck (loaded, waste hauling) | Diesel | 2.25 | EPA Table 2 + load factor |

    Calculation formula: Vehicle miles × emission factor = kg CO2e. Convert to tCO2e by dividing by 1,000.

    What counts as “vehicle miles”: Round-trip distance from your facility or previous job to the loss site, multiplied by the number of trips. Include equipment pickup trips, progress check visits, and equipment retrieval trips. Do not include the vehicle miles of subcontractors — their emissions are captured in their own RCP calculation.
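    To make the arithmetic concrete, here is a minimal Python sketch of the Category 4 calculation using the factor table above. The dictionary keys and function name are illustrative, not part of the RCP specification.

    ```python
    # Category 4 transportation sketch. Factors are kg CO2e per mile from the
    # EPA Mobile Combustion table above; keys are illustrative labels.
    EF_KG_PER_MILE = {
        "cargo_van_gasoline": 0.355,
        "light_truck_gasoline": 0.503,
        "light_truck_diesel": 0.523,
        "medium_truck_diesel": 1.084,
        "heavy_truck_diesel": 1.612,
        "heavy_truck_loaded_diesel": 2.25,
    }

    def vehicle_kg_co2e(vehicle: str, round_trip_miles: float, trips: int) -> float:
        """Round-trip miles x trips x per-mile factor = kg CO2e for one vehicle."""
        return round_trip_miles * trips * EF_KG_PER_MILE[vehicle]

    # Two light gasoline trucks, 48-mile round trip, 4 trips each:
    kg = 2 * vehicle_kg_co2e("light_truck_gasoline", 48, 4)
    print(f"{kg:.0f} kg CO2e = {kg / 1000:.2f} tCO2e")  # 193 kg = 0.19 tCO2e
    ```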

    Category 1: Materials Emissions

    Emission Factors for Common Water Damage Materials

    | Material | Unit | kg CO2e per unit | Source |
    | --- | --- | --- | --- |
    | Quaternary ammonium antimicrobial (liquid) | Liter | 2.8 | EPA EEIO — Chemical manufacturing |
    | Hydrogen peroxide-based antimicrobial | Liter | 1.9 | EPA EEIO — Chemical manufacturing |
    | Desiccant drying agent (silica gel) | kg | 1.4 | EPA EEIO — Chemical manufacturing |
    | Disposable Tyvek suit (Category B) | Each | 1.2 | EPA EEIO — Apparel manufacturing |
    | Nitrile gloves (pair) | Pair | 0.3 | EPA EEIO — Rubber/plastics |
    | N95 respirator | Each | 0.4 | EPA EEIO — Medical equipment |
    | P100 half-face respirator cartridge (pair) | Pair | 0.8 | EPA EEIO — Medical equipment |
    | 6-mil polyethylene sheeting | | 0.55 | EPA EEIO — Plastics product manufacturing |
    | HEPA filter (air scrubber, standard) | Each | 3.2 | EPA EEIO — Industrial machinery |

    Note on antimicrobial volumes: If you don’t track liters applied per job, use these application rate proxies: Cat 2 jobs — 0.015 liters per sq ft of affected area. Cat 3 jobs — 0.025 liters per sq ft (double application typically required).
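    The Category 1 arithmetic follows the same pattern as Category 4. A minimal sketch, assuming the factor table above and the application-rate proxies just described; all names are illustrative:

    ```python
    # Category 1 materials sketch. Factors are kg CO2e per unit from the table
    # above; antimicrobial proxy rates are liters per sq ft by water category.
    MATERIAL_EF = {
        "quat_antimicrobial_liter": 2.8,
        "tyvek_suit": 1.2,
        "nitrile_gloves_pair": 0.3,
        "n95_respirator": 0.4,
        "hepa_filter": 3.2,
    }
    ANTIMICROBIAL_L_PER_SQFT = {"cat2": 0.015, "cat3": 0.025}

    def antimicrobial_liters(affected_sqft, water_category, logged_liters=None):
        """Prefer logged liters (primary data); fall back to the area proxy."""
        if logged_liters is not None:
            return logged_liters
        return affected_sqft * ANTIMICROBIAL_L_PER_SQFT[water_category]

    liters = antimicrobial_liters(2400, "cat2")             # 36.0 L via proxy
    kg = liters * MATERIAL_EF["quat_antimicrobial_liter"]   # 100.8 kg CO2e
    print(f"{kg:.0f} kg CO2e")
    ```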

    Category 5: Waste Emissions

    Emission Factors by Waste Type and Disposal Method

    | Waste Type | Disposal Method | Emission Factor | Source |
    | --- | --- | --- | --- |
    | Mixed C&D debris (non-hazardous) | Landfill | 0.16 tCO2e per ton | EPA WARM v16 |
    | Contaminated porous materials (Cat 2) | Landfill (standard) | 0.18 tCO2e per ton | EPA WARM v16 + contamination premium |
    | Contaminated porous materials (Cat 3) | Landfill (regulated) | 0.22 tCO2e per ton | EPA WARM v16 + hazmat transport |
    | Disposable PPE and consumables | Landfill | 0.25 tCO2e per ton | EPA WARM v16 — mixed plastics |
    | Contaminated water (Cat 3) | Municipal wastewater treatment | 0.000272 kg CO2e per liter | EPA WARM v16 — wastewater treatment |
    | Contaminated water (Cat 3) | Permitted treatment facility (tanker) | 0.000272 kg CO2e per liter + transport | EPA WARM + tanker transport |

    Estimating waste weight when you don’t have disposal receipts: Use 2.5 lbs per sq ft of demolished drywall (standard 1/2″ drywall), 3.0 lbs per sq ft of demolished flooring (carpet + pad), 0.8 lbs per sq ft of demolished wood subfloor. For Cat 3 contaminated water: estimate from extractor tank fill cycles × tank capacity.
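    Where only demolition areas are documented, the proxy rates above convert square footage to tons directly. A minimal sketch with generic inputs (names are illustrative):

    ```python
    # Category 5 weight-estimation sketch using the per-sq-ft proxies above.
    LBS_PER_SQFT = {"drywall": 2.5, "carpet_and_pad": 3.0, "wood_subfloor": 0.8}

    def debris_tons(demolished_sqft: dict) -> float:
        """Estimate debris weight in tons (2,000 lbs) from demolished areas."""
        lbs = sum(LBS_PER_SQFT[m] * sqft for m, sqft in demolished_sqft.items())
        return lbs / 2000.0

    tons = debris_tons({"drywall": 500, "carpet_and_pad": 300})   # 1.08 tons
    print(f"{tons:.2f} tons x 0.18 tCO2e/ton = {tons * 0.18:.2f} tCO2e")  # Cat 2 landfill
    ```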

    Category 12: Demolished Building Materials

    | Material | tCO2e per ton (landfill) | tCO2e per ton (recycled) | Source |
    | --- | --- | --- | --- |
    | Gypsum drywall | 0.16 | 0.02 | EPA WARM v16 |
    | Carpet + pad | 0.33 | 0.05 | EPA WARM v16 |
    | Hardwood flooring | -0.12 (carbon storage credit) | -0.18 | EPA WARM v16 |
    | Vinyl/LVP flooring | 0.28 | 0.08 | EPA WARM v16 — plastics |
    | Ceramic tile | 0.04 | 0.01 | EPA WARM v16 — inert material |
    | Fiberglass batt insulation | 0.33 | 0.05 | EPA WARM v16 |
    | Cellulose insulation | 0.06 | -0.02 | EPA WARM v16 |
    | Dimensional lumber (framing) | -0.07 (carbon storage credit) | -0.15 | EPA WARM v16 |

    Important: Negative values for wood-based materials reflect carbon storage credits under EPA WARM methodology — lumber and hardwood store carbon that is not immediately released when landfilled. Apply these credits only if the material is being landfilled rather than incinerated.
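    A short sketch of the Category 12 lookup, including the negative wood credits. The landfill/recycled pairs come from the table above; the names are illustrative:

    ```python
    # Category 12 sketch: (landfill, recycled) tCO2e per ton from the table above.
    # Negative values are EPA WARM carbon storage credits for wood products.
    C12_EF = {
        "gypsum_drywall": (0.16, 0.02),
        "carpet_and_pad": (0.33, 0.05),
        "hardwood_flooring": (-0.12, -0.18),
        "dimensional_lumber": (-0.07, -0.15),
    }

    def c12_tco2e(material: str, tons: float, recycled: bool = False) -> float:
        landfill_ef, recycled_ef = C12_EF[material]
        return tons * (recycled_ef if recycled else landfill_ef)

    print(c12_tco2e("gypsum_drywall", 0.91))      #  0.146 tCO2e
    print(c12_tco2e("dimensional_lumber", 2.0))   # -0.14 tCO2e (storage credit)
    ```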

    Complete Worked Example: Category 2, Class 3 Commercial Water Loss

    Job profile: Washing machine discharge line failure (gray water, hence Category 2), 2,400 sq ft commercial office, second floor. Affected area includes cubicle space and server room (contents moved). Required demolition: 800 sq ft drywall, 600 sq ft carpet. Crew: 2 technicians, 3-day mitigation. Your facility is 24 miles from the job site.

    Category 4 — Transportation

    2 light trucks × 48 miles round trip × 4 trips (initial, day 2, day 3, equipment pickup) = 384 vehicle-miles
    384 × 0.503 kg CO2e/mile = 193 kg CO2e

    1 equipment trailer (dehumidifiers, air movers) × 48 miles × 2 trips (drop-off + pickup) = 96 vehicle-miles
    96 × 1.084 kg CO2e/mile = 104 kg CO2e

    1 dump truck for debris × 14 miles to transfer station × 1 trip = 14 vehicle-miles
    14 × 2.25 kg CO2e/mile = 32 kg CO2e

    Equipment power source: building electrical supply (Scope 2 — property owner, not included here)

    Category 4 total: 329 kg CO2e = 0.33 tCO2e

    Category 1 — Materials

    Quaternary ammonium antimicrobial: 2,400 sq ft × 0.015 L/sq ft = 36 liters × 2.8 kg CO2e/L = 101 kg CO2e

    PPE: 2 technicians × 3 days × 2 Tyvek suits/day = 12 suits × 1.2 kg = 14 kg; 2 × 3 × 4 glove pairs = 24 pairs × 0.3 kg = 7 kg; 2 × 3 × 2 N95 = 12 respirators × 0.4 kg = 5 kg. PPE total: 26 kg CO2e

    HEPA filter replacement (2 air scrubbers, 1 filter change each): 2 × 3.2 kg = 6 kg CO2e

    Category 1 total: 133 kg CO2e = 0.13 tCO2e

    Category 5 — Waste

    C&D debris (wet materials, Cat 2 contaminated): estimated 1.2 tons (800 sq ft drywall at 2.5 lbs/sq ft = 2,000 lbs; carpet remnants ~400 lbs)
    1.2 tons × 0.18 tCO2e/ton = 0.22 tCO2e

    Disposable PPE and consumables: ~0.05 tons × 0.25 tCO2e/ton = 0.01 tCO2e

    Category 5 total: 0.23 tCO2e

    Category 12 — Demolished Building Materials

    800 sq ft drywall demolished: 800 × 2.5 lbs = 2,000 lbs = 0.91 tons × 0.16 tCO2e/ton = 0.15 tCO2e

    600 sq ft carpet + pad: 600 × 3.0 lbs = 1,800 lbs = 0.82 tons × 0.33 tCO2e/ton = 0.27 tCO2e

    Category 12 total: 0.42 tCO2e

    Job Total

    | Category | tCO2e |
    | --- | --- |
    | Category 4 — Transportation | 0.33 |
    | Category 1 — Materials | 0.13 |
    | Category 5 — Waste disposal | 0.23 |
    | Category 12 — Demolished materials | 0.42 |
    | Total | 1.11 |

    This figure — 1.11 tCO2e — is what goes in the Category 4, 1, 5, and 12 rows of the RCP Job Carbon Report delivered to the property manager. The spend-based estimate for a $28,000 job like this (using EPA Services to Buildings factor of approximately 0.10 kg CO2e per dollar) would produce 2.8 tCO2e — more than 2.5x the actual calculated figure. This is why primary data matters.
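    For comparison's sake, a quick sketch that reproduces this job total and the spend-based fallback side by side (the 0.10 kg CO2e per dollar intensity is the approximate EPA services-to-buildings factor cited above):

    ```python
    # Rollup of the worked example vs. the spend-based estimate.
    category_totals = {"cat4": 0.33, "cat1": 0.13, "cat5": 0.23, "cat12": 0.42}
    calculated = sum(category_totals.values())        # 1.11 tCO2e

    invoice_usd = 28_000
    spend_based = invoice_usd * 0.10 / 1000           # 2.8 tCO2e
    print(f"calculated {calculated:.2f} t vs spend-based {spend_based:.1f} t "
          f"= {spend_based / calculated:.1f}x overstatement")
    ```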

    What is the single most important data point to capture for accurate water damage Scope 3 calculation?

    Vehicle mileage. Transportation is typically the largest single emission source and is the most accurately calculated when mileage is documented. All other data points can be estimated from proxies, but vehicle mileage should be captured from actual dispatch records or GPS fleet data for every job.

    Can I use the same emission factors for all antimicrobial products?

    The EPA EEIO factor for chemical manufacturing (2.8 kg CO2e/liter for quaternary ammonium compounds) is an appropriate default for most antimicrobial treatments. Hydrogen peroxide-based products have a lower factor (1.9 kg CO2e/liter). If your company has specific product lifecycle assessment data, use that in place of the EEIO factor and note the source in your data quality section.

    How do I handle a multi-week job that spans two calendar years?

    Calculate total emissions for the full job and report the portion attributable to each calendar year based on the percentage of work performed in each year. For most clients, the simpler approach is to report the full job total in the year the job was completed — check with your client’s ESG team which convention they prefer for their Scope 3 inventory.


    Antimicrobial and Chemical Emission Factors: Updated Methodology

    The EPA EEIO chemical manufacturing factor used in the Category 1 table above is an economic input-output proxy — useful for estimation but not sourced to the actual chemistry. The following replaces or supplements those values where peer-reviewed lifecycle data now exists.

    Hydrogen Peroxide-Based Antimicrobials

    H₂O₂ is the only restoration antimicrobial with published lifecycle assessment data. The anthraquinone auto-oxidation production process yields 1.33 kg CO₂e per kg of active H₂O₂ (ACS Omega, 2025); the ecoinvent European market average is 1.79 kg CO₂e per kg based on eight producers. For diluted restoration products (typically 3–7.5% concentration), the per-liter emission scales proportionally. A gallon of 7.5% H₂O₂ antimicrobial contains approximately 0.28 kg of active ingredient, yielding roughly 0.37–0.50 kg CO₂e per gallon of diluted product — substantially lower than the EPA EEIO proxy of 1.9 kg CO₂e/liter previously used. Update your calculations accordingly.
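    A sketch of that dilution arithmetic, assuming a product density of roughly 1 kg per liter (the same assumption behind the 0.28 kg figure above):

    ```python
    # Scale the per-kg H2O2 LCA factor to a diluted product.
    LITERS_PER_GALLON = 3.785

    def kg_co2e_per_gallon(concentration: float, ef_per_kg_active: float) -> float:
        """Assumes ~1 kg/L product density, so kg active = liters x concentration."""
        return LITERS_PER_GALLON * concentration * ef_per_kg_active

    print(kg_co2e_per_gallon(0.075, 1.33))  # ~0.38 kg CO2e/gal (ACS Omega factor)
    print(kg_co2e_per_gallon(0.075, 1.79))  # ~0.51 kg CO2e/gal (ecoinvent factor)
    ```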

    Quaternary Ammonium Compounds (QACs)

    No cradle-to-gate lifecycle assessment has been published for quaternary ammonium compound production as of April 2026. QACs are petrochemical-derived surfactants manufactured via chloromethane reactions with tertiary amines. The EPA EEIO factor of 2.8 kg CO₂e/liter remains the only available proxy. Flag all QAC calculations as EPA EEIO estimated in the data_quality section of any RCP Job Carbon Report delivered to clients facing SBTi or CSRD verification requirements. The RCP will update this factor when manufacturer-specific LCA data becomes available.

    Botanical Antimicrobials (Thymol-Based Products)

    Products such as Benefect Decon 30 (thymol active ingredient) carry USDA BioPreferred certification and UL EcoLogo status but no published LCA emission factor as of April 2026. Essential oil distillation is energy-intensive with extremely low extraction yields (1–2% from plant material). The RCP treats botanical antimicrobials as a data gap requiring manufacturer EPD documentation. In the absence of manufacturer data, apply the QAC proxy (2.8 kg CO₂e/liter) and flag as estimated.


    Truck-Mounted Extraction Unit: Fuel Consumption Reference Data

    Truck-mounted extraction units operate on dedicated gasoline or diesel engines separate from the vehicle drivetrain. The fuel consumed during extraction operations is a direct Scope 3 Category 4 emission source. Manufacturer specifications and field-reported consumption rates:

    | Unit / Engine | Fuel | Consumption Rate | kg CO₂ per hour |
    | --- | --- | --- | --- |
    | Prochem Peak 500 (Kawasaki FD851D-DFI, 31 HP) | Gasoline | ~1.0 gal/hr | 8.9 |
    | Prochem Everest 870HP (Kubota 75 HP) | Gasoline | 1.5–2.5 gal/hr | 13.3–22.2 |
    | Standard slide-in truckmount (industry consensus) | Gasoline | ~1.0 gal/hr | 8.9 |
    | PTO-driven van-powered (e.g., HydraMaster CDS 4.8) | Gasoline | +1–2 gal/hr above idle | 8.9–17.8 (incremental) |

    RCP proxy for truckmount extraction: 1.0 gallon gasoline per hour of extraction unit operation (8.9 kg CO₂ per hour). A 4-hour extraction job on a standard truckmount generates approximately 35.6 kg CO₂ from the unit alone — independent of the vehicle transportation emissions calculated under Category 4. Log extraction start/stop times in the job record.

    Capture actual fuel consumption from fuel receipts where possible. Where runtime-only is documented, apply the proxy. Flag as proxy in the data_quality section.
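    A sketch of that decision logic, which applies the runtime proxy only when fuel receipts are absent and tags data quality accordingly (function and flag names are illustrative):

    ```python
    # Truckmount fuel sketch: prefer receipts (primary), else the runtime proxy.
    KG_CO2_PER_GAL_GASOLINE = 8.9
    PROXY_GAL_PER_HOUR = 1.0

    def truckmount_kg_co2(runtime_hours: float, fuel_gallons: float | None = None):
        """Return (kg CO2, data-quality flag) for extraction unit operation."""
        if fuel_gallons is not None:
            return fuel_gallons * KG_CO2_PER_GAL_GASOLINE, "primary"
        gallons = runtime_hours * PROXY_GAL_PER_HOUR
        return gallons * KG_CO2_PER_GAL_GASOLINE, "proxy"

    kg, quality = truckmount_kg_co2(4.0)   # (35.6, "proxy") for a 4-hour extraction
    ```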


    Refrigerant Considerations: LGR Dehumidifiers and Fugitive Emissions

    Commercial LGR dehumidifiers contain refrigerant charges that are potential Scope 3 emission sources if units are serviced, recharged, or have fugitive leaks. This is not a required RCP data point in v1.0 but is disclosed here for methodological completeness and for contractors with SBTi-committed clients.

    Refrigerant Charge Data by Unit Type

    | Unit | Refrigerant | GWP-100 (AR6) | Approx. Charge |
    | --- | --- | --- | --- |
    | Phoenix DryMAX XL (125 ppd) | R-410A | 2,256 | ~0.68 kg |
    | Phoenix DryMAX (80 ppd) | R-410A | 2,256 | ~0.54 kg |
    | Dri-Eaz Revolution LGR (140 ppd) | R-410A | 2,256 | ~0.60 kg |
    | Dri-Eaz LGR 6000i | R-32 | 771 | Not published |

    The Dri-Eaz LGR 6000i is the first major restoration dehumidifier using R-32, a refrigerant with a GWP of 771 under IPCC AR6 — representing a 63–67% reduction in refrigerant climate impact compared to R-410A units. This is relevant for the RCP Carbon Reduction Playbook: equipment replacement cycles that prioritize R-32 or R-454B (GWP ~530) units over R-410A materially reduce the fugitive emission exposure of a restoration fleet.

    Fugitive emission screening: The EPA default annual leak rate for sealed hermetic refrigeration equipment (residential/commercial A/C) is 10% of charge capacity. For a Dri-Eaz Revolution LGR with 0.60 kg R-410A at the 10% screening rate, the annual fugitive contribution would be 0.06 kg × 2,256 GWP = 135 kg CO₂e per unit per year. Actual leak rates for sealed hermetic dehumidifier compressors are likely 1–5% annually. Contractors are not required to calculate refrigerant emissions under RCP v1.0 but should document unit refrigerant type for RCP v1.1 compliance.
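    The screening arithmetic itself is one line: charge × annual leak rate × GWP. A sketch with the unit data above:

    ```python
    # Fugitive refrigerant screening: charge (kg) x annual leak rate x GWP-100.
    def fugitive_kg_co2e_per_year(charge_kg: float, gwp: float,
                                  leak_rate: float = 0.10) -> float:
        return charge_kg * leak_rate * gwp

    print(fugitive_kg_co2e_per_year(0.60, 2256))  # ~135 kg CO2e/yr, R-410A LGR at 10%
    print(fugitive_kg_co2e_per_year(0.60, 771))   #  ~46 kg CO2e/yr, same charge of R-32
    ```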


    Wastewater Extraction: Methodological Note

    Extracted water discharged to municipal sanitary sewer generates indirect emissions at the wastewater treatment facility. Based on Metropolitan Water Reclamation District energy intensity data (Elevate Energy, 2018), the national average wastewater treatment energy intensity is approximately 1,978 kWh per million gallons treated, yielding 0.00074 kg CO₂e per gallon discharged at the national grid emission factor. A typical water damage extraction of 500–2,000 gallons produces only 0.37–1.48 kg CO₂e for wastewater treatment — under 0.5% of total job emissions on most jobs. The RCP excludes this source from required calculation in v1.0 but acknowledges it here for methodological completeness and CSRD-grade reporting contexts.
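    A sketch of the screening estimate; the grid factor here (~0.374 kg CO2e per kWh) is an assumption backed out of the 0.00074 kg-per-gallon figure above, not an RCP-specified value:

    ```python
    # Wastewater-treatment screening: gallons x energy intensity x grid factor.
    KWH_PER_GALLON = 1978 / 1_000_000          # 1,978 kWh per million gallons
    GRID_KG_CO2E_PER_KWH = 0.374               # assumed national-average grid factor

    def wastewater_kg_co2e(gallons_discharged: float) -> float:
        return gallons_discharged * KWH_PER_GALLON * GRID_KG_CO2E_PER_KWH

    print(f"{wastewater_kg_co2e(500):.2f}-{wastewater_kg_co2e(2000):.2f} kg CO2e")
    # 0.37-1.48 kg CO2e for a typical 500-2,000 gallon extraction
    ```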


  • Introducing the Restoration Carbon Protocol: An Industry Self-Standard for Scope 3 Reporting

    Introducing the Restoration Carbon Protocol: An Industry Self-Standard for Scope 3 Reporting

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    There is no industry standard for how a restoration contractor should calculate, document, and report the carbon emissions from their work. Not from IICRC. Not from RIA. Not from any trade association or certifying body in the restoration industry.

    That absence is becoming a problem. Commercial property managers are facing mandatory Scope 3 emissions disclosures — and restoration contractor activity is squarely in their value chain. Insurance carriers are building ESG criteria into preferred vendor programs. FEMA and federal contracting bodies are increasingly asking about emissions documentation for large-scale disaster response contracts.

    When your clients need Scope 3 data from you and there’s no standard for what that data should include or how it should be calculated, everyone loses. The property manager files an inaccurate disclosure. The contractor gets treated as a data gap. The auditor flags the methodology. Nobody benefits.

    The Restoration Carbon Protocol exists to fix that.

    What the Restoration Carbon Protocol Is

    The Restoration Carbon Protocol (RCP) is an industry self-standard for Scope 3 emissions calculation, documentation, and reporting specific to property restoration work. It is built on the GHG Protocol Corporate Value Chain Standard — the globally accepted framework for Scope 3 accounting — and adapted to the specific job types, material categories, waste streams, and operational patterns of the restoration industry.

    RCP v1.0 will cover five core restoration job types: water damage mitigation, fire and smoke restoration, mold remediation, asbestos and hazmat abatement, and biohazard cleanup. For each job type, the protocol defines:

    • Which GHG Protocol Scope 3 categories are relevant
    • What data points need to be captured per job
    • What calculation methodology to use for each emissions source
    • What emission factors apply, sourced from EPA, DEFRA, and ecoinvent databases
    • What the output format looks like for client delivery

    The output is a per-job carbon report — a standardized one-page document any restoration contractor can complete and provide to their commercial clients for their GRESB, CDP, or SB 253 disclosure.

    Why a Self-Standard and Not a Trade Association Standard

    Trade association standards take years to develop through committee processes. The 2027 deadline doesn’t allow for that timeline. Commercial property managers need something workable now — in 2025 and 2026, as they build their data collection infrastructure ahead of the first required filings.

    A published, rigorous, publicly available self-standard that is built on GHG Protocol methodology and uses credible emission factors is more useful to the market right now than a committee process that might produce something better in 2028. The goal of RCP is not to be the final word — it’s to be the first rigorous word, and to create the foundation that a trade association standard can build on when the bandwidth exists.

    Self-published standards have established category leadership in other industries. The GHG Protocol itself started as a self-published standard by the World Resources Institute and the World Business Council for Sustainable Development before becoming the global norm. The precedent for rigorous self-published standards setting the terms of an industry conversation is well-established.

    The 30-Day Build

    RCP v1.0 is being built over 30 days through a structured series of knowledge nodes — each one establishing a piece of the technical framework, validated against GHG Protocol methodology, and published here on Tygart Media as it’s completed.

    The publication sequence runs from foundation (what Scope 3 is and why it matters for restoration) through technical framework (job-type-specific calculation methodologies) to commercial application (how to use the framework with clients and in RFP responses) to the full framework document publication.

    The Restoration Golf League network of independent restoration contractors will serve as the pilot cohort — providing feedback on the calculation methodology, testing the per-job carbon report format against their actual job data, and validating that the framework is workable for contractors who are running businesses, not sustainability departments.

    How to Get Involved

    If you are a restoration contractor who wants to be involved in the RCP pilot, a commercial property manager looking for Scope 3 data from your restoration vendor network, an ESG consultant working with commercial real estate clients, or an insurance carrier building ESG criteria into your preferred vendor program — this standard is being built with your needs in mind.

    The RCP framework will be published open-access. The knowledge nodes building toward it are published here as they’re completed. Follow along, contribute feedback, and contact Tygart Media if you want to be part of the pilot cohort that validates the framework before v1.0 publication.

    What is the Restoration Carbon Protocol?

    An industry self-standard for calculating, documenting, and reporting Scope 3 emissions from property restoration work. Built on GHG Protocol methodology, covering five core restoration job types, producing a standardized per-job carbon report that contractors can provide to commercial clients for their ESG disclosures.

    Who is building the Restoration Carbon Protocol?

    Tygart Media, in collaboration with the Restoration Golf League contractor network. The framework is being developed through a 30-day structured publication process with input from restoration contractors, commercial property managers, and ESG practitioners.

    Why isn’t a trade association building this standard?

    Trade association standards take years through committee processes. The 2027 deadline requires something workable now. A rigorous self-published standard built on GHG Protocol methodology creates the foundation that a formal trade association process can build on.

    Will the RCP be free to use?

    Yes. The framework will be published open-access. The goal is adoption, not monetization of the standard itself. Value accrues to contractors who adopt it early and build it into their commercial service offering.


  • The 2027 Deadline: What California SB 253 Means for Your Restoration Business

    The 2027 Deadline: What California SB 253 Means for Your Restoration Business

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    California Senate Bill 253 — the Climate Corporate Data Accountability Act — is the most significant climate disclosure law in US history. It applies to public and private companies with over $1 billion in annual revenue that do business in California. It requires them to disclose Scope 1 and 2 emissions starting in 2026 and Scope 3 emissions starting in 2027. More than 5,000 companies fall within its scope.

    Those companies include most of the institutional property owners, REITs, hospital systems, hotel chains, university systems, and commercial real estate operators that hire restoration contractors for their facilities. When they disclose their Scope 3 emissions in 2027, your work will be part of what they’re accounting for.

    What SB 253 Actually Requires

    SB 253 requires covered companies to publish annual GHG emissions reports, verified by an independent third party, using the GHG Protocol Corporate Standard methodology. The Scope 3 reporting requirement — which takes effect for the 2027 reporting year — means companies must inventory and disclose emissions across all relevant value chain categories, including emissions from their contractors and suppliers.

    The California Air Resources Board (CARB) is developing implementing regulations that will specify the exact requirements. What’s already clear from the statute is that companies cannot simply exclude contractor emissions because data is hard to collect — they must make good-faith efforts to obtain primary data from their supply chain, and where primary data isn’t available, they must use approved estimation methodologies.

    The third-party verification requirement is significant. Unlike voluntary ESG reporting where companies self-certify their numbers, SB 253 disclosures will be reviewed by independent auditors. That means the quality of the underlying data — including contractor-provided emissions data — will be scrutinized in a way it hasn’t been before.

    The Timeline That Matters for Contractors

    The 2027 reporting year means companies will begin collecting 2027 emissions data in early 2027 and filing reports by the deadline established in CARB regulations. To provide verified, primary-data emissions figures from their restoration contractors, property managers need to have data collection processes in place before the jobs happen — not after.

    That means the real action window for restoration contractors is now. Property managers who are serious about their SB 253 compliance are already building vendor data collection systems and ESG questionnaires. Contractors who can respond to those questionnaires with actual per-job emissions data will be in a materially different position than contractors who can’t.

    The companies that are largest in terms of SB 253 coverage — large REITs, national property management companies, institutional operators — are the ones most likely to make ESG data capability a formal criterion in vendor selection. They’re also the clients where losing a preferred vendor designation costs the most.

    What SB 253 Means Beyond California

    California’s disclosure laws have historically set national standards. SB 253 applies to companies “doing business in California” — which includes companies headquartered elsewhere that have California operations or customers. Many of the large commercial real estate operators that SB 253 covers operate nationally, which means their vendor data requirements will apply nationally even if the law itself is California-specific.

    The EU’s Corporate Sustainability Reporting Directive (CSRD) is already in effect and is pulling US companies with European operations into Scope 3 reporting as well. The direction of travel is global and accelerating regardless of what happens with US federal climate policy.

    For restoration contractors that do any commercial work with institutional property owners, the 2027 deadline should be on their planning horizon now — not in 2026 when their largest clients are scrambling to collect data before the filing deadline.

    What is California SB 253?

    The Climate Corporate Data Accountability Act, signed in 2023. It requires companies with over $1 billion in annual revenue doing business in California to report Scope 1 and 2 emissions starting 2026 and Scope 3 emissions starting 2027, verified by an independent third party using the GHG Protocol methodology.

    How many companies does SB 253 affect?

    More than 5,000 companies. Critically, the law applies to companies “doing business in California” regardless of where they are headquartered — capturing national and multinational companies with California operations or customers.

    Does SB 253 directly require restoration contractors to report emissions?

    Not directly — the law applies to companies with over $1 billion in revenue. But those companies must collect Scope 3 emissions data from their supply chain, which includes restoration contractors. The obligation on the contractor is indirect but practically significant for commercial work.

    What happens if a restoration contractor can’t provide emissions data to their commercial clients?

    The property manager will use spend-based estimates instead, which are less accurate and more difficult to defend in a third-party audit. Over time, inability to provide primary emissions data is likely to become a disadvantage in commercial vendor selection processes.


  • The GHG Protocol’s 15 Scope 3 Categories: Which Ones Apply to Restoration Work

    The GHG Protocol’s 15 Scope 3 Categories: Which Ones Apply to Restoration Work

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    The GHG Protocol Corporate Value Chain Standard — the framework that governs Scope 3 emissions accounting globally — defines 15 categories of indirect emissions across the upstream and downstream value chain. Understanding which of these categories apply to restoration work is the first step in building a calculation methodology that ESG auditors will accept.

    Restoration work is unusual in that it touches multiple categories simultaneously. A single significant job can generate measurable emissions across four or more categories — which is exactly why restoration needs its own calculation framework rather than a generic contractor template.

    The Four Primary Categories for Restoration Work

    Category 1 — Purchased Goods and Services

    This category covers the emissions associated with producing the goods and services a company purchases. For a commercial property manager hiring a restoration contractor, this means the emissions embedded in everything the contractor uses on the job: antimicrobial treatments, drying agents, HEPA filters, packaging materials, replacement drywall, subflooring materials.

    In practice, Category 1 is the hardest to calculate precisely because it requires knowing the embodied carbon of specific materials. The Restoration Carbon Protocol approach uses established emission factor databases (EPA, ecoinvent) to assign representative values to the most common restoration material categories, allowing contractors to calculate Category 1 contributions from their materials list without commissioning a lifecycle assessment.

    Category 4 — Upstream Transportation and Distribution

    This category covers transportation emissions upstream of the reporting company — meaning the emissions from moving goods and equipment to the job site. For restoration contractors, this primarily means vehicle fleet emissions: the fuel burned driving trucks, vans, and equipment trailers to the loss site and back.

    Category 4 is typically the easiest restoration emissions category to calculate. Vehicle emissions can be calculated from fuel consumption records or from mileage multiplied by vehicle-type emission factors. Most fleet management systems already capture this data.

    Category 5 — Waste Generated in Operations

    This category covers emissions from waste generated during the contractor’s service delivery — the debris, damaged materials, contaminated water, and hazardous materials that restoration work produces and that are disposed of on behalf of the property owner.

    Category 5 is highly variable by job type. A Category 3 water loss with sewage contamination generates different waste streams than a Category 1 clean water extraction. A fire loss generates smoke-contaminated debris with different disposal requirements than mold remediation waste. The Restoration Carbon Protocol maps waste types by job category to appropriate disposal emission factors from EPA and industry waste management data.

    Category 12 — End-of-Life Treatment of Sold Products

    This category applies when restoration work involves removing and disposing of building components — flooring, drywall, insulation, ceiling tiles, cabinetry — that are treated as end-of-life materials. The emissions from disposing of these materials are counted here rather than in Category 5 when the materials originated as “sold products” rather than process waste.

    For large reconstruction-phase restoration projects, Category 12 can be a significant emissions source. The distinction between Category 5 and Category 12 matters for accurate reporting; the Restoration Carbon Protocol provides decision criteria for classifying demolition debris correctly.

    Two Secondary Categories That Apply in Specific Situations

    Category 2 — Capital Goods

    Relevant when restoration work involves the purchase and installation of new equipment on behalf of the property — replacement HVAC components, new water heaters, emergency generators. The embodied carbon of newly installed capital equipment counts under this category for the property manager’s disclosure.

    Category 13 — Downstream Leased Assets

    Relevant for property management companies that own the buildings being restored. When restoration work affects leased spaces and the property manager is accounting for emissions from tenant operations, the restoration work’s contribution to improving (or temporarily worsening) building energy performance can affect Category 13 calculations.

    The Practical Implication for Contractors

    The four primary categories — 1, 4, 5, and 12 — are present in virtually every significant restoration job. A contractor who can calculate and report emissions in these four categories for each job has 85 to 90 percent of what most commercial property managers need for their Scope 3 disclosure.

    The Restoration Carbon Protocol v1.0 focuses exclusively on these four categories, with secondary categories addressed in supplemental guidance. The goal is a framework that produces defensible, auditor-acceptable numbers from data that restoration contractors already capture in their job management systems.

    How many GHG Protocol Scope 3 categories apply to restoration work?

    At minimum four primary categories on most significant jobs: Category 1 (purchased goods and services), Category 4 (upstream transportation), Category 5 (waste generated in operations), and Category 12 (end-of-life treatment of materials). Two additional categories apply in specific situations.

    Which Scope 3 category covers the emissions from driving to job sites?

    Category 4 — Upstream Transportation and Distribution. Vehicle emissions from driving to and from job sites are typically the easiest restoration emissions to calculate and are often the largest single category for smaller jobs.

    How are waste disposal emissions classified?

    Process waste from restoration operations falls under Category 5 (Waste Generated in Operations). Building materials removed and disposed of during reconstruction may fall under Category 12 (End-of-Life Treatment of Sold Products). The Restoration Carbon Protocol provides decision criteria for classifying demolition debris correctly.

    What is the Restoration Carbon Protocol’s approach to Category 1 materials emissions?

    Rather than requiring lifecycle assessments, the RCP uses established emission factor databases (EPA EEIO, ecoinvent) to assign representative carbon intensities to common restoration material categories, allowing calculation from a standard materials list.


  • How Commercial Property Managers Are Counting Your Emissions (Whether You Know It or Not)

    How Commercial Property Managers Are Counting Your Emissions (Whether You Know It or Not)

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    When a commercial property manager reports their Scope 3 emissions to GRESB, CDP, or their California SB 253 auditor, they need to account for the emissions from every significant supplier and contractor in their value chain. That includes their restoration contractors.

    The problem: most restoration contractors don’t track or report their emissions. So property managers are using a fallback method that produces high-uncertainty estimates — and that method systematically misrepresents what restoration work actually emits.

    The Spend-Based Estimation Method

    When primary data — actual measured emissions from a specific supplier — isn’t available, the GHG Protocol allows companies to use a spend-based estimation method. The formula is simple: multiply what you paid a supplier by an industry-average emissions intensity factor (measured in kilograms of CO2 equivalent per dollar spent in that industry), and that becomes your estimate of that supplier’s contribution to your Scope 3.

    For example: a property manager paid a restoration contractor $85,000 for a water damage remediation. Using the EPA’s industry-average emissions factor for “services to buildings and dwellings,” they estimate the Scope 3 emissions from that engagement as approximately 8.5 metric tons of CO2 equivalent.

    That number may be wildly inaccurate. It might be double the actual emissions. It might be half. The spend-based method doesn’t account for job type, geographic location, crew size, equipment used, materials consumed, or waste generated. It treats an $85,000 carpet cleaning the same as an $85,000 Category 3 sewage backup remediation with hazmat disposal — because both cost $85,000.
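    The whole method fits in one line of arithmetic, which is exactly the problem. A sketch (the 0.10 kg CO2e per dollar intensity is the approximate EPA services-to-buildings factor):

    ```python
    # Spend-based Scope 3 estimate: invoice dollars x industry-average intensity.
    def spend_based_tco2e(invoice_usd: float, kg_co2e_per_usd: float = 0.10) -> float:
        return invoice_usd * kg_co2e_per_usd / 1000

    print(spend_based_tco2e(85_000))   # 8.5 tCO2e -- same answer for any job type
    ```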

    Why Property Managers Are Stuck With This Method

    The GHG Protocol is explicit that primary data — actual emissions data provided by the supplier — is preferred over spend-based estimates. Primary data produces more accurate disclosures, reduces auditor scrutiny, and demonstrates genuine supply chain engagement to investors and regulators.

    But primary data requires the contractor to track and report their emissions per job. Almost no restoration contractors do this. So property managers default to spend-based estimates not because they prefer them, but because they have no alternative.

    This creates a specific problem for restoration contractors who want to compete for commercial work: the property manager’s ESG team sees your company as an uncontrolled data gap in their Scope 3 inventory. That’s not a comfortable position to occupy when they’re selecting preferred vendors for their next contract cycle.

    What Happens When You Provide Primary Data

    When a restoration contractor provides actual emissions data per job — even a simple calculation using documented emission factors for their equipment, vehicles, and materials — several things change for the property manager:

    Their Scope 3 disclosure becomes more accurate and more defensible to auditors. Their ESG report can distinguish between a high-emissions fire restoration project and a low-emissions water extraction job, rather than treating them identically based on invoice amount. They can demonstrate to investors and regulators that they have active supply chain engagement on emissions — one of the specific data quality improvements that frameworks like GRESB reward.

    From the contractor’s perspective, providing primary data changes the relationship. You’re no longer a vendor they’re estimating around — you’re a supply chain partner who is actively contributing to the accuracy of their ESG disclosure. That’s a different conversation in a contract renewal discussion.

    The Standard That Doesn’t Exist Yet

    The missing piece is a standardized methodology for calculating restoration-specific emissions per job — one that is rigorous enough for ESG auditors to accept, simple enough for restoration contractors to actually use, and consistent enough that a property manager with multiple restoration vendors can aggregate data from all of them in a compatible format.

    The Restoration Carbon Protocol is being built to be that standard. The goal is a per-job carbon report that any restoration contractor can complete using data they already capture in their job management systems — and that any commercial property manager can plug directly into their GRESB or CDP disclosure without additional processing.

    How do commercial property managers currently estimate restoration contractor emissions?

    Most use a spend-based estimation method — multiplying contractor invoices by industry-average emissions intensity factors from sources like the EPA or EXIOBASE. This produces high-uncertainty estimates that don’t account for job type, equipment, materials, or waste streams specific to restoration work.

    Is spend-based estimation accurate for restoration work?

    No. It treats all restoration spending as equivalent regardless of job type, scope, or actual emissions profile. A $50,000 water extraction and a $50,000 fire debris removal generate very different emissions, but spend-based estimation produces the same number for both.

    Why can’t property managers just ask their restoration contractors for emissions data?

    Most restoration contractors don’t track per-job emissions data and there is no industry standard for what that data should include or how it should be calculated. The Restoration Carbon Protocol is being developed to create that standard.

    What is primary data in Scope 3 reporting?

    Primary data is actual emissions data provided by a supplier, based on measured or calculated emissions from their specific activities. The GHG Protocol prefers primary data over spend-based estimates because it produces more accurate disclosures and is more defensible in audits.


  • What Is Scope 3 and Why Restoration Contractors Need to Care

    What Is Scope 3 and Why Restoration Contractors Need to Care

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    If you run a restoration company and nobody has mentioned Scope 3 emissions to you yet, that’s about to change. Commercial property managers, REITs, hospital systems, and institutional facility directors are all facing mandatory ESG reporting deadlines — and the emissions from the contractors they hire count toward their numbers.

    Your restoration work is in their Scope 3. Whether you know it or not, whether you track it or not, your clients are being asked to account for it.

    The Three Scopes of Greenhouse Gas Emissions

    The Greenhouse Gas Protocol — the internationally accepted standard for carbon accounting — divides emissions into three categories based on where they originate in relation to the reporting organization.

    Scope 1 covers direct emissions from sources the company owns or controls. A property management company’s Scope 1 would include fuel burned in company-owned boilers, generators, and vehicles.

    Scope 2 covers indirect emissions from purchased energy — electricity, steam, heat, and cooling consumed by the organization’s buildings and operations.

    Scope 3 covers everything else: all the indirect emissions that occur in the organization’s value chain, both upstream and downstream. For a commercial real estate company, Scope 3 includes the emissions from construction and renovation work, from tenant operations in leased space, from the materials used in building maintenance — and from the restoration contractors called in when water, fire, or mold damage occurs.

    Scope 3 is where the numbers get large. For commercial real estate, Scope 3 emissions typically account for 85 to 95 percent of total reported emissions. It’s also where the data is hardest to collect — because it requires getting information from dozens or hundreds of vendors, suppliers, and contractors who may not track their own emissions at all.

    Where Restoration Contractors Appear in Scope 3

    The GHG Protocol defines 15 categories of Scope 3 emissions. Restoration work touches several of them simultaneously:

    • Category 1 — Purchased goods and services: The materials your crews use on a job — drying equipment consumables, remediation chemicals, replacement materials — generate upstream emissions that get counted in your client’s Category 1.
    • Category 4 — Upstream transportation and distribution: The emissions from driving your trucks to the job site, hauling equipment, and transporting waste to disposal facilities.
    • Category 5 — Waste generated in operations: The debris, contaminated materials, and hazardous waste generated during restoration work that gets disposed of on behalf of the property owner.
    • Category 12 — End-of-life treatment of sold products: Applies when restoration involves removing and disposing of building materials — flooring, drywall, insulation — on behalf of the property.

    A single significant water loss job touches all four of these categories. A large fire restoration project may touch additional categories depending on the scope of reconstruction work involved.

    Why This Is a 2027 Problem for Your Business

    California Senate Bill 253 — the Climate Corporate Data Accountability Act — requires companies with more than $1 billion in annual revenue doing business in California to report Scope 1 and 2 emissions starting in 2026 and Scope 3 emissions starting in 2027. More than 5,000 companies are within scope of this law.

    The EU Corporate Sustainability Reporting Directive (CSRD) is already in effect, with Scope 3 reporting requirements phasing in through 2027 for large European companies — many of which own commercial real estate and operate facilities in the United States.

    What this means practically: the commercial property managers, REITs, hospital systems, and institutional facility directors who hire restoration contractors are right now trying to figure out how to collect Scope 3 emissions data from their vendor base. They need that data to file required disclosures. If you can provide it — in a structured, consistent, usable format — you become a preferred vendor. If you can’t provide it, you become a data gap they need to work around.

    The Gap the Restoration Industry Has Not Addressed

    No major restoration trade association — not IICRC, not RIA, not RCAT — has published a Scope 3 reporting standard for restoration contractors. There is no industry-agreed methodology for calculating the emissions contribution of a water damage job, a fire restoration project, or a mold remediation. There is no standard job carbon report format that a contractor can provide to a property manager for their ESG disclosure.

    This is the void the Restoration Carbon Protocol is designed to fill. In the absence of an industry standard, each commercial property manager is either making up their own methodology, using generic spend-based estimates with high uncertainty, or simply leaving restoration contractor emissions out of their disclosure and hoping their auditors accept it.

    None of those options serve the property manager. None of them serve the contractor. And none of them serve the goal of accurate climate disclosure.

    The restoration industry has an opportunity to lead here — to define the standard before regulators or clients define it for them, and to make that standard one that is actually workable for contractors who are focused on doing restoration work, not filing emissions reports.

    What are Scope 3 emissions?

    Scope 3 emissions are indirect greenhouse gas emissions that occur in an organization’s value chain — from the goods and services they purchase, the transportation of those goods, the waste generated in their operations, and the activities of their contractors and suppliers. For commercial real estate, Scope 3 typically accounts for 85–95% of total reported emissions.

    Do restoration contractors’ emissions count in their clients’ Scope 3?

    Yes. Restoration work generates emissions from vehicle transportation, equipment fuel use, materials consumption, and waste disposal — all of which fall under specific GHG Protocol Scope 3 categories that commercial property managers are required to report.

    When do commercial property managers need to report Scope 3 emissions?

    California SB 253 requires Scope 3 reporting starting in 2027 for companies with over $1 billion in revenue doing business in California. EU CSRD is already phasing in Scope 3 requirements. Many institutional investors and ESG frameworks (GRESB, CDP) already request Scope 3 data from their portfolio companies.

    Is there currently a Scope 3 reporting standard for restoration contractors?

    No. No major restoration trade association has published a Scope 3 calculation methodology or reporting standard for restoration work. The Restoration Carbon Protocol (RCP) is being developed to fill this gap.



  • Build Your Own KnowHow — And Then Go Further

    Build Your Own KnowHow — And Then Go Further

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    KnowHow is one of the most important things happening in the restoration industry right now. If you’re not familiar with it: it’s an AI-powered platform that takes your company’s operational knowledge — your SOPs, your onboarding materials, your hard-won process documentation — and turns it into an on-demand resource every team member can access from their phone. Your best technician’s knowledge stops walking out the door when they leave. Your new hire in Iowa follows the same protocol as your veteran in Texas. Your managers stop being human FAQ machines.

    It solves a real problem that has cost restoration companies enormous amounts of money in inconsistent work, slow onboarding, and institutional knowledge that evaporates with turnover.

    But KnowHow solves the internal problem. The knowledge stays inside your organization. And there is a second problem — the external one — that nobody has solved yet.

    The Internal Problem vs. The External Problem

    The internal problem is: your people don’t have access to what your company knows when they need it. KnowHow fixes that. The knowledge becomes accessible, searchable, consistent, and deliverable at scale across every location and every shift.

    The external problem is different: your clients, prospects, and contracting authorities have no way to verify that your company knows what it claims to know. They can read your capabilities statement. They can check your certifications. They can call references. But they can’t look inside your organization and confirm that your documented protocols are current, specific, and actually practiced — not just written down for the sake of winning a bid.

    In commercial restoration, that verification gap is expensive. Facility managers, FEMA contracting officers, insurance carriers, and national property management companies are making vendor decisions based on trust signals that are largely unverifiable. The company with the best pitch often wins over the company with the best protocols.

    An external knowledge API changes that dynamic completely.

    What an External Knowledge API Actually Is

    An external knowledge API is a structured, authenticated, publicly accessible feed of your operational knowledge — not your trade secrets, not your pricing, not your internal communications, but your documented protocols, your methodology, your standards, and your verified expertise. Published. Structured. Machine-readable. Available to anyone who needs to evaluate whether your company is the right partner for a complex job.

    Think of it as the difference between telling a client “we follow IICRC S500 water damage protocols” and showing them a live, structured endpoint where they can pull your actual documented water mitigation process — with timestamps that confirm it was updated last month, not in 2019.

    The internal KnowHow platform is the source. The external API is the window — carefully curated, access-controlled, and designed to answer the questions that matter to the people evaluating you.

    Who Cares About Your External Knowledge

    The list is longer than most restoration contractors realize.

    Commercial property managers and facility directors. A national hotel chain or healthcare system evaluating restoration vendors for their approved vendor program needs more than a certificate of insurance and a reference list. They want to know that your protocols are consistent across every job, that your team follows the same process whether the project manager is on-site or not, and that your documentation standards will hold up in a claim. An external knowledge feed — showing your water damage, fire damage, and mold remediation protocols in structured, current form — answers those questions before the conversation even starts.

    FEMA and government contracting. Federal disaster response contracts are awarded to companies that can demonstrate organizational capability at scale. The RFP process rewards documentation. A company that can point to an externally published, structured knowledge base as evidence of their operational maturity is presenting something most competitors don’t have. It’s not just a differentiator — it’s proof of the kind of institutional infrastructure that large government contracts require.

    Insurance carriers and TPAs. Third-party administrators and carrier programs are increasingly using AI tools to evaluate and route claims to preferred vendors. A restoration company whose documented protocols are structured and machine-readable — available for an AI system to pull and verify against claim requirements — is positioned for the way preferred vendor selection is heading, not the way it used to work.

    Commercial real estate and institutional property owners. REITs, hospital systems, university facilities departments, and large corporate real estate portfolios are all moving toward vendor relationships that have verifiable documentation standards. An external knowledge API gives them something they can actually audit — not just a sales presentation.

    How to Build It: The Two-Layer Stack

    The stack that makes this work has two layers, and KnowHow already gives you the first one.

    Layer one — internal capture and organization (KnowHow’s job). Use KnowHow, or an equivalent internal knowledge platform, to capture and organize your operational knowledge. Document your protocols rigorously. Keep them current. Assign ownership so they don’t go stale. The discipline required here is real, but it’s also the discipline that makes your company better operationally regardless of what you do with the knowledge externally. This layer is the foundation.

    Layer two — external publication and API distribution (the next layer). Select the knowledge that is appropriate to share externally — your methodology, your standards, your certifications, your documented approach to specific job types — and publish it in a structured, consistently maintained form. This can be as simple as a well-organized section of your company website with current protocol documentation, or as sophisticated as a full REST API endpoint that clients and AI systems can query directly. The key requirements are structure (consistent format, clear categorization), currency (updated when protocols change, timestamped), and accessibility (easy for a prospect or evaluator to find and verify).

    The gap between layer one and layer two is smaller than it sounds. If you’ve already done the internal documentation work in KnowHow, the editorial work of curating an external-facing version of that knowledge is incremental. You’re not building from scratch — you’re deciding what to show and building the window to show it through.

    The Credential That No Certificate Can Replace

    Certifications are static. An IICRC certification tells a client you passed a test. It doesn’t tell them what your company actually does when a technician encounters a Category 3 water loss in a 1960s commercial building with asbestos-containing materials in the subfloor.

    External knowledge does. It shows the specific, documented, currently-maintained thinking your company applies to that situation. It’s living proof of operational maturity, not a snapshot from the last time someone studied for an exam.

    In the commercial restoration market, where the jobs are large, the documentation requirements are significant, and the clients are sophisticated, that distinction is worth money. The companies that build this layer now — while most competitors are still treating knowledge as purely internal — will have a credential that can’t be quickly replicated.

    The Practical Starting Point

    You don’t need a full API to start. The minimum viable version of an external knowledge layer is a structured, well-maintained “Our Methodology” section on your website — not a generic “our process” marketing page, but actual documented protocols organized by job type, with clear version dates and enough specificity that an evaluator can see you’ve actually done the work.

    From there, the path to a structured API is incremental: add consistent categorization, ensure each protocol document has a permanent URL, and eventually expose that structure through a queryable endpoint. Each step makes the credential more verifiable and more valuable.
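    To make “structured and machine-readable” concrete, here is an illustrative sketch of what one externally published protocol record might look like. Every field name and the URL are hypothetical; the point is consistent structure, a permanent URL, and a verifiable timestamp.

    ```python
    # Hypothetical example of a structured external protocol record.
    import json

    protocol_record = {
        "url": "https://example-restoration.com/methodology/water-mitigation-cat3",  # hypothetical
        "job_type": "water_damage_mitigation",
        "standard_reference": "IICRC S500",
        "version": "4.2",
        "last_updated": "2026-03-14",
        "owner": "Director of Operations",
        "steps": [
            "Moisture mapping and water category classification",
            "Containment and engineering controls by category",
            "Extraction, demolition scope, and drying plan",
        ],
    }
    print(json.dumps(protocol_record, indent=2))
    ```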

    KnowHow got the industry to take internal knowledge seriously. The companies that figure out how to take the next step — making that knowledge externally verifiable and machine-readable — will have something the market has never seen before in restoration.

    What is the difference between internal and external knowledge in restoration?

    Internal knowledge (what KnowHow manages) is operational documentation accessible to your own team — SOPs, onboarding materials, process guides. External knowledge is a curated version of that same expertise published in a structured, verifiable form for clients, contracting authorities, and AI systems to access and evaluate.

    Why would a restoration company publish its knowledge externally?

    Because commercial clients, FEMA, insurance carriers, and institutional property managers need to verify operational maturity before awarding contracts. A structured, current, machine-readable knowledge base is a stronger credential than certifications or capabilities statements — it shows documented, maintained expertise rather than a static snapshot.

    What is an external knowledge API for a restoration company?

    A structured, authenticated feed of your documented protocols, methodology, and standards — published in a format that clients, evaluators, and AI systems can query directly. It turns your operational knowledge into a verifiable, market-facing credential rather than keeping it purely internal.

    Who specifically benefits from a restoration company’s external knowledge API?

    Commercial facility managers building approved vendor programs, FEMA and government contracting officers evaluating organizational capability, insurance carriers and TPAs using AI tools to route claims to preferred vendors, and institutional property owners who need auditable vendor documentation standards.

    Does a restoration company need KnowHow to build an external knowledge API?

    No — any internal knowledge platform or even rigorous in-house documentation works as the foundation. KnowHow accelerates the internal capture work, which makes the external publication step more realistic. But the two-layer stack works with any internal knowledge infrastructure that produces well-documented, current, organized protocols.

  • Claude Managed Agents Pricing: $0.08/Session-Hour — Full 2026 Cost Breakdown

    Claude Managed Agents Pricing: $0.08/Session-Hour — Full 2026 Cost Breakdown

    Updated May 2026

    Pricing updated to reflect the Opus 4.7 launch ($5/$25 per MTok) and the retirement of Claude Sonnet 4 and Opus 4 on April 20, 2026. Managed Agents moved to public beta — see the complete pricing guide for current rate details.

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    $0.08 Per Session Hour: Is Claude Managed Agents Actually Cheap?

    Claude Managed Agents Pricing: $0.08 per session-hour of active runtime (measured in milliseconds, billed only while the agent is actively running) plus standard Anthropic API token costs. Idle time — while waiting for input or tool confirmations — does not count toward runtime billing.

    When Anthropic launched Claude Managed Agents on April 9, 2026, the pricing structure was clean and simple: standard token costs plus $0.08 per session-hour. That’s the entire formula.

    Whether $0.08/session-hour is cheap, expensive, or irrelevant depends entirely on what you’re comparing it to and how you model your workloads. Let’s work through the actual math.

    What You’re Paying For

    The session-hour charge covers the managed infrastructure — the sandboxed execution environment, state management, checkpointing, tool orchestration, and error recovery that Anthropic provides. You’re not paying for a virtual machine that sits running whether or not your agent is active. Runtime is measured to the millisecond and accrues only while the session’s status is running.

    This is a meaningful distinction. An agent that’s waiting for a user to respond, waiting for a tool confirmation, or sitting idle between tasks does not accumulate runtime charges during those gaps. You pay for active execution time, not wall-clock time.

    The token costs — what you pay for the model’s input and output — are separate and follow Anthropic’s standard API pricing. For most Claude models, input tokens run roughly $3 per million and output tokens roughly $15 per million, though current pricing is available at platform.claude.com/docs/en/about-claude/pricing.

    Modeling Real Workloads

    The clearest way to evaluate the $0.08/session-hour cost is to model specific workloads.

    A research and summary agent that runs once per day, takes 30 minutes of active execution, and processes moderate token volumes: runtime cost is roughly $0.04/day ($1.20/month). Token costs depend on document size and frequency — likely $5-20/month for typical knowledge work. Total cost is in the range of $6-21/month.

    A batch content pipeline running several times weekly, with 2-hour active sessions processing multiple documents: runtime is $0.16/session, roughly $2-3/month. Token costs for content generation are more substantial — a 15-article batch with research could run $15-40 in tokens. Total: roughly $17-43/month at that cadence.

    A continuous monitoring agent checking systems and data sources throughout the business day: if the agent is actively running 4 hours/day, that’s $0.32/day, $9.60/month in runtime alone. Token costs for monitoring-style queries are typically low. Total: $15-25/month.

    An agent running 24/7 — continuously active — costs $0.08 × 24 = $1.92/day, or roughly $58/month in runtime. That number sounds significant until you compare it to what 24/7 human monitoring or processing would cost.
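The runtime arithmetic behind those models is simple enough to check in a few lines of Python. Token costs are excluded here because they vary by workload:

```python
SESSION_HOUR_RATE = 0.08  # USD per hour of *active* runtime; idle time is free

def monthly_runtime_cost(active_hours_per_day: float, days: int = 30) -> float:
    """Runtime-only cost: active hours per day x rate x days in the month."""
    return active_hours_per_day * SESSION_HOUR_RATE * days

print(monthly_runtime_cost(0.5))   # 30-min daily research agent -> $1.20
print(monthly_runtime_cost(4.0))   # 4 hrs/day monitoring agent  -> $9.60
print(monthly_runtime_cost(24.0))  # continuously active, 24/7   -> $57.60 (~$58)
```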

    The Comparison That Actually Matters

    The runtime cost is almost never the relevant comparison. The relevant comparison is: what does the agent replace, and what does that replacement cost?

    If an agent handles work that would otherwise require two hours of an employee’s time per day — research compilation, report drafting, data processing, monitoring and alerting — the calculation isn’t “$58/month runtime versus zero.” It’s “$58/month runtime plus token costs versus the fully-loaded cost of two hours of labor daily.”

    At a fully-loaded cost of $30/hour for an entry-level knowledge worker, two hours/day is $1,500/month. An agent handling the same work at $50-100/month in total AI costs is a 15-30x cost difference before accounting for the agent’s availability advantages (24/7, no PTO, instant scale).
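The same comparison as plain arithmetic, assuming roughly 25 working days per month:

```python
# $30/hr fully loaded, 2 hrs/day, ~25 working days/month (assumed)
labor_monthly = 30 * 2 * 25              # $1,500/month of human time
agent_low, agent_high = 50, 100          # total AI cost range from above

print(labor_monthly / agent_high)        # 15.0 -> ~15x at the high end of agent cost
print(labor_monthly / agent_low)         # 30.0 -> ~30x at the low end
```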

    The math inverts entirely for edge cases where agents are less efficient than humans — tasks requiring judgment, relationship context, or creative direction. Those aren’t good agent candidates regardless of cost.

    Where the Pricing Gets Complicated

    Token costs dominate runtime costs for most workloads. A two-hour agent session running intensive language tasks could easily generate $20-50 in token costs while only generating $0.16 in runtime charges. Teams optimizing AI agent costs should spend most of their attention on token efficiency — prompt engineering, context window management, model selection — rather than on the session-hour rate.

    For very high-volume, long-running workloads — continuous agents processing large document sets at scale — the economics may eventually favor building custom infrastructure over managed hosting. But that threshold is well above what most teams will encounter until they’re running AI agents as a core part of their production infrastructure at significant scale.

    The honest summary: $0.08/session-hour is not a meaningful cost for most workloads. It becomes material only when you’re running many parallel, long-duration sessions continuously. For the overwhelming majority of business use cases, token efficiency is the variable that matters, and the infrastructure cost is noise.

    How This Compares to Building Your Own

    The alternative to paying $0.08/session-hour is building and operating your own agent infrastructure. That means engineering time (months, initially), ongoing maintenance, cloud compute costs for your own execution environment, and the operational overhead of managing the system.

    For teams that haven’t built this yet, the managed pricing is almost certainly cheaper than the build cost for the first year — even accounting for the runtime premium. The crossover point where self-managed becomes cheaper depends on engineering cost assumptions and workload volume, but for most teams it’s well beyond where they’re operating today.

    Frequently Asked Questions

    Is idle time charged in Claude Managed Agents?

    No. Runtime billing only accrues when the session status is actively running. Time spent waiting for user input, tool confirmations, or between tasks does not count toward the $0.08/session-hour charge.

    What is the total cost of running a Claude Managed Agent for a typical business task?

    For moderate workloads — research agents, content pipelines, daily summary tasks — total costs typically range from $10-50/month combining runtime and token costs. Heavy, continuous agents could run $50-150/month depending on token volume.

    Are token costs or runtime costs more important to optimize for Claude Managed Agents?

    Token costs dominate for most workloads. A two-hour active session generates $0.16 in runtime charges but potentially $20-50 in token costs depending on workload intensity. Token efficiency is where most cost optimization effort should focus.

    At what point does building your own agent infrastructure become cheaper than Claude Managed Agents?

    The crossover depends on engineering cost assumptions and workload volume. For most teams, managed is cheaper than self-built through the first year. Very high-volume, continuously-running workloads at scale may eventually favor custom infrastructure.


    Related: Complete Pricing Reference — every variable in one place. Complete FAQ Hub — every question answered.

    What to do next

    Now that you have the cost, here’s how to choose an architecture and implement it

    You know the session-hour rate. The harder decision is whether Managed Agents is the right architecture vs. building on the raw API — or vs. OpenAI’s equivalent.

  • AI Agents Explained: What They Are, Who’s Using Them, and Why Your Business Will Need One

    AI Agents Explained: What They Are, Who’s Using Them, and Why Your Business Will Need One

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    AI Agents Explained: What They Are, Who’s Using Them, and Why Your Business Will Need One

    What Is an AI Agent? An AI agent is a software program powered by a large language model that can take actions — not just answer questions. It reads files, sends messages, runs code, browses the web, and completes multi-step tasks on its own, without a human directing every move.

    Most people’s mental model of AI is a chat interface. You type a question, you get an answer. That’s useful, but it’s also the least powerful version of what AI can do in a business context.

    The version that’s reshaping how companies operate isn’t a chatbot. It’s an agent — a system that can actually do things. And with Anthropic’s April 2026 launch of Claude Managed Agents, the barrier to deploying those systems for real business work dropped significantly.

    What Makes an Agent Different From a Chatbot

    A chatbot responds. An agent acts.

    When you ask a chatbot to summarize last quarter’s sales report, it tells you how to do it, or summarizes text you paste in. When you give the same task to an agent, it goes and gets the report, reads it, identifies the key numbers, formats a summary, and sends it to whoever asked — all without you supervising each step.

    The difference sounds subtle but has large practical implications. An agent can be assigned work the same way you’d assign work to a person. It can work on tasks in the background while you do other things. It can handle repetitive processes that would otherwise require sustained human attention.
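For readers who want to see what "acts" means mechanically, here is a minimal sketch of an agent loop using Anthropic's Python SDK and its tool-use pattern: the model requests a tool, the program executes it and returns the result, and the loop continues until the task is done. The fetch_report tool, the file path, and the model ID are illustrative assumptions, not part of any shipped product.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One hypothetical tool: fetch last quarter's sales report.
tools = [{
    "name": "fetch_report",
    "description": "Fetch last quarter's sales report as plain text.",
    "input_schema": {"type": "object", "properties": {}, "required": []},
}]

def fetch_report() -> str:
    return open("q1_sales_report.txt").read()  # stand-in for a real data source

messages = [{"role": "user", "content": "Summarize last quarter's sales report."}]
while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model ID illustrative
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        print(response.content[0].text)  # final summary: the agent is done
        break
    # The model asked to act: execute each tool call and return the results.
    messages.append({"role": "assistant", "content": response.content})
    results = [
        {"type": "tool_result", "tool_use_id": block.id, "content": fetch_report()}
        for block in response.content if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": results})
```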

    The examples from the Claude Managed Agents launch make this concrete:

    Asana built AI Teammates — agents that participate in project management workflows the same way a human team member would. They pick up tasks. They draft deliverables. They work within the project structure that already exists.

    Rakuten deployed agents across sales, marketing, HR, and finance that accept assignments through Slack and return completed work — spreadsheets, slide decks, reports — directly to the person who asked.

    Notion’s implementation lets knowledge workers generate presentations and build internal websites while engineers ship code, all with agents handling parallel tasks in the background.

    None of those are hypothetical. They’re production deployments that went live within a week of the platform becoming available.

    What Business Processes Are Actually Good Candidates for Agents

    Not every business task is suited for an AI agent. The best candidates share a few characteristics: they’re repetitive, they involve working with information across multiple sources, and they don’t require judgment calls that need human accountability.

    Strong candidates include research and summarization tasks that currently require someone to pull data from multiple places and compile it. Drafting and formatting work — proposals, reports, presentations — that follows a consistent structure. Monitoring tasks that require checking systems or data sources on a schedule and flagging anomalies. Customer-facing support workflows for common, well-defined questions. Data processing pipelines that transform information from one format to another on a recurring basis.

    Weak candidates include tasks that require relationship context, ethical judgment, or creative direction that isn’t already well-defined. Agents execute well-specified work; they don’t substitute for strategic thinking.

    Why the Timing of This Launch Matters for Small and Mid-Size Businesses

    Until recently, deploying a production AI agent required either a technical team capable of building significant custom infrastructure, or an enterprise software contract with a vendor that had built it for you. That meant AI agents were effectively inaccessible to businesses without large technology budgets or dedicated engineering resources.

    Anthropic’s managed platform changes that equation. The infrastructure layer — the part that required months of engineering work — is now provided. A small business or a non-technical operations team can define what they need an agent to do and deploy it without building a custom backend.

    The pricing reflects this broader accessibility: $0.08 per session-hour of active runtime, plus standard token costs. For agents handling moderate workloads — a few hours of active operation per day — the runtime cost is a small fraction of what equivalent human time would cost for the same work.

    What to Actually Do With This Information

    The most useful framing for any business owner or operations leader isn’t “what is an AI agent?” It’s “what work am I currently paying humans to do that is well-specified enough for an agent to handle?”

    Start with processes that meet these criteria: they happen on a regular schedule, they involve pulling information from defined sources, they produce a consistent output format, and they don’t require judgment calls that have significant consequences if wrong. Those are your first agent candidates.

    The companies that will have a structural advantage in two to three years aren’t the ones that understood AI earliest. They’re the ones that systematically identified which parts of their operations could be handled by agents — and deployed them while competitors were still treating AI as a productivity experiment.

    Frequently Asked Questions

    What is an AI agent in simple terms?

    An AI agent is a program that can take actions — not just answer questions. It can read files, send messages, browse the web, and complete multi-step tasks on its own, working in the background the same way you’d assign work to an employee.

    What’s the difference between an AI chatbot and an AI agent?

    A chatbot responds to questions. An agent executes tasks. A chatbot tells you how to summarize a report; an agent retrieves the report, summarizes it, and sends it to whoever needs it — without you directing each step.

    What kinds of business tasks are best suited for AI agents?

    Repetitive, well-defined tasks that involve pulling information from multiple sources and producing consistent outputs: research summaries, report drafting, data processing, support workflows, and monitoring tasks are strong candidates. Tasks requiring significant judgment, relationship context, or creative direction are weaker candidates.

    How much does it cost to deploy an AI agent for a small business?

    Using Claude Managed Agents, costs are standard Anthropic API token rates plus $0.08 per session-hour of active runtime. An agent running a few hours per day for routine tasks might cost a few dollars per month in runtime — a fraction of the equivalent human labor cost.


    Related: Complete Pricing Reference — every variable in one place. Complete FAQ Hub — every question answered.

  • Claude Managed Agents vs. Rolling Your Own: The Real Infrastructure Build Cost

    Claude Managed Agents vs. Rolling Your Own: The Real Infrastructure Build Cost

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    Claude Managed Agents vs. Rolling Your Own: The Real Infrastructure Build Cost

    The Build-vs-Buy Question: Claude Managed Agents offers hosted AI agent infrastructure at $0.08/session-hour plus token costs. Rolling your own means engineering sandboxed execution, state management, checkpointing, credential handling, and error recovery yourself — typically months of work before a single production agent runs.

    Every developer team that wants to ship a production AI agent faces the same decision point: build your own infrastructure or use a managed platform. Anthropic’s April 2026 launch of Claude Managed Agents made that decision significantly harder to default your way through.

    This isn’t a “managed is always better” argument. There are legitimate reasons to build your own. But the build cost needs to be reckoned with honestly — and most teams underestimate it substantially.

    What You Actually Have to Build From Scratch

    The minimum viable production agent infrastructure requires solving several distinct problems, none of which are trivial.

    Sandboxed execution: Your agent needs to run code in an isolated environment that can’t access systems it isn’t supposed to touch. Building this correctly — with proper isolation, resource limits, and cleanup — is a non-trivial systems engineering problem. Cloud providers offer primitives (Cloud Run, Lambda, ECS), but wiring them into an agent execution model takes real work.
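As a toy illustration of how much "sandboxing" hides, here is the most naive version on POSIX: a subprocess with CPU and memory limits plus a timeout. Everything a real sandbox adds (namespaces, filesystem and network policy, cleanup) is exactly the work this paragraph points at; the script name is a placeholder.

```python
import resource
import subprocess

def limit_resources():
    # Applied in the child process before exec (POSIX only).
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)   # 512 MB of memory

# Run agent-generated code in a separate, limited process. This provides
# no filesystem or network isolation, which a real sandbox would need.
result = subprocess.run(
    ["python", "-I", "agent_task.py"],  # -I: isolated mode, no user site-packages
    preexec_fn=limit_resources,
    capture_output=True,
    text=True,
    timeout=30,  # wall-clock cap, independent of the CPU limit
)
print(result.returncode, result.stdout[:200])
```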

    Session state and context management: An agent working on a multi-step task needs to maintain context across tool calls, handle context window limits gracefully, and not drop state when something goes wrong. Building reliable state management that works at production scale typically takes several engineering iterations to get right.

    Checkpointing: If your agent crashes at step 11 of a 15-step job, what happens? Without checkpointing, the answer is “start over.” Building checkpointing means serializing agent state at meaningful intervals, storing it durably, and writing recovery logic that knows how to resume cleanly. This is one of the harder infrastructure problems in agent systems, and most teams don’t build it until they’ve lost work in production.
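The core pattern is small even though getting it production-grade is not. A minimal sketch, assuming each step is a resumable function and a local file stands in for durable storage:

```python
import json
import pathlib

CHECKPOINT = pathlib.Path("job_1234.checkpoint.json")  # stand-in for durable storage

def run_job(steps):
    # Resume from the last completed step if a checkpoint exists.
    if CHECKPOINT.exists():
        state = json.loads(CHECKPOINT.read_text())
    else:
        state = {"next_step": 0, "outputs": []}
    for i in range(state["next_step"], len(steps)):
        state["outputs"].append(steps[i](state))   # any step may crash mid-job
        state["next_step"] = i + 1
        CHECKPOINT.write_text(json.dumps(state))   # persist after every step
    CHECKPOINT.unlink()                            # clean up on success
    return state["outputs"]
```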

    Credential management: Your agent will need to authenticate with external services — APIs, databases, internal tools. Managing those credentials securely, rotating them, and scoping them properly to each agent’s permissions surface is an ongoing operational concern, not a one-time setup.

    Tool orchestration: When Claude calls a tool, something has to handle the routing, execute the tool, handle errors, and return results in the right format. This orchestration layer seems simple until you’re debugging why tool call 7 of 12 is failing silently on certain inputs.

    Observability: In production, you need to know what your agents are doing, why they’re doing it, and when they fail. Building logging, tracing, and alerting for an agent system from scratch is a non-trivial DevOps investment.

    Anthropic’s stated estimate is that shipping production agent infrastructure takes months. That tracks with what we’ve seen in practice. It’s not months of full-time work for a large team — but it’s months of the kind of careful, iterative infrastructure engineering that blocks product work while it’s happening.

    What Claude Managed Agents Provides

    Claude Managed Agents handles all of the above at the platform level. Developers define the agent’s task, tools, and guardrails. The platform handles sandboxed execution, state management, checkpointing, credential scoping, tool orchestration, and error recovery.

    The official API documentation lives at platform.claude.com/docs/en/managed-agents/overview. Agents can be deployed via the Claude console, Claude Code CLI, or the new agents CLI. The platform supports file reading, command execution, web browsing, and code execution as built-in tool capabilities.

    Anthropic describes the speed advantage as 10x — from months to weeks. Based on the infrastructure checklist above, that’s believable for teams starting from zero.

    The Honest Case for Rolling Your Own

    There are real reasons to build your own agent infrastructure, and they shouldn’t be dismissed.

    Deep customization: If your agent architecture has requirements that don’t fit the Managed Agents execution model — unusual tool types, proprietary orchestration patterns, specific latency constraints — you may need to own the infrastructure to get the behavior you need.

    Cost at scale: The $0.08/session-hour pricing is reasonable for moderate workloads. At very high scale — thousands of concurrent sessions running for hours — the runtime cost becomes a significant line item. Teams with high-volume workloads may find that the infrastructure engineering investment pays back faster than they expect.

    Vendor dependency: Running your agents on Anthropic’s managed platform means your production infrastructure depends on Anthropic’s uptime, their pricing decisions, and their roadmap. Teams with strict availability requirements or long-term cost predictability needs have legitimate reasons to prefer owning the stack.

    Compliance and data residency: Some regulated industries require that agent execution happen within specific geographic regions or within infrastructure that the company directly controls. Managed cloud platforms may not satisfy those requirements.

    Existing investment: If your team has already built production agent infrastructure — as many teams have over the past two years — migrating to Managed Agents requires re-architecting working systems. The migration overhead is real, and “it works” is a strong argument for staying put.

    The Decision Framework

    The practical question isn’t “is managed better than custom?” It’s “what does my team’s specific situation call for?”

    Teams that haven’t shipped a production agent yet and don’t have unusual requirements should strongly consider starting with Managed Agents. The infrastructure problems it solves are real, the time savings are significant, and the $0.08/hour cost is unlikely to be the deciding factor at early scale.

    Teams with existing agent infrastructure, high-volume workloads, or specific compliance requirements should evaluate carefully rather than defaulting to migration. The right answer depends heavily on what “working” looks like for your specific system.

    Teams building on Claude Code specifically should note that Managed Agents integrates directly with the Claude Code CLI and supports custom subagent definitions — which means the tooling is designed to fit developer workflows rather than requiring a separate management interface.

    Frequently Asked Questions

    How long does it take to build production AI agent infrastructure from scratch?

    Anthropic estimates months for a full production-grade implementation covering sandboxed execution, checkpointing, state management, credential handling, and observability. The actual time depends heavily on team experience and specific requirements.

    What does Claude Managed Agents handle that developers would otherwise build themselves?

    Sandboxed code execution, persistent session state, checkpointing, scoped permissions, tool orchestration, context management, and error recovery — the full infrastructure layer underneath agent logic.

    At what scale does it make sense to build your own agent infrastructure vs. using Claude Managed Agents?

    There’s no universal threshold, but the $0.08/session-hour pricing becomes a significant cost factor at thousands of concurrent long-running sessions. Teams should model their expected workload volume before assuming managed is cheaper than custom at scale.

    Can Claude Managed Agents work with Claude Code?

    Yes. Managed Agents integrates with the Claude Code CLI and supports custom subagent definitions, making it compatible with developer-native workflows.


    Related: Complete Pricing Reference — every variable in one place. Complete FAQ Hub — every question answered.