Category: Agency Playbook

How we build, scale, and run a digital marketing agency. Behind the scenes, systems, processes.

  • Notion Client Portal Setup for Agencies: How We Build Ours

    Most agency client portals are either too complicated to maintain or too bare to be useful. A shared Google Drive folder isn’t a portal. A ClickUp guest view requires the client to learn ClickUp. A custom-built portal requires a developer. Notion sits in the middle — flexible enough to build something professional, simple enough that clients can actually use it without training.

    This is how we build Notion client portals for our own operation. Not a template walkthrough — a description of the actual architecture, what we include, what we leave out, and why.

    What is a Notion client portal? A Notion client portal is a shared Notion page or workspace section that gives a client controlled visibility into their project — deliverables, timelines, assets, and communication — without exposing the rest of your internal operation. It functions as a lightweight client-facing dashboard built inside your existing Notion workspace.

    What a Notion Client Portal Actually Needs to Do

    Before building anything, it helps to be clear about what the portal is for. In our operation, a client portal has three jobs:

    Reduce inbound questions. If a client can see where their project stands without emailing, they will. A well-structured portal cuts “what’s the status?” messages significantly.

    Create a delivery record. Every deliverable — article, report, strategy doc — has a logged home. When a client asks what was delivered in March, the answer is one click away.

    Protect internal operations. The portal is a window, not a door. Clients see what’s relevant to them. They don’t see your internal task database, your pricing notes, your other clients, or your operational SOPs.

    The Core Portal Structure

    Every client portal we build follows the same structural template, customized by scope. The core components are:

    Project Status Dashboard

    A simple table or board view showing the current state of all active deliverables. Columns: deliverable name, status (In Progress / Review / Delivered), due date, and a link to the asset. Clients can see at a glance what’s moving and what’s done without needing to ask.

    This view is a filtered view of our internal Content Pipeline database — the client sees only their rows, not the full database. We use Notion’s filter-by-property feature to scope the view to their entity tag. They get a live view of their work without any access to the broader pipeline.
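    The same entity-tag scoping can be reproduced programmatically through the Notion API, which is useful for auditing exactly what a portal view would expose. A minimal sketch, assuming a "Client" select property; the token, database ID, and property name are placeholders, not our actual schema:

    ```python
    # Sketch: query the internal Content Pipeline database scoped to one
    # client's entity tag, mirroring the filtered portal view.
    # Token, database ID, and property name are hypothetical placeholders.
    import requests

    NOTION_TOKEN = "secret_xxx"          # hypothetical integration token
    PIPELINE_DB_ID = "your-database-id"  # hypothetical database ID

    def client_rows(client_tag: str) -> list:
        """Return only the pipeline rows tagged for one client."""
        resp = requests.post(
            f"https://api.notion.com/v1/databases/{PIPELINE_DB_ID}/query",
            headers={
                "Authorization": f"Bearer {NOTION_TOKEN}",
                "Notion-Version": "2022-06-28",
            },
            json={"filter": {"property": "Client", "select": {"equals": client_tag}}},
        )
        resp.raise_for_status()
        return resp.json()["results"]
    ```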

    Deliverables Library

    A running archive of everything completed and delivered. Articles, audits, reports, strategy documents — each as a linked page or embedded file. Organized by month. This solves the “can you resend that?” problem permanently and gives clients a sense of the body of work accumulating over a retainer.

    Communication Log

    A simple chronological page where significant decisions, feedback rounds, and strategic pivots get logged. Not a chat — a record. When a client says “I thought we decided X,” the communication log is the answer. This protects both parties and reduces scope creep from memory drift.

    Reference Documents

    Brand guidelines, target keyword lists, approved personas, style notes — anything the client has provided or that governs the work. Stored here so the answer to “do we have their brand guide?” is always yes.

    Next Steps

    A short, always-current list of what happens next. Three to five items max. What we’re working on, what we need from them, and when they can expect the next delivery. Clients check this more than anything else in the portal.

    How Access and Permissions Work

    Notion’s sharing model for client portals works at the page level, not the database level. This is the key architectural decision that determines how isolated the portal actually is.

    The correct approach: build the client portal as a standalone page that is not a child of your main Command Center. Share that page with the client via email invite at the “Can view” or “Can comment” level. The portal contains only filtered views and manually duplicated content — never direct database access.

    What to avoid: sharing a database directly with a client, even with filters applied. Notion’s permissions model allows determined users to remove filters from shared database views, exposing rows you didn’t intend to share. Always use a standalone page with embedded filtered views, not a raw database share.

    The Air-Gap Principle

    We call our approach to client portals “air-gapped” — the portal is architecturally separated from the internal operation even though it draws from the same underlying data.

    In practice, this means the portal page never has a back-link to the Command Center. The filtered views are set up so the client can see their data but cannot navigate to the parent database. Any document shared in the portal is either a shared Notion page with its own permissions or an exported file — never a raw internal page with full internal linking.

    The air gap matters because Notion’s page graph is navigable. If you share a page that contains a link to an internal page the client shouldn’t see, they can follow that link if it’s not properly permissioned. Build the portal as if it’s a separate product, even if it isn’t.

    What Not to Put in a Client Portal

    Equally important as what to include: what to leave out.

    Internal task notes. Your notes about why something is late, what went wrong, or what you think about the brief belong in your internal system, not in a client-visible page.

    Pricing and contract details. These live in your Revenue Pipeline and are shared via PDF or dedicated document — not embedded in an operational portal.

    Other clients’ work. Obvious, but worth stating explicitly given how easy it is to accidentally link across projects in a shared workspace.

    Unfinished deliverables. The portal is a delivery mechanism, not a work-in-progress view. Drafts go into the portal when they’re ready for client review, not before.

    Maintaining Portals at Scale

    The main friction with Notion client portals at scale is maintenance overhead. If you’re running ten or more active clients, keeping ten portals current manually is a real time cost.

    The solution is to minimize what requires manual updating. The Project Status Dashboard and Deliverables Library should pull from your internal pipeline database via filtered views — when you update the internal record, the portal updates automatically. The only things requiring manual attention are the Communication Log and Next Steps, which genuinely need a human decision about what to write.

    In our operation, portal maintenance takes roughly five minutes per client per week — the time it takes to update Next Steps and log any significant decisions from that week’s work. Everything else is live from the internal system.

    When Notion Portals Work Well and When They Don’t

    Notion client portals work well for content agencies, SEO operations, strategy consultants, and any service business where the deliverables are primarily documents. The portal model fits naturally when what you’re delivering is readable, linkable, and accumulates over time.

    They work less well for project-heavy engagements where the client needs to interact with tasks, leave comments on specific items, or participate in the workflow. For those cases, a purpose-built client portal tool — or a dedicated shared Notion workspace rather than a view-only portal — is a better fit. Notion can support collaborative client workspaces, but it requires a different architecture than the air-gapped portal model described here.

    Want this built for your agency?

    We set up Notion client portals and full Command Center architectures for agencies — configured for your operation, not a template to customize yourself.

    Tygart Media runs this system live across multiple active clients. We know what the build process looks like and what breaks without proper architecture.

    See what we build →

    Frequently Asked Questions

    Can clients edit content in a Notion client portal?

    Yes, if you give them “Can edit” or “Can comment” permissions. For most agency relationships, “Can comment” is the right level — clients can leave feedback directly on pages without being able to accidentally delete or restructure content. “Can view” works for portals that are purely informational delivery mechanisms.

    Is it safe to share a Notion database view with a client?

    With caution. Filtered database views can have their filters removed by users with edit access. For client-facing portals, use standalone pages with embedded filtered views set to view-only, rather than sharing the database itself. This is the air-gap approach — the client sees the data but cannot access the underlying database structure.

    How do you handle multiple clients in one Notion workspace?

    Each client gets their own portal page, shared individually. Internally, all client data lives in shared databases partitioned by an entity or client tag. Filtered views in each portal show only that client’s records. Clients never see each other’s portals or data because each portal is a separately permissioned page.

    What’s the difference between a Notion client portal and a shared Notion workspace?

    A client portal is a view-only or comment-only window into your operation — the client sees deliverables and status but doesn’t work inside Notion alongside you. A shared workspace is a collaborative environment where both agency and client actively use Notion together. Portals are simpler to maintain and better for most agency relationships. Shared workspaces make sense for longer-term, higher-touch engagements where the client is an active participant in the work.

    How long does it take to set up a Notion client portal?

    A well-structured portal takes two to four hours to build from scratch for the first client. Once you have a working template, duplicating and customizing it for additional clients takes thirty to sixty minutes. The time investment is in designing the architecture correctly the first time — portals built without a clear structure tend to get abandoned within a few months.

  • How I Run 27 Client Sites from One Notion Command Center

    I run 27 client WordPress sites from a single Notion workspace. No project management software, no agency platform, no dedicated CRM. Just Notion — architected deliberately across six interconnected databases — handling task triage, content pipelines, client relationships, revenue tracking, and the knowledge infrastructure that feeds an AI-native content operation.

    This is not a productivity tutorial. This is a description of a real system, built over two years, that runs across seven distinct business entities simultaneously. If you’re an agency owner, solo operator, or content business trying to figure out how to use Notion for something more serious than a to-do list, this is what the other end of that road looks like.

    What is a Notion Command Center? A Notion Command Center is a multi-database workspace architecture that functions as a single operating system for a business or portfolio of businesses. Rather than using Notion as a note-taking app, a Command Center connects tasks, clients, content, and knowledge into a unified system with defined workflows, priority rules, and daily operating rhythms.

    Why Notion Instead of Dedicated Agency Software

    The honest answer: I tried the alternatives. ClickUp has more native project management features. Asana handles task dependencies better out of the box. Monday.com is more polished for client-facing views.

    None of them let me build exactly the system my operation requires. And at the scale I’m running — 27 client sites, seven business entities, a live AI publishing pipeline — the ability to customize the architecture matters more than any individual feature.

    Notion also has a meaningful advantage that most people underestimate: it integrates with Claude natively. My entire operation runs on Claude as the AI layer, and a Notion workspace structured correctly becomes something Claude can read, reason about, and act on. That combination — Notion as the OS, Claude as the intelligence — is what makes this a genuinely AI-native operation rather than just an AI-assisted one.

    The 6-Database Architecture

    The Command Center runs on six core databases. Everything else in the workspace is either a view of these databases, a child page underneath them, or a standalone reference document. The six databases are:

    1. Master Actions

    Every task across all seven entities lives here. Priority levels run P1 (revenue or reputation at risk today) through P4 (delegate or kill). Each task carries an Entity tag, a Status, a Due Date, and a linked record in whichever other database it belongs to — a client, a content piece, a deal.

    The daily operating rule: never more than five tasks marked “Next Up” across the entire workspace at once. If your Next Up list has eight items, something is mislabeled. P1 means the thing doesn’t get done and real consequences follow today.
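    As an illustration of that discipline, the Next Up ceiling can be checked mechanically. A toy sketch over simplified task records; field names and values are assumptions, not the actual Master Actions schema:

    ```python
    # Toy triage check for the Master Actions rules described above.
    # Tasks are simplified dicts; field names are illustrative assumptions.
    tasks = [
        {"name": "Fix client checkout page", "priority": "P1", "status": "Next Up"},
        {"name": "Draft March content brief", "priority": "P2", "status": "Next Up"},
        {"name": "Update SOP screenshots", "priority": "P4", "status": "Backlog"},
    ]

    next_up = [t for t in tasks if t["status"] == "Next Up"]
    if len(next_up) > 5:
        print("Next Up is overloaded: something is mislabeled.")

    # P1 outranks P2, so a string sort on the priority label orders the day.
    for task in sorted(next_up, key=lambda t: t["priority"]):
        print(task["priority"], task["name"])
    ```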

    2. Content Pipeline

    Every article across all 27 client sites flows through this database before it hits WordPress. Status stages run from Brief → Draft → Optimized → Scheduled → Published. The database links to the client entity, carries the target keyword, the target site URL, word count, and a publication date.

    Nothing publishes without a Notion record. This is a hard rule established after the alternative — articles written in sessions and pushed directly — created audit gaps that took hours to resolve. Notion first, WordPress second.

    3. Revenue Pipeline

    Client deals, proposals, and retainer renewals. Stage-based (Lead → Qualified → Proposal Sent → Active → Renewal). Links to the Master CRM for contact records. The weekly review checks whether any deal has sat in the same stage for more than seven days without activity — that’s a warning sign that gets flagged.

    4. Master CRM

    Every contact across all seven entities. Clients, prospects, golf league members, partners, vendors. Tagged by entity, relationship type, and last contact date. The weekly review catches anyone who should have heard from me and didn’t.

    5. Knowledge Lab

    SOPs, architecture decisions, session logs, and reference documents. This is where the institutional knowledge lives — the things that would take hours to reconstruct if I had to start from scratch. The Knowledge Lab uses a metadata standard (I call it claude_delta) that makes every page machine-readable, so Claude can fetch and reason about the content in a live session without losing context.

    6. William’s HQ

    The daily dashboard. A filtered view of P1 and P2 tasks due today or overdue, the content queue for the next 48 hours, and the inbox triage. This is the page that opens first every morning. Everything else in the system is accessed from here.

    The Seven Entity Structure

    The system manages seven distinct business entities, each with its own Focus Room — a sub-page containing that entity’s active projects, open tasks filtered by entity tag, and key reference documents. The entities are:

    • The parent agency — managing all client sites and retainer relationships
    • Personal brand — direct services, thought leadership, and new business
    • Client A — content operation for a contractor in a regional market
    • Client B — content operation for a service business in a metro market
    • Industry network — B2B community and event operation
    • Content property — topical authority site in a specific vertical
    • Personal — finances, health commitments, personal projects

    The entity structure means a task logged under a regional client’s content operation never bleeds into the parent agency’s content queue. The databases are shared, but the entity tag acts as a partition. This matters operationally when you’re switching contexts fifteen times a day — the system tells you where you are and what belongs there.

    The Daily Operating Rhythm

    The Command Center only works if you use it on a rhythm. Mine runs on three loops:

    Morning Triage (10–15 minutes)

    Open William’s HQ. Zero the inbox — every untagged item gets a priority, a status, and an entity. Read the P1 and P2 list. Mentally commit to the top three. Check the content queue for anything publishing in the next 48 hours that isn’t scheduled. That’s a P1 fix before anything else happens.

    End-of-Day Close (5 minutes)

    Mark done tasks complete. Push forward anything untouched but still intended: update the due date or reprioritize it down. Check the content queue for tomorrow’s publications. If anything new was created during the day — a contact, a content piece, a deal — verify it’s logged in the right database with the right entity tag.

    Weekly Review (30 minutes, Sunday evening)

    Revenue: any deal stuck in the same stage as last week? Content: next week’s queue fully populated? Tasks: archive all Done tasks older than 14 days. Relationships: anyone who should have heard from me and didn’t? System health: any automation that failed silently?

    The weekly review is the repair mechanism. It catches the things the daily rhythm misses and resets the system before the next week compounds the drift.

    How Claude Plugs Into This

    The Knowledge Lab’s claude_delta metadata standard is what makes the Notion–Claude integration functional rather than theoretical. Every page in the Knowledge Lab carries a JSON metadata block at the top that tells Claude the page type, status, summary, key entities, and a resume instruction for picking up work in progress.
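    What such a block can look like, reconstructed from the description above rather than the actual claude_delta spec; all field names are illustrative:

    ```python
    import json

    # Illustrative claude_delta-style metadata for a Knowledge Lab page.
    # Field names are assumptions based on the description above (page type,
    # status, summary, key entities, resume instruction), not the real spec.
    page_metadata = {
        "page_type": "session_log",
        "status": "in_progress",
        "summary": "Content pipeline migration plan for one client site.",
        "key_entities": ["Content Pipeline", "WordPress", "Client entity tag"],
        "resume": "Continue from step 3: map category taxonomy to the new site.",
    }

    # Rendered as the JSON block that sits at the top of the Notion page.
    print(json.dumps(page_metadata, indent=2))
    ```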

    In practice, this means I can start a session by telling Claude to read a specific Knowledge Lab page, and Claude has enough structured context to continue from exactly where the last session ended — without me re-explaining the project, the client, the constraints, or the decisions already made. The Notion workspace functions as persistent memory across Claude sessions.

    This is the part of the architecture that most people haven’t built yet. Notion as a note-taking app is one thing. Notion as a structured knowledge layer that an AI can navigate and act on is a meaningfully different proposition — and it’s the direction serious operators are moving.

    What This Architecture Costs to Build

    Honest answer: the architecture itself took about three months of active iteration to stabilize. The first version had too many databases, unclear relationships between them, and no real operating rhythm to enforce the discipline. The current version is the result of tearing down and rebuilding twice.

    The tooling cost is low. Notion’s Plus plan at $10/month per member handles everything described here. The BigQuery knowledge ledger that backs the AI memory layer runs on Google Cloud at effectively zero cost at this scale. Claude API usage for content operations runs roughly $50–150/month depending on session volume.

    What actually costs something is the setup time and the learning curve of building databases that relate to each other correctly. Most Notion setups fail not because the tool is limited but because the architecture wasn’t designed before the databases were created.

    Whether This Is Right for Your Agency

    The Command Center architecture works well for solo operators and small agencies managing multiple clients or business lines simultaneously. It works especially well when you’re running an AI-native content operation and need Notion to function as more than task management.

    It’s not the right choice if you need strong native time-tracking, Gantt charts, or client-facing portals that look polished without customization. Those cases have better-suited tools.

    But if you’re running a content agency, a multi-client SEO operation, or any business where the work is primarily knowledge work — briefs, articles, strategies, SOPs, client communications — and you want one system that sees all of it, the 6-database Command Center architecture is worth the build time.

    Want this built for your operation?

    We set up Notion Command Centers for agencies and operators — the full architecture, configured and documented, not a template to figure out yourself.

    Tygart Media has built and runs this system live across 27 client sites. We know what the setup process actually looks like.

    See what we build →

    Frequently Asked Questions

    How many databases does a Notion Command Center need?

    A functional Command Center for an agency or multi-client operation typically needs six core databases: a task database, a content pipeline, a revenue pipeline, a CRM, a knowledge base, and a daily dashboard. More than eight databases usually indicates an architecture problem — complexity that should be handled with views and filters, not additional databases.

    Can Notion handle 27 client sites without getting slow?

    Yes, with proper architecture. The key is using filtered views rather than separate databases for each client, and keeping database page counts manageable by archiving completed records regularly. Notion’s performance degrades when a single database exceeds a few thousand active records — archive aggressively and it stays fast.

    How does Notion integrate with Claude AI?

    Notion and Claude integrate through structured page formatting and the Notion API. By standardizing metadata at the top of key pages — page type, status, summary, key entities — Claude can fetch and interpret Notion content in a live session. More advanced setups use the Notion API to read and write records programmatically during Claude sessions, effectively making Notion the persistent memory layer for AI operations.

    What’s the difference between a Notion Command Center and a regular Notion workspace?

    A regular Notion workspace is typically organized around document types — pages, notes, tasks — without enforced relationships between them. A Command Center is organized around business operations — entities, pipelines, and workflows — with databases that relate to each other and a defined operating rhythm that governs how the system gets used each day.

    How long does it take to set up a Notion Command Center?

    Building the architecture from scratch takes 20–40 hours of focused setup time, including database design, relationship configuration, view creation, and SOP documentation. Most operators who attempt it solo take 2–3 months of iteration before the system stabilizes. Working from an existing architecture and having it configured for your specific operation compresses that significantly.

    Is Notion good for content agencies specifically?

    Notion is well-suited for content agencies because the core work — briefs, drafts, SOPs, client communication, publishing schedules — is document-centric. The Content Pipeline database, linked to a CRM and task system, gives visibility into every piece of content across every client at once, which is difficult to replicate in project management tools not built for document-heavy workflows.

  • The Distillery: Hand-Crafted Batches of Distilled Knowledge, Available as API Feeds

    Most content on the internet is noise. It exists to rank, to fill space, to signal presence. It is not dense enough to be useful to the people who actually need to know the thing it claims to cover. And it is certainly not dense enough to be valuable as a feed that an AI system pulls from to answer real questions.

    The Distillery is different. It is a named section of Tygart Media where we produce small batches of genuinely high-density knowledge on specific topics — researched from real search demand data, written to a standard where every sentence earns its place, and published in structured form that both humans and AI systems can use.

    Each batch is available as a category API feed. Subscribers get authenticated access to the full batch as structured JSON — updated as new knowledge is added, versioned so auditors and AI systems can cite the exact vintage they’re drawing from.

    What a Batch Is

    A batch is a curated body of knowledge on a specific topic, built from three ingredients: real demand data (what people are actually searching for and what advertisers are paying to reach), primary research (direct engagement with the subject matter, not summarizing what others have written), and editorial discipline (the $5 filter: would someone pay $5 a month to pipe this feed into their AI? If not, it doesn’t ship).

    Each batch has a name, a number, and a version. Batch 001 is the Restoration Carbon Protocol — the only published Scope 3 emissions calculation standard for property restoration work. Batch 005 is the Restoration Industry Knowledge Base — a structured body of operational knowledge for restoration contractors who want to build AI-native systems without starting from scratch.

    Batches are not blog posts. They are not opinion columns. They are not rephrased Wikipedia entries. They are the kind of specific, accurate, hard-earned knowledge that takes real work to produce and that AI systems actively need but largely cannot find in their training data.

    How the API Works

    Every Distillery batch is accessible through the Tygart Content Network API. Subscribers receive an API key at signup. The key unlocks authenticated access to the batch endpoints they’ve subscribed to. Each endpoint returns structured JSON — articles by category, filterable by date and topic, with consistent metadata that AI agents can process directly.

    The response format is designed for machine consumption: clean plain text content, explicit categorization, publication timestamps for recency evaluation, and topic tags that allow agents to assess relevance before processing. The same feed that powers a human reader’s understanding of a topic powers an AI agent’s ability to answer questions about it accurately.
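    What consuming a feed can look like from the subscriber side. A hedged sketch: the base URL, endpoint path, query parameters, and response fields here are illustrative placeholders, not the documented API contract:

    ```python
    import requests

    API_KEY = "your-api-key"             # issued at signup (placeholder)
    BASE = "https://api.example.com/v1"  # placeholder, not the real host

    # Hypothetical endpoint shape: articles for one batch, filterable by
    # topic and date, returning consistent metadata per article.
    resp = requests.get(
        f"{BASE}/batches/001/articles",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"topic": "scope-3", "published_after": "2025-01-01"},
    )
    resp.raise_for_status()

    for article in resp.json()["articles"]:
        # Timestamps and topic tags let an agent assess recency and
        # relevance before processing the full plain-text content.
        print(article["published_at"], article["category"], article["title"])
    ```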

    Rate limits are generous at the $5 community tier — 100 requests per day, sufficient for an AI assistant pulling daily updates. Professional tiers at $50/month offer higher limits, webhook push when new content publishes, and bulk historical pulls for training and fine-tuning use cases.

    Why Information Density Is the Moat

    The content that survives in an AI-mediated information environment is the content that contains something worth extracting. Not something that sounds authoritative — something that actually is. The difference is information density: the ratio of useful, specific, actionable knowledge to total words published.

    Every Distillery batch is held to the same standard: if an AI system pulled from this feed to answer a question in this domain, would the answer be more accurate and more specific than if the AI had relied on its training data alone? If yes, the batch has value. If no, we haven’t done enough work yet.

    This standard is harder to meet than it sounds. It eliminates most of what gets published under the banner of “thought leadership” and “content marketing.” It requires knowing the subject well enough to say things that couldn’t be said by someone who spent an afternoon with a search engine. It is the reason The Distillery produces small batches rather than high volumes.

    Current Batches

    Batch 001 — Restoration Carbon Protocol (RCP)
    The only published Scope 3 ESG emissions calculation standard for property restoration work. Covers all five core restoration job types with actual emission factor tables, complete worked examples, and the 12-point data capture standard. Designed for restoration contractors serving commercial clients with 2027 SB 253 Scope 3 reporting obligations. 23 articles. Updated monthly.

    Batch 002 — The Knowledge Economy API Layer
    The conceptual and practical framework for turning human expertise into machine-consumable, API-distributable knowledge products. For anyone with domain expertise considering how to package and monetize it in an AI-native information environment. 8 articles. Updated as the landscape develops.

    Batch 003 — Mason County Minute
    Current, structured, consistently maintained coverage of Mason County, Washington — local government, business, community, real estate, and public affairs. The only machine-readable hyperlocal intelligence feed for this geography. Updated weekly.

    Batch 004 — Belfair Bugle
    Hyperlocal coverage of Belfair, WA and the North Mason community. Current events, local government, community intelligence. The only structured feed for this geography. Updated weekly.

    Batch 005 — Restoration Industry Knowledge Base (coming)
    Operational knowledge infrastructure for restoration contractors — the 50 knowledge nodes every restoration company should have documented, the AI-native knowledge architecture that replaces manual training, and the integration patterns connecting job management systems to knowledge delivery. In development.

    Batch 006 — AI Agency Playbook (coming)
    The operating methodology behind Tygart Media — how a single operator runs 27+ client sites, deploys AI-native content at scale, and builds knowledge infrastructure rather than content volume. For agency owners and solo operators building AI-native practices. In development.

    Who This Is For

    The Distillery API is for three kinds of subscribers:

    Developers building AI tools who need reliable, current, domain-specific knowledge feeds to ground their applications in accurate information. The Restoration Carbon Protocol feed, for example, gives any AI assistant-building tool accurate restoration-specific ESG data without the developer having to research and curate it themselves.

    Businesses who want AI systems that actually know their industry. A restoration company whose AI assistant draws from the RCP feed knows more about Scope 3 emissions calculation for their job types than any general-purpose AI. A commercial property manager whose AI assistant pulls from the RCP feed can answer contractor ESG questions accurately instead of hallucinating plausible-sounding nonsense.

    Content teams and agencies who want structured, current, reliable source material for their own content production — not to copy, but to ensure accuracy and specificity in their coverage of these domains.

    The Standard We Hold Ourselves To

    Every article in every batch passes one test before it ships: would someone pay $5 a month to pipe this feed into their AI? Not to read it themselves — to have their AI draw from it continuously as a trusted source in this domain.

    If the answer is no — if the content is too generic, too thin, or too derivative to justify a subscription — it doesn’t ship. The batch waits until the knowledge is actually there.

    This makes The Distillery slow. It makes it small. And it makes it worth subscribing to.

  • RCP Proxy Estimation Guide: How to Calculate When Primary Data Is Missing

    The RCP requires 12 data points per job. In practice, some of those data points will be unavailable — particularly for historical jobs being calculated retrospectively, or for field situations where documentation wasn’t captured as completely as the standard requires. The proxy estimation methodology provides documented substitution methods that produce defensible, auditor-acceptable estimates when primary data is missing.

    Key principle: A documented estimate with a stated assumption is always preferable to a blank field in an RCP report. ESG auditors understand that emissions calculation involves uncertainty — what they require is transparency about where estimation was used and what the basis of that estimation was. Undocumented guesses are not acceptable. Documented proxies are.

    Data Quality Tiers

    The RCP uses three data quality tiers, consistent with GHG Protocol Scope 3 guidance:

    | Tier | Description | Audit Acceptability |
    | --- | --- | --- |
    | Tier 1 — Primary measured data | Actual measurements from job records: GPS mileage, disposal facility receipts with weights, materials purchase orders by job | Highest — preferred for all data points |
    | Tier 2 — Primary estimated data | Calculated from documented job parameters using RCP proxy methods: affected area × consumption rate, crew size × duration × unit rate | Acceptable — must document calculation method and basis |
    | Tier 3 — Spend-based / invoice-based proxy | Dollar amount × industry average emission factor — the fallback of last resort | Lowest — use only when no job-specific data is available; flag prominently in data quality notes |

    Proxy Methods by Data Point

    Data Point 1 — Vehicle Mileage (Transportation)

    Primary source: GPS fleet tracking data, dispatch records, driver logs.

    Proxy method: Use Google Maps or equivalent mapping tool to calculate round-trip distance from your facility (or prior job address for multi-stop days) to the job site. Multiply by the number of crew trips documented in time records or invoices. This is a Tier 2 estimate.
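    Worked as arithmetic, the Tier 2 method is a single multiplication chain. A sketch with illustrative job numbers; the 0.503 kg CO2e/mile figure is the light-duty gasoline work van factor from the RCP Emission Factor Reference Table:

    ```python
    # Tier 2 vehicle mileage proxy: mapped round-trip distance × documented
    # crew trips × per-mile factor. Job numbers below are illustrative.
    round_trip_miles = 44       # mapping tool, facility to job site and back
    crew_trips = 4              # from time records or invoices
    kg_per_mile = 0.503         # light-duty gasoline work van, Table 1

    transport_kg = round_trip_miles * crew_trips * kg_per_mile
    print(f"Category 4 transport: {transport_kg:.1f} kg CO2e")  # 88.5
    ```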

    Default proxy (Tier 3, last resort): Industry average mobilization distance for restoration contractors is 22 miles one-way (44 miles round trip). Apply this default only when no address or routing information is available. Note as Tier 3 estimate in data quality section.

    Data Point 2 — Waste Transport Mileage

    Primary source: Waste manifests and hauler receipts (these typically include origin and destination).

    Proxy method: Use the distance from the job site to the nearest licensed disposal facility of the appropriate type (standard C&D landfill, licensed ACM facility, medical waste facility). Use online waste facility directories (EPA RCRA Info for hazmat, state environmental agency databases for C&D landfills) to identify the nearest appropriate facility.

    Default proxies by facility type (Tier 3): Standard C&D landfill: 18 miles. Licensed ACM facility: 60 miles. Licensed PCB incineration: 150 miles. Medical waste facility: 55 miles.

    Data Point 3 — Equipment Power Source

    Primary source: Job documentation noting whether equipment ran on building power or contractor generator; generator fuel logs.

    Proxy method: Default assumption is building electrical supply unless your company policy or the job type (remote location, building power unavailable) indicates otherwise. Note the assumption explicitly. If generator use is suspected but not documented, use the following generator fuel proxy: standard drying equipment setup (3 dehumidifiers + 6 air movers) consuming approximately 2.5 gallons of diesel per 8-hour shift × number of drying days × 10.21 kg CO2e per gallon diesel.
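    The generator proxy reduces to one line of arithmetic; a sketch with an illustrative drying duration:

    ```python
    # Generator fuel proxy for undocumented generator use (Tier 2).
    gallons_per_shift = 2.5     # 3 dehumidifiers + 6 air movers, 8-hour shift
    drying_days = 4             # illustrative
    kg_per_gallon = 10.21       # diesel

    generator_kg = gallons_per_shift * drying_days * kg_per_gallon
    print(f"Generator fuel: {generator_kg:.1f} kg CO2e")  # 102.1
    ```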

    Data Points 4–5 — Chemical Treatments and PPE Consumption

    Application rate proxies by job type and surface type:

    | Job Type / Surface | Antimicrobial Rate | Tyvek Suits per Tech per Day | Glove Pairs per Tech per Day | N95/P100 per Tech per Day |
    | --- | --- | --- | --- | --- |
    | Cat 1 water — porous surfaces | 0.008 L/sq ft | 0.5 | 2 | 0.5 |
    | Cat 2 water — porous surfaces | 0.015 L/sq ft | 1.0 | 3 | 1.0 |
    | Cat 3 water — porous surfaces | 0.025 L/sq ft (×2 applications) | 2.0 | 5 | 2.0 |
    | Mold Condition 3 — first application | 0.020 L/sq ft | 2.0 | 4 | 1.5 |
    | Mold Condition 3 — second application | 0.015 L/sq ft | 2.0 | 4 | 1.5 |
    | Fire — smoke cleaning (chemical sponge + cleaner) | 1 sponge per 50 sq ft + 0.010 L/sq ft cleaner | 1.5 | 4 | 1.5 |
    | Hazmat abatement (Level C, standard exit protocol) | N/A (wetting agent: 0.003 L/sq ft ACM) | 3.0 (full replacement each exit) | 6 | 2 pairs OV/P100 |
    | Biohazard Level C | 0.025 L/sq ft × 2 applications | 3.0 (full replacement each exit) | 6 | 2 pairs OV/P100 |
    | Biohazard Level B (decomposition) | 0.025 L/sq ft × 2 applications | 3.0 Level B full-suit (replace each exit) | 6 | Supplied air — 0 disposable |

    Data Point 6 — Containment Materials

    Proxy method: Standard containment for a single affected room (standard ceiling height 8–10 ft): perimeter of affected area (linear feet) × ceiling height (feet) × 1.2 (overlap factor) = square feet of poly sheeting; multiply by 0.0929 to convert to m² before applying the per-m² emission factor. For compartmentalized commercial spaces, add 20 m² per additional doorway or penetration point.

    Zipper doors: 1 per entry/exit point, typically 2 per contained area (entry + equipment pass-through).
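    A sketch of the containment arithmetic for an illustrative 12 ft × 10 ft room with one doorway, assuming the 6-mil sheeting factor is per m² as implied above:

    ```python
    # Containment materials proxy for a 12 ft x 10 ft room, 8 ft ceiling,
    # one contained area with entry + pass-through zipper doors.
    perimeter_ft = 2 * (12 + 10)                  # 44 linear feet
    ceiling_ft = 8
    poly_sqft = perimeter_ft * ceiling_ft * 1.2   # overlap factor
    poly_m2 = poly_sqft * 0.0929                  # sq ft -> m2

    POLY_6MIL = 0.55        # kg CO2e per m2 (unit assumed, see Table 2)
    ZIPPER_DOOR = 1.8       # kg CO2e each, disposable

    containment_kg = poly_m2 * POLY_6MIL + 2 * ZIPPER_DOOR
    print(f"Containment materials: {containment_kg:.1f} kg CO2e")  # ~25.2
    ```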

    Data Points 7–8 — Waste Volume and Disposal

    Volume proxy: Use weight estimation proxies from the RCP Emission Factor Reference Table (drywall at 2.5 lbs/sq ft, carpet at 3.0 lbs/sq ft, etc.) applied to the demolished area documented in job scope records.

    Disposal method proxy: If disposal facility type is unknown, apply default based on material type: standard C&D for non-contaminated demolition debris, regulated C&D or hazmat for contaminated materials (see Table 3 in the Emission Factor Reference).

    Data Points 9–10 — Demolished and Installed Materials

    Proxy method: Calculate from demolition scope records (affected area by room, material type documented in scope of work or Xactimate/Symbility estimate). Weight estimation proxies apply as above. For installed materials in reconstruction phase, use square footage from scope-of-work documentation and apply standard weight proxies.
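    A sketch combining a weight proxy with the Category 12 landfill factor (drywall figures from the Emission Factor Reference Table; the affected area is illustrative):

    ```python
    # Demolished-material proxy: area × weight proxy -> short tons, then
    # tons × WARM landfill factor. The affected area is illustrative.
    drywall_sqft = 400
    LBS_PER_SQFT = 2.5          # 1/2" gypsum drywall, dry weight proxy
    TCO2E_PER_TON = 0.16        # gypsum drywall to landfill, Table 4

    tons = drywall_sqft * LBS_PER_SQFT / 2000
    print(f"Category 12 (drywall): {tons * TCO2E_PER_TON:.3f} tCO2e")  # 0.080
    ```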

    Documenting Proxy Use in Your RCP Report

    Every proxy estimate must be documented in the data quality section of the per-job carbon report. The format for documenting a proxy is: [Data point name]: [Tier 2 or 3 estimate]. [Brief description of proxy method]. [Source of proxy rate or assumption].

    Example: “Vehicle mileage: Tier 2 estimate. Round-trip distance calculated using Google Maps from company facility to job site address (44 miles RT × 4 crew trips). Crew trip count from job invoices. Source: RCP proxy method P-4-1.”

    Example: “PPE consumption: Tier 2 estimate. Cat 3 water damage standard consumption rate applied (2.0 Tyvek/tech/day, 5 glove pairs/tech/day) per RCP Table A-5. Actual PPE not tracked separately on this job.”

    Can a per-job carbon report with all Tier 2 estimates be used in GRESB reporting?

    Yes. GRESB accepts primary data at various quality levels, including documented estimates. A Tier 2 estimate is primary data (not spend-based estimation) and is acceptable. The data quality notation in the RCP report demonstrates that you have applied documented methodology rather than guessing, which is what auditors need to see.

    What is the margin of error typical for Tier 2 proxy estimates?

    Typical uncertainty range for Tier 2 RCP estimates is ±20–35% relative to primary measured data. This compares favorably to spend-based estimation (Tier 3), which typically has ±50–100% uncertainty for restoration work due to the high variability of job type, scope, and emission profile at equivalent invoice amounts.

    Should you disclose the uncertainty range in the per-job carbon report?

    The RCP does not require quantified uncertainty ranges in the per-job report, but noting that Tier 2 estimates were used in the data quality section effectively communicates to auditors that the figure carries inherent estimation uncertainty. For clients whose ESG consultants or auditors specifically request uncertainty ranges, use the guidance values above (±20–35% for Tier 2).


  • RCP Emission Factor Reference Table: All Values in One Place

    This reference table consolidates all emission factors used in Restoration Carbon Protocol calculations. It is the lookup document you use when completing a per-job carbon report — every factor needed for Categories 1, 4, 5, and 12 across all five job types is in this table, with source citations for audit purposes.

    Version: RCP v1.0 | Factor vintage: EPA 2024, DEFRA 2024, EPA WARM v16 | Units: All values in kg CO2e unless noted as tCO2e

    Table 1: Category 4 — Vehicle Transportation

    | Vehicle Type | Fuel | kg CO2e per mile | Source |
    | --- | --- | --- | --- |
    | Passenger car | Gasoline | 0.355 | EPA Table 2, Mobile Combustion 2024 |
    | Light-duty truck / work van (under 8,500 lbs GVWR) | Gasoline | 0.503 | EPA Table 2, Mobile Combustion 2024 |
    | Light-duty truck / cargo van | Diesel | 0.523 | EPA Table 2, Mobile Combustion 2024 |
    | Medium-duty truck / equipment trailer (8,500–26,000 lbs GVWR) | Diesel | 1.084 | EPA Table 2, Mobile Combustion 2024 |
    | Heavy-duty truck — unloaded (26,000+ lbs GVWR) | Diesel | 1.612 | EPA Table 2, Mobile Combustion 2024 |
    | Heavy-duty truck — loaded (waste hauling, C&D) | Diesel | 2.25 | EPA Table 2 + load factor adjustment |
    | Licensed hazmat waste hauler (ACM, lead, general hazmat) | Diesel | 3.20 | EPA Table 2 + hazmat vehicle premium |
    | Licensed hazmat hauler (PCB, high-hazard specialty) | Diesel | 3.80 | EPA Table 2 + specialty vehicle premium |
    | Medical waste hauler (biohazard) | Diesel | 2.80 | EPA Table 2 + medical waste vehicle |
    | Pack-out truck (contents restoration) — loaded | Diesel | 2.25 | EPA Table 2 + load factor |
    | Pack-out truck — empty (return trip) | Diesel | 1.612 | EPA Table 2 — unloaded heavy |

    Table 2: Category 1 — Materials

    Chemical Treatments

    | Material | Unit | kg CO2e per unit | Source |
    | --- | --- | --- | --- |
    | Quaternary ammonium antimicrobial / biocide (liquid) | Liter | 2.8 | EPA EEIO — Chemical manufacturing sector |
    | Hydrogen peroxide-based antimicrobial/biocide | Liter | 1.9 | EPA EEIO — Chemical manufacturing sector |
    | Borax-based mold treatment | kg | 1.1 | EPA EEIO — Inorganic chemical manufacturing |
    | Hospital-grade disinfectant (EPA-registered) | Liter | 2.8 | EPA EEIO — Chemical manufacturing sector |
    | Enzyme biological digester / deodorizer | Liter | 1.6 | EPA EEIO — Specialty chemical manufacturing |
    | Encapsulant / smoke-blocking primer | Gallon | 4.2 | EPA EEIO — Paint and coatings manufacturing |
    | Thermal fogging agent | Liter | 2.1 | EPA EEIO — Chemical manufacturing sector |
    | Desiccant drying agent (silica gel) | kg | 1.4 | EPA EEIO — Chemical manufacturing sector |
    | Wetting agent / amended water (surfactant for ACM) | Liter | 1.4 | EPA EEIO — Chemical manufacturing sector |
    | Dry ice (CO2 pellets for blast cleaning) | kg | 0.85 | EPA EEIO — Industrial gas manufacturing |

    Personal Protective Equipment

    | PPE Item | Unit | kg CO2e per unit | Source |
    | --- | --- | --- | --- |
    | Disposable Tyvek suit (Level C) | Each | 1.2 | EPA EEIO — Apparel manufacturing |
    | Level B full encapsulating suit | Each | 3.0 | EPA EEIO — Apparel/specialty manufacturing |
    | Level C PPE full kit (Tyvek + gloves + goggles + boot covers) | Kit | 1.8 | Composite of individual items |
    | Level B PPE full kit (encapsulating suit + supplied air + gloves) | Kit | 4.2 | Composite of individual items |
    | Nitrile gloves (pair) | Pair | 0.3 | EPA EEIO — Rubber and plastics manufacturing |
    | N95 respirator (disposable) | Each | 0.4 | EPA EEIO — Medical equipment manufacturing |
    | Half-face respirator, P100 cartridges (pair) | Pair | 0.8 | EPA EEIO — Medical equipment manufacturing |
    | Full-face respirator cartridges (pair) | Pair | 1.2 | EPA EEIO — Medical equipment manufacturing |
    | Boot covers (pair) | Pair | 0.15 | EPA EEIO — Rubber and plastics |

    Containment and Filtration

    | Material | Unit | kg CO2e per unit | Source |
    | --- | --- | --- | --- |
    | 6-mil polyethylene sheeting | m² | 0.55 | EPA EEIO — Plastics product manufacturing |
    | 4-mil polyethylene sheeting | m² | 0.37 | EPA EEIO — Plastics product manufacturing |
    | Double-layer 6-mil containment (hazmat/biohazard) | m² | 1.10 | 2× single-layer factor |
    | Zipper door — disposable | Each | 1.8 | EPA EEIO — Plastics/hardware |
    | Zipper door — reusable (amortized over 20 uses) | Use | 0.09 | 1.8 ÷ 20 uses |
    | HEPA filter — air scrubber (standard) | Each | 3.2 | EPA EEIO — Industrial machinery manufacturing |
    | HEPA vacuum bag (commercial grade) | Each | 0.4 | EPA EEIO — Paper/plastics manufacturing |
    | Biohazard bag — 33-gallon red (medical waste) | Each | 0.65 | EPA EEIO — Medical plastics manufacturing |
    | ACM disposal bag — 6-mil labeled (33-gallon) | Each | 0.55 | EPA EEIO — Plastics product manufacturing |
    | Sharps disposal container (1-gallon) | Each | 0.35 | EPA EEIO — Plastics/medical equipment |
    | Glove bag (pipe insulation removal) | Each | 0.85 | EPA EEIO — Plastics product manufacturing |

    Table 3: Category 5 — Waste Disposal

    | Waste Type | Disposal Method | tCO2e per ton | Source |
    | --- | --- | --- | --- |
    | Standard C&D debris (non-hazardous mixed) | Landfill | 0.16 | EPA WARM v16 |
    | Cat 2 water-contaminated porous materials | Standard landfill | 0.18 | EPA WARM + contamination premium |
    | Cat 3 sewage-contaminated materials | Regulated C&D landfill | 0.22 | EPA WARM + regulated disposal |
    | Smoke-contaminated C&D debris (standard) | Standard landfill | 0.16 | EPA WARM v16 |
    | Smoke-contaminated C&D (regulated facility) | Licensed C&D landfill | 0.20 | EPA WARM + transport premium |
    | Mold-contaminated porous materials | Standard landfill (most jurisdictions) | 0.18 | EPA WARM + contamination premium |
    | Friable ACM (pipe insulation, spray fireproofing) | Licensed hazmat landfill | 0.42 | EPA WARM + licensed facility + transport |
    | Non-friable ACM (floor tiles, roofing, joint compound) | Licensed C&D with ACM cell | 0.28 | EPA WARM + regulated C&D transport |
    | Lead paint debris (TCLP-classified hazardous) | Licensed hazmat landfill | 0.38 | EPA WARM + hazmat transport |
    | PCB-containing materials ≥50 ppm | Licensed PCB incineration | 1.85 | EPA hazardous waste incineration factors |
    | PCB-containing materials <50 ppm | Licensed landfill | 0.22 | EPA WARM + transport premium |
    | Mercury-containing lamps/thermostats | Mercury recycler | 0.15 | EPA WARM — recycling credit offset |
    | Regulated medical/biohazard waste (standard) | Autoclave + licensed landfill | 0.55 | EPA medical waste treatment factors |
    | High-pathogen biohazard waste | High-temperature incineration | 0.85 | EPA hazardous waste incineration factors |
    | Sharps waste | Sharps autoclave or incineration | 0.65 | EPA medical waste — sharps category |
    | Contaminated water (Cat 3, to wastewater treatment) | Municipal wastewater treatment | 0.000272 per liter | EPA WARM v16 — wastewater treatment |
    | Disposable PPE — standard | Standard landfill | 0.25 | EPA WARM — mixed plastics |
    | Disposable PPE — hazmat-contaminated | Licensed hazmat or medical waste landfill | 0.30–0.55 | Apply appropriate hazmat or medical waste factor |

    Table 4: Category 12 — Demolished Building Materials

    | Material | tCO2e per ton (landfill) | tCO2e per ton (recycled) | Source |
    | --- | --- | --- | --- |
    | Gypsum drywall (1/2″) | 0.16 | 0.02 | EPA WARM v16 |
    | Dimensional lumber / wood framing | -0.07 | -0.15 | EPA WARM v16 — carbon storage credit |
    | OSB sheathing | -0.05 | -0.12 | EPA WARM v16 — carbon storage credit |
    | Carpet + pad (standard residential/commercial) | 0.33 | 0.05 | EPA WARM v16 |
    | Hardwood flooring | -0.12 | -0.18 | EPA WARM v16 — carbon storage credit |
    | Vinyl / LVP flooring | 0.28 | 0.08 | EPA WARM v16 — plastics category |
    | Ceramic / porcelain tile | 0.04 | 0.01 | EPA WARM v16 — inert material |
    | Fiberglass batt insulation | 0.33 | 0.05 | EPA WARM v16 |
    | Cellulose insulation (spray or loose-fill) | 0.06 | -0.02 | EPA WARM v16 |
    | Spray polyurethane foam insulation (SPF) | 0.72 | N/A | EPA WARM v16 — plastics category |
    | Acoustic ceiling tiles (standard) | 0.12 | 0.03 | EPA WARM v16 — ceiling tile category |
    | Structural steel (demolished) | -0.85 | -0.95 | EPA WARM v16 — steel recycling credit |
    | Copper pipe / wiring | -0.45 | -0.60 | EPA WARM v16 — copper recycling credit |
    | Aluminum (ductwork, framing) | -1.20 | -1.45 | EPA WARM v16 — aluminum recycling credit (high value) |

    Table 5: Weight Estimation Proxies

    When disposal receipts are not available, use these weight proxies to estimate demolished material tonnage:

    | Material | Weight per sq ft (installed, dry) | Notes |
    | --- | --- | --- |
    | 1/2″ gypsum drywall | 2.5 lbs | Use dry weight, not post-water-damage wet weight |
    | 5/8″ gypsum drywall (Type X) | 3.1 lbs | Common in commercial construction |
    | Carpet + pad (residential) | 3.0 lbs | Including pad and tack strips |
    | Carpet + pad (commercial, glue-down) | 2.2 lbs | Heavier carpet, no pad |
    | LVP / vinyl plank flooring | 2.8 lbs | Including underlayment |
    | Ceramic tile (floor, 3/8″) | 4.5 lbs | Including thin-set mortar |
    | Acoustic ceiling tiles (2′×2′ standard) | 1.8 lbs | Mineral fiber type |
    | Fiberglass batt insulation (3.5″ R-13) | 0.5 lbs | Per sq ft of coverage area |
    | Dimensional lumber 2×4 wall framing (per linear foot of wall) | 4.0 lbs | Assumes 16″ OC framing in 8-ft walls |
    | Non-friable ACM floor tile (9″×9″) | 4.0 lbs | Including mastic adhesive |

    How often will this reference table be updated?

    The RCP emission factor reference table will be updated annually following the release of updated EPA WARM, EPA Mobile Combustion, and DEFRA databases. Version numbers are included in the table header — always cite the version used in your per-job carbon report data quality notes.

    What if I need an emission factor for a material not in this table?

    First check EPA WARM v16 directly (available free at epa.gov/warm). Second, check the EPA EEIO database for the relevant industry sector. Third, check DEFRA’s Conversion Factors for Company Reporting. If none of these sources contain the specific material, use the closest proxy category and document the substitution in your data quality notes.

    Are these factors suitable for use in EU CSRD reporting?

    EPA and EPA WARM factors are US-specific but are accepted in most international ESG frameworks when accompanied by clear source citation. For EU CSRD reporting specifically, DEFRA factors (UK) or OECD emission factors may be preferred by auditors for non-US operations. The RCP will publish a DEFRA-specific factor table in a future supplement for EU-applicable reporting contexts.


    Table 6: Refrigerant GWP Values — IPCC AR6 Update

    The Global Warming Potential values for refrigerants used in restoration drying equipment have been updated under IPCC Sixth Assessment Report (AR6, 2021). AR6 GWP-100 values are 14–18% higher than AR5 for the HFCs commonly found in LGR dehumidifiers. RCP v1.0 uses AR6 values for refrigerant-related calculations. The EPA AIM Act continues to use AR4 values for regulatory compliance; UNFCCC/Paris reporting uses AR5. When delivering data to clients, disclose which GWP vintage was used.

    | Refrigerant | Common use in restoration | AR5 GWP-100 | AR6 GWP-100 | Change |
    | --- | --- | --- | --- | --- |
    | R-410A (HFC-32/125 blend) | Most current LGR dehumidifiers | ~1,924 | ~2,256 | +17.3% |
    | R-32 (HFC-32) | Dri-Eaz LGR 6000i; newer units | 677 | 771 | +13.9% |
    | R-454B (HFC-32/HFO-1234yf blend) | Next-gen low-GWP units | ~467 | ~530 | +13.5% |
    | HFC-134a (R-134a) | Older residential dehumidifiers | 1,300 | 1,530 | +17.7% |

    Source: IPCC AR6 WG1, Chapter 7, Table 7.SM.7 (2021). EPA Technology Transitions GWP Reference Table.


    Table 7: EPA eGRID 2023 — Subregional Emission Factors for Major Restoration Markets

    The national average grid factor (0.3497 kg CO₂e/kWh, eGRID 2023) used as the RCP default understates or overstates electricity emissions significantly depending on where equipment is operated. Using location-specific subregion factors improves data quality for clients in GRESB, SBTi, and CSRD reporting contexts.

    Use the subregion factor for the state/metro where the job was performed, not where the contractor’s facility is located.

    | eGRID Subregion | Primary coverage | kg CO₂e/kWh | vs. RCP default (0.3497) |
    | --- | --- | --- | --- |
    | NYUP | Upstate New York | 0.1101 | -68.5% |
    | CAMX | California / Western US | 0.1950 | -44.3% |
    | NEWE | New England | 0.2464 | -29.6% |
    | ERCT | Texas (ERCOT) | 0.3341 | -4.5% |
    | US Average | National default (RCP v1.0) | 0.3497 | Baseline |
    | FRCC | Florida | 0.3560 | +1.7% |
    | SRSO | Southeast (excluding FL) | 0.3837 | +9.7% |
    | NYCW | NYC and Westchester | 0.3927 | +12.2% |

    Source: EPA eGRID2023 Summary Tables Rev 2 (published March 2025). Full subregion table available at epa.gov/egrid. A California restoration contractor using the national average overstates electricity emissions by 44%; a Florida contractor understates by 1.7%. The difference is largest for multi-week jobs with sustained equipment energy consumption.
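    Applying a subregion factor is a lookup and a multiplication. A sketch with an illustrative equipment energy total:

    ```python
    # Location-specific electricity emissions using eGRID subregion factors.
    EGRID_KG_PER_KWH = {
        "NYUP": 0.1101, "CAMX": 0.1950, "NEWE": 0.2464, "ERCT": 0.3341,
        "US_AVG": 0.3497, "FRCC": 0.3560, "SRSO": 0.3837, "NYCW": 0.3927,
    }

    job_kwh = 1200  # illustrative multi-day drying equipment consumption

    for region in ("CAMX", "US_AVG", "FRCC"):
        print(f"{region}: {job_kwh * EGRID_KG_PER_KWH[region]:.0f} kg CO2e")
    # CAMX: 234, US_AVG: 420, FRCC: 427 -- same job, three different grids
    ```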


    Table 8: PPE and Consumables — LCA-Sourced Per-Unit Emission Factors

    The EPA EEIO proxies in Table 2 are sector-level estimates. The following values are sourced from published lifecycle assessments and Environmental Product Declarations for specific product types. Use these in place of the EEIO values where the product type matches.

    | Item | Unit | kg CO₂e | Source | vs. EEIO proxy |
    | --- | --- | --- | --- | --- |
    | Nitrile glove (3.5g, size M) | Each | 0.0277 | Top Glove LCA 2024, SATRA-verified | -82% vs. EEIO pair proxy |
    | Nitrile glove pair | Pair | 0.0554 | Top Glove LCA 2024 | -82% vs. current 0.3 EEIO |
    | N95 respirator (disposable) | Each | 0.05 | Springer Env. Chem. Letters 2022 | -88% vs. current 0.4 EEIO |
    | DuPont Tyvek 400 coverall (180g HDPE) | Each | 0.40–0.63 | Estimated: 180g × 2.2–3.5 kg CO₂e/kg HDPE | -47–65% vs. current 1.2 EEIO |
    | LVP/LVT flooring (Shaw EcoWorx) | — | 5.2 | Shaw Contract EcoWorx Resilient EPD 2023 | Consistent with WARM v16 plastics |
    | Ceramic tile (standard) | kg | 0.78 | ICE Database v3.0 (University of Bath) | More granular than WARM v16 inert |
    | Ready-mix concrete (30 MPa) | kg | 0.13 | ICE Database v3.0 | 132 kg CO₂e/m³ |
    | Polyethylene LDPE sheeting | kg | 1.793 | DEFRA 2024 (closed-loop recycling scenario) | Use as proxy for virgin LDPE sheeting |
    | H₂O₂ antimicrobial (active ingredient) | kg active | 1.33 | ACS Omega 2025 (anthraquinone process) | Lower than EEIO chemical proxy |

    Note on Tyvek: DuPont has not published an independent lifecycle assessment for standard Tyvek 400 coveralls. The value above is estimated from HDPE production emission factors. DuPont has commissioned an LCA for Tyvek 500 Xpert BioCircle (a recycled-content variant) claiming 58% reduction versus standard Tyvek, which implies a quantified baseline exists internally. The RCP will update this value if DuPont publishes the underlying LCA data.

    Note on nylon carpet (DEFRA 2024): The DEFRA 2024 value of 5.40 kg CO₂e/kg for nylon carpet should be verified against the actual DEFRA 2024 full spreadsheet to confirm whether this represents virgin nylon production or a closed-loop recycling scenario. DEFRA 2024 uses AR5 GWP values throughout.


    Factor Vintage and GWP Basis: Version Disclosure

    RCP v1.0 uses the following factor vintages:

    • Electricity: EPA eGRID 2023 (published March 2025)
    • Mobile combustion / vehicle fuels: EPA 2025 Emission Factors Hub
    • Waste disposal: EPA WARM v16
    • Refrigerant GWPs: IPCC AR6 (2021)
    • Materials (non-EEIO): ICE Database v3.0, EPD-sourced, DEFRA 2024
    • Materials (EEIO proxy): EPA USEEIO v2.0
    • GWP basis: AR6 GWP-100 for refrigerants; AR5 GWP-100 for all other gases (consistent with EPA GHG Inventory basis)

    When factors are updated in patch releases, the factor vintage table updates accordingly. All RCP Job Carbon Reports should reference the schema_version field (RCP-JCR-1.0) which implicitly references the factor table version used at calculation time. For year-over-year comparisons, use the same factor vintage across both years unless a major correction justifies restating prior-year figures.


  • Biohazard and Trauma Scene Cleanup: Scope 3 Emissions Mapping and Calculation Guide

    Biohazard and trauma scene cleanup is the fifth core restoration job type covered under the Restoration Carbon Protocol. Its Scope 3 emissions profile is distinct from the other four categories in one critical way: virtually all waste generated is classified as regulated medical or biohazardous waste, triggering disposal emission factors that are 3–5× higher than standard C&D waste. Combined with intensive PPE requirements and specialized treatment chemicals, biohazard cleanup generates significant emissions from a relatively small affected area.

    Job Classification

    | Job Type | Primary Waste Classification | Dominant Emission Category | Typical Range per Scene |
    | --- | --- | --- | --- |
    | Unattended death / decomposition | Regulated medical waste + affected porous materials | Cat 5 (biohazard disposal) + Cat 12 (demolished materials) | 0.8–3.0 tCO2e |
    | Trauma scene (blood/bodily fluids, limited area) | Regulated medical waste, minimal structure affected | Cat 5 dominant | 0.3–1.2 tCO2e |
    | Crime scene with structural damage | Regulated medical waste + C&D debris | Cat 5 + Cat 12 | 1.0–4.0 tCO2e |
    | Sharps/drug paraphernalia scenes | Sharps waste (regulated) + affected surfaces | Cat 5 (sharps disposal) dominant | 0.4–1.5 tCO2e |
    | Hoarding remediation with biohazard component | Mixed solid waste + biohazard materials | Cat 4 (volume transport) + Cat 5 | 1.5–6.0 tCO2e |

    Category 4: Transportation

    | Vehicle Type | kg CO2e per mile | Use |
    | --- | --- | --- |
    | Biohazard response vehicle (dedicated, sealed) | 0.503–1.084 | Crew and initial materials transport (van or truck) |
    | Medical waste hauler (regulated) | 2.80 | Regulated biohazardous waste to licensed medical waste facility |
    | Dump truck (standard C&D, non-biohazard portion) | 2.25 loaded | Non-regulated demolition debris for hoarding jobs |

    Medical waste facility distance: Licensed medical waste treatment facilities (autoclaves, incinerators) are less common than standard landfills. Average distance from job site to licensed biohazard disposal facility is 40–80 miles in most US markets. Use actual manifest distances; apply 60 miles as default where manifests are unavailable.

    Category 1: Materials

    | Material | Unit | kg CO2e per unit | Notes |
    |---|---|---|---|
    | Hospital-grade disinfectant (quaternary ammonium, EPA-registered) | Liter | 2.8 | EPA EEIO — chemical manufacturing |
    | Enzyme treatment / biological digester | Liter | 1.6 | EPA EEIO — specialty chemical |
    | Ozone generator treatment (odor/pathogen) | Day-unit | 0.35 | Equipment embodied carbon amortized |
    | Hydroxyl generator treatment | Day-unit | 0.40 | Equipment embodied carbon amortized |
    | Level B PPE full kit (Tyvek + face shield + supplied air) | Kit | 4.2 | Required for decomposition / unattended death |
    | Level C PPE kit (Tyvek + half-face P100/OV) | Kit | 1.8 | Trauma scenes with active biohazard |
    | 6-mil poly sheeting (containment + floor protection) | m² | 0.55 | EPA EEIO — plastics manufacturing |
    | Biohazard bags (red, 33-gallon) | Each | 0.65 | Medical-grade polyethylene, red-colored |
    | Sharps disposal container (1-gallon) | Each | 0.35 | EPA EEIO — plastics/medical equipment |

    Category 5: Waste — Biohazard Disposal

    | Waste Type | Disposal Method | tCO2e per ton | Source |
    |---|---|---|---|
    | Regulated medical waste (soft tissue, bodily fluids, porous materials) | Autoclave + landfill | 0.55 | EPA medical waste incineration / autoclave factors |
    | Regulated medical waste — high pathogen risk | High-temperature incineration | 0.85 | EPA hazardous waste incineration factors |
    | Sharps waste (needles, glass) | Sharps autoclave or incineration | 0.65 | EPA medical waste — sharps category |
    | Contaminated porous building materials (drywall, carpet, subfloor) | Licensed medical waste landfill or standard landfill (jurisdiction-dependent) | 0.38–0.55 | Apply higher factor when facility requires medical waste classification |
    | Non-biohazard C&D debris (hoarding, structural) | Standard landfill | 0.16 | EPA WARM v16 — standard C&D |
    | Spent PPE (biohazard-contaminated) | Licensed medical waste facility | 0.55 | Same as regulated medical waste stream |

    Jurisdiction note on porous material classification: Whether biologically contaminated porous building materials from biohazard scenes must be disposed of as regulated medical waste (vs. standard C&D waste) varies by state and local regulation. Check with your licensed waste hauler for the applicable classification in your jurisdiction. Apply the higher emission factor (0.55) in conservative calculations or when disposal classification is uncertain.

    Category 12: Demolished Building Materials

    Biohazard scenes frequently require demolition of affected porous materials — flooring, subfloor, drywall — that absorbed biological contamination and cannot be cleaned to restoration standards. When these materials are classified as regulated medical waste at removal, their disposal emissions are captured in Category 5 (same as ACM materials in hazmat abatement). When they are classified as standard C&D waste at the jurisdiction level, use Category 12 EPA WARM factors (same as water damage demolition materials).

    Apply Category 12 factors to demolished materials only when they flow to standard C&D landfill rather than medical waste disposal. When in doubt, apply medical waste disposal factors and capture in Category 5.

    Worked Example: Unattended Death, Single Apartment Unit

    Job profile: Unattended death in a 650 sq ft apartment, discovered after 10 days. Affected area: 400 sq ft (bedroom and hallway). Scope: removal of all porous materials in affected area (carpet, subfloor, drywall to 24″ height), disinfection of all surfaces, odor treatment. Duration: 2 days. Crew: 2 technicians in Level B PPE. Facility: 15 miles from job site. Licensed medical waste facility: 58 miles from job site.

    Category 4 — Transportation

    Crew vehicle: 1 van × 30 mi RT × 3 trips = 90 mi × 0.503 = 45 kg
    Medical waste hauler: 1 × 116 mi RT × 2.80 = 325 kg
    Category 4 total: 370 kg = 0.37 tCO2e

    Category 1 — Materials

    Hospital-grade disinfectant (400 sq ft × 0.025 L/sq ft × 2 applications): 20 L × 2.8 = 56 kg
    Enzyme treatment: 8 L × 1.6 = 13 kg
    Ozone generator: 2 day-units × 0.35 = 0.7 kg (rounded to 1 kg)
    Level B PPE (2 workers × 2 days × 3 exits/day = 12 kit replacements): 12 × 4.2 = 50 kg
    Biohazard bags (20 bags): 20 × 0.65 = 13 kg
    Poly sheeting (floor protection + containment): 80 m² × 0.55 = 44 kg
    Category 1 total: 177 kg = 0.18 tCO2e

    Category 5 — Waste

    Regulated medical waste (soft materials, porous materials, PPE): estimated 0.6 tons × 0.55 = 0.33 tCO2e
    Non-hazard debris (drywall, not in medical waste stream): 0.25 tons × 0.16 = 0.04 tCO2e
    Category 5 total: 0.37 tCO2e

    Category 12 — Demolished Materials

    Carpet/pad (400 sq ft): 0.55 tons × 0.33 = 0.18 tCO2e
    Subfloor (400 sq ft plywood): 0.40 tons × −0.05 = −0.02 tCO2e (WARM's landfill factor for wood products is net-negative due to landfill carbon storage)
    Category 12 total: 0.16 tCO2e

    | Category | tCO2e |
    |---|---|
    | Category 4 — Transportation | 0.37 |
    | Category 1 — Materials | 0.18 |
    | Category 5 — Waste (regulated medical) | 0.37 |
    | Category 12 — Demolished materials | 0.16 |
    | Total | 1.08 |
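
    The arithmetic above can be reproduced in a few lines. The following sketch uses the factor tables from this guide; the variable names and layout are ours, not an RCP worksheet.

    ```python
    # Reproducing the worked example above from the guide's factor tables.
    cat4_kg = 90 * 0.503 + 116 * 2.80               # crew van + medical hauler
    cat1_kg = (20 * 2.8 + 8 * 1.6 + 2 * 0.35        # disinfectant, enzyme, ozone
               + 12 * 4.2 + 20 * 0.65 + 80 * 0.55)  # PPE kits, bags, poly sheeting
    cat5_t = 0.6 * 0.55 + 0.25 * 0.16               # regulated medical + C&D
    cat12_t = 0.55 * 0.33 + 0.40 * -0.05            # carpet/pad + plywood subfloor

    total_t = cat4_kg / 1000 + cat1_kg / 1000 + cat5_t + cat12_t
    print(f"{cat4_kg/1000:.2f} + {cat1_kg/1000:.2f} + {cat5_t:.2f} + {cat12_t:.2f} "
          f"= {total_t:.2f} tCO2e")  # 0.37 + 0.18 + 0.37 + 0.16 = 1.08 tCO2e
    ```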

    Is biohazard cleanup typically covered by commercial property insurance?

    Yes — biohazard cleanup at commercial properties is typically covered under property insurance. The emissions data from an RCP biohazard calculation should be provided to the commercial property manager for their Scope 3 inventory in the same format as other restoration job types.

    How do you handle hoarding remediation with both biohazard and standard C&D waste streams?

    Split the waste into its classified streams: regulated biohazardous material (apply medical waste disposal factors), standard C&D debris (apply WARM factors), and any hazardous materials encountered (apply hazmat factors). Document each stream separately in the Category 5 breakdown. The mixed nature of hoarding jobs makes them the most complex biohazard calculation scenario.

    Does the RCP apply to crime scenes where law enforcement is involved?

    Yes. The RCP calculation is based on the remediation contractor’s scope of work regardless of the cause of the biohazard condition. The emissions calculation is performed after the scene is released to the contractor and is based on the actual materials used, waste generated, and transportation involved in the cleanup — independent of the legal context of the event.


    Disposal Method Differentiation: Autoclave vs. Incineration Creates a 5–10× Emission Difference

    The biohazard guide currently uses a single disposal factor of 0.88 tCO₂e per short ton for all regulated medical/biohazardous waste. This figure is methodologically sound as a default, but the actual emission factor depends entirely on which treatment pathway your waste hauler uses. The difference is not marginal — it is 5 to 10 times.

    The following lifecycle emission data comes from a peer-reviewed GHG Comparison Assessment conducted by Carbon Action Consultants and commissioned by Envetec (2022; reviewed by Dr. Tahsin Choudhury), covering 72 metric tonnes of biohazardous waste across treatment pathways:

    | Treatment Pathway | tCO₂e per metric tonne | vs. Direct Incineration |
    |---|---|---|
    | Onsite disinfection and shredding (where permitted) | 0.057 | 93% lower |
    | Autoclave → standard landfill (no incineration) | 0.46 | 44% lower |
    | Direct high-temperature incineration → landfill | 0.82 | Baseline |
    | Autoclave → incineration → landfill (dual treatment) | 0.90 | +10% above direct incineration |

    Source: Envetec GHG Comparison Assessment, 2022. Validation: a UK NHS hospital waste study (Journal of Cleaner Production, 2020) measured high-temperature incineration at 1,074 kg CO₂e per tonne (0.97 tCO₂e/short ton), broadly consistent with the incineration-pathway figures above.

    The current RCP default of 0.88 tCO₂e/short ton (equivalent to approximately 0.97 tCO₂e/metric tonne) reflects the dual-treatment or incineration-dominant pathway. It is a conservative and defensible default. However, for contractors whose waste haulers use autoclave-only treatment, the actual figure may be nearly half the default.

    How to document: Ask your regulated waste hauler which treatment method they use. Record the answer in the data_quality.notes field of your RCP Job Carbon Report. If the hauler uses autoclave-only, apply 0.46 tCO₂e/metric tonne (0.42 tCO₂e/short ton) and flag it as hauler-confirmed primary data. If unknown, apply the default 0.88 tCO₂e/short ton and flag as proxy.
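
    A sketch of that selection logic, using the factors quoted above (0.42 tCO₂e/short ton for autoclave-only, 0.88 as the conservative default); the function name and return structure are ours, not a formal RCP API.

    ```python
    # Pathway-dependent factor selection with the data-quality flag described above.
    def medical_waste_factor(hauler_pathway):
        if hauler_pathway == "autoclave_only":
            # hauler-confirmed primary data: 0.46 tCO2e/tonne = 0.42 tCO2e/short ton
            return 0.42, "hauler-confirmed primary data"
        # treatment pathway unknown: fall back to the conservative RCP default
        return 0.88, "proxy (treatment pathway unknown)"

    factor, quality_flag = medical_waste_factor("autoclave_only")
    print(0.6 * factor, quality_flag)  # 0.252 tCO2e for 0.6 short tons of waste
    ```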


    Autoclave Energy Intensity

    For contractors or facilities operating onsite autoclave treatment, energy intensity data is available from peer-reviewed hospital operations research. A study indexed in PubMed (PMID 27075773), tracking 304 days of operation and 2,173 autoclave cycles, measured:

    • Energy intensity: 1.9 kWh per kg of waste sterilized
    • Water consumption: 58 liters per kg of waste

    At the US national average grid emission factor (0.3499 kg CO₂e/kWh), autoclave treatment of one short ton (907 kg) of biohazardous waste consumes approximately 1,723 kWh of electricity, generating roughly 603 kg CO₂e from energy alone. That is the same order of magnitude as the peer-reviewed lifecycle figure of 0.46 tCO₂e/tonne; the exact comparison depends on the grid mix and heat source assumed in the lifecycle study.
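
    The arithmetic is easy to verify. A minimal sketch, using the study's intensity figure and the grid factor quoted above; the function name is ours.

    ```python
    # Verifying the autoclave energy arithmetic above.
    KWH_PER_KG = 1.9                 # electricity per kg of waste sterilized (PMID 27075773)
    GRID_KG_CO2E_PER_KWH = 0.3499    # national average grid factor used above

    def autoclave_energy_kg_co2e(waste_kg):
        return waste_kg * KWH_PER_KG * GRID_KG_CO2E_PER_KWH

    print(autoclave_energy_kg_co2e(907))  # one short ton -> ~603 kg CO2e
    ```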


    Odor Neutralization Chemistry: What Has Emission Data and What Doesn’t

    Trauma and biohazard cleanup frequently involves odor neutralization as a final step after biological contamination is removed. The emission factors for these chemicals are poorly documented.

    Peracetic acid (PAA) is the best-documented odor treatment and disinfectant in restoration applications. The Envetec lifecycle study assigns 0.61 kg CO₂e per kg of PAA active ingredient, making it one of the lower-footprint chemical treatments available. PAA breaks down rapidly to acetic acid and water — no persistent residue, no downstream emission concerns.

    Chlorine dioxide (ClO₂) is the dominant chemistry for trauma scene odor elimination. Products using sodium chlorite activated with citric acid (Biocide Systems Room Shocker, ProKure1) rely on self-generating chemistry that requires no electricity for treatment delivery. No published production emission factor exists for ClO₂ generator products specifically, so the RCP treats ClO₂ odor treatment as a data gap: apply the EPA EEIO chemical manufacturing proxy (2.8 kg CO₂e/kg of active chemical) and flag it as estimated.

    Enzyme-based neutralizers similarly lack published LCA data. Treat as a data gap and apply the EEIO proxy.


    ATP Testing: Emissions-Negligible but Methodologically Required

    ATP bioluminescence testing (ANSI/IICRC S540 requires a minimum of two rounds per scene: pre-remediation and clearance) is a consumables-based emission source. Hygiena UltraSnap ATP swabs weigh approximately 5–10 g each (polypropylene housing, pre-moistened fiber tip, luciferin/luciferase reagent). Estimated carbon footprint: 20–50 g CO₂e per swab using generic small medical plastic device lifecycle data. A typical trauma scene requiring 10–30 swabs generates 0.2–1.5 kg CO₂e from ATP testing.

    This is below 0.1% of total job emissions on all but the smallest trauma scene jobs. ATP testing is documented here for methodological completeness — include it in Category 1 if your job tracking captures swab consumption, but it is acceptable to omit and note the exclusion as immaterial in the data_quality section.
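
    A sketch of that materiality decision, assuming a 35 g per-swab midpoint from the 20–50 g range above and the worked example's 1.08 tCO₂e job total; the names and the threshold check are our illustration.

    ```python
    # The materiality decision described above: include ATP swabs in Category 1
    # when tracked, otherwise note the exclusion in data_quality.
    def atp_emissions_kg(swab_count, kg_per_swab=0.035):
        return swab_count * kg_per_swab

    job_total_kg = 1080               # e.g., the worked example's 1.08 tCO2e
    atp_kg = atp_emissions_kg(20)
    if atp_kg / job_total_kg < 0.001:  # under 0.1% of the job total
        print(f"ATP swabs: {atp_kg:.1f} kg CO2e; omit and note exclusion as immaterial")
    else:
        print(f"ATP swabs: {atp_kg:.1f} kg CO2e; include in Category 1")
    ```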


    Sources and References — Biohazard Technical Additions

    • Envetec / Carbon Action Consultants. GHG Comparison Assessment for Biohazardous Waste Treatment Pathways. 2022. envetec.com
    • PubMed PMID 27075773. “Steam sterilisation’s energy and water footprint.” Journal of Hospital Infection. 2016.
    • Springer Environmental Chemistry Letters. “Impact of waste of COVID-19 protective equipment on the environment.” 2022.
    • Top Glove. Life Cycle Assessment Results for Nitrile Gloves. SATRA-verified. 2024.
    • ANSI/IICRC S540. Standard for Professional Biohazard Remediation. Current edition.

  • The ESG Case for the Restoration Golf League: A Network That Sets Standards

    The Restoration Golf League was designed as a B2B networking vehicle — a way for independent restoration contractors to build relationships with commercial property managers, insurance adjusters, and facility directors in an environment that creates genuine connection rather than transactional vendor-client dynamics.

    The ESG conversation creates an opportunity to extend what the RGL does — not by adding another agenda item to golf networking events, but by positioning the RGL network as the restoration industry’s first ESG-capable contractor coalition. A group of independent operators who share a commitment to structured emissions reporting and who collectively represent a preferred vendor base for commercial clients with Scope 3 obligations.

    What a Network Does That Individuals Can’t

    An individual restoration contractor who adopts RCP is a data point. A network of 50 RCP-certified restoration contractors across multiple markets is a standard. The distinction matters to commercial property managers who operate nationally — they need consistent data from vendor bases across multiple regions, not ad-hoc reporting from individual contractors who each implement differently.

    When a national REIT’s sustainability team is looking for RCP-compliant restoration vendors in six markets simultaneously, a network of contractors who share a common standard, a common report format, and a common data delivery commitment is a procurement solution, not a patchwork of individual vendor relationships to manage. The RGL becomes a vendor category rather than a collection of individual vendors.

    The RGL ESG Proposition to Commercial Clients

    Straightforward: every RGL member contractor provides RCP-format per-job carbon data. When you hire an RGL contractor, you receive structured Scope 3 emissions data for your GRESB, CDP, and SB 253 disclosures. You don't need to evaluate each contractor's ESG capability individually; membership in the RCP-adopting network is the credential. This is a market-facing advantage the RGL can offer today.

    How to Advance RCP Through the RGL Network

    Present the RCP framework at the next RGL event. Invite member contractors to commit to a 60-day RCP implementation pilot. Collect the five pilot jobs required for self-certification from willing members. Then publish the pilot results — aggregate emissions data from the pilot cohort — as the first empirical data set for the restoration industry’s Scope 3 baseline.

    That aggregate baseline — even from a small pilot cohort of 10–20 contractors — would be the first published data on restoration industry Scope 3 emissions. It would immediately become the reference data cited by property managers, ESG consultants, and eventually trade associations trying to understand what restoration work actually emits. First-mover advantage in publishing that data is significant and durable.

    The Longer View

    Commercial real estate’s appetite for ESG-credentialed vendor networks is growing. As SB 253 deadlines approach and GRESB supply chain requirements tighten, property managers will actively seek vendor networks that reduce their ESG data collection burden. A restoration contractor network offering consistent RCP reporting across multiple markets is exactly what large commercial property management companies will pay a premium for — in the form of preferred vendor status, longer contract terms, and the relationship stability that comes from being a supply chain ESG partner rather than a transactional service vendor.

    The RGL’s golf format builds the relationships. RCP adoption builds the credential. Together, they create a network that commercial clients can point to when their investors and auditors ask about supply chain ESG engagement in property restoration.

    Does RGL membership automatically confer RCP certification?

    Not currently. RCP certification requires completing the self-certification checklist, which is separate from RGL membership. The goal is for RCP certification to become a condition of active RGL membership in markets where commercial real estate is a significant client category.

    How can a commercial property manager find RGL member contractors in their market?

    Contact the Restoration Golf League directly. As the network grows and ESG positioning develops, a public directory of RCP-certified RGL members by market will be the most efficient way for commercial clients to identify ESG-capable restoration vendors in their service areas.

    Can restoration contractors outside the RGL adopt RCP?

    Absolutely. RCP is an open standard available to any restoration contractor regardless of RGL membership. The RGL pilot cohort is one pathway to RCP adoption — not a prerequisite for using the framework.


  • RCP and KnowHow: How the Internal and External Knowledge Stacks Work Together

    The restoration industry is developing two parallel knowledge infrastructure plays, and they are more complementary than they might appear at first.

    KnowHow — the AI-powered operational knowledge platform — solves the internal problem: capturing what your best people know, making it accessible to every team member, and ensuring institutional knowledge doesn’t walk out the door when someone leaves. It makes your operational playbook consistent, scalable, and resilient to turnover.

    The Restoration Carbon Protocol solves the external problem: structuring your operational data — specifically the emissions data generated by your work — in a format that commercial clients can use in their ESG disclosures. It makes your environmental footprint visible, consistent, and credible to institutional clients who need it for their own reporting obligations.

    Where the Two Stacks Connect

    The connection point is job documentation. KnowHow helps your crew follow consistent protocols, which means the data generated during a job (materials used, waste generated, work performed) is captured more consistently and reliably. That consistency directly benefits RCP data quality. When crews follow a KnowHow-documented protocol for Category 3 water damage mitigation (an IICRC water classification, not a GHG Protocol category), the resulting data consistency makes the RCP calculation for that job more reliable.

    In the other direction: RCP creates external accountability for the quality of your internal processes. When you’re producing per-job carbon reports for commercial clients that may be reviewed by ESG auditors, the incentive to maintain rigorous job documentation increases. External reporting requirements are one of the most effective drivers of internal data discipline.

    The Two-Layer Architecture

    Layer 1 — Internal (KnowHow): Operational SOPs, job protocols, training materials, quality standards. Purpose: consistent execution, scalable training, knowledge retention. Audience: your team. Knowledge stays inside your organization.

    Layer 2 — External (RCP): Per-job carbon data, client-facing reports, ESG vendor profiles, methodology documentation. Purpose: commercial client ESG compliance, preferred vendor status, market differentiation. Audience: commercial clients, their auditors, government contracting officers. Knowledge flows outward in structured, client-usable form.

    Neither layer replaces the other. A contractor with excellent internal processes (Layer 1) but no external reporting capability (Layer 2) has a good operation that commercial clients can’t verify. A contractor with RCP reporting capability (Layer 2) but inconsistent internal processes (Layer 1) has credibility problems — the external reports may not reflect consistent underlying reality. The competitive position that’s hard to replicate is both layers, built deliberately, operating together.

    Does KnowHow integration with RCP require a technical connection between the platforms?

    Not currently. The integration is conceptual — KnowHow documents the protocols, crews follow them, and resulting data consistency benefits RCP calculations. Future integration could include RCP data capture fields within KnowHow’s job documentation workflows.

    Which should a contractor implement first?

    Either order works. If internal processes are inconsistent, KnowHow first — consistent processes make RCP data more reliable. If processes are consistent but no external reporting capability exists, RCP first — the commercial client relationship benefit is more immediately visible. Both are worth pursuing regardless of order.

    Are there other knowledge platforms comparable to KnowHow?

    General knowledge management platforms (Notion, Confluence, Process Street) can serve the same internal documentation purpose with more configuration effort. The RCP is compatible with any internal knowledge management approach — it’s agnostic to which platform captures and delivers your operational SOPs.


  • How to Become an RCP-Certified Restoration Contractor

    The RCP self-certification program provides a structured pathway for restoration contractors to demonstrate that they have implemented the framework, moving from awareness to a verifiable credential that commercial clients can rely on. Self-certification is the appropriate model for an early-stage standard: it is honest about what the credential represents (contractor attestation, not third-party audit) while still setting a meaningful bar that not every contractor will clear.

    The RCP Self-Certification Checklist

    Part 1: Knowledge and Training

    • Company leadership has read and understands the RCP v1.0 framework document
    • At least one employee designated as RCP implementation lead has completed the RCP calculation methodology training
    • The implementation lead can explain the four primary GHG Protocol Scope 3 categories applicable to restoration work and why each is relevant

    Part 2: Data Capture Implementation

    • The company’s job close-out process includes capture of all 12 RCP data points (or documented proxy methods for any that cannot be directly captured)
    • The data capture process has been applied to at least 5 commercial restoration jobs
    • Job records from those 5 jobs are retained and available for calculation purposes

    Part 3: Calculation Capability

    • The company can produce a complete RCP per-job carbon report for each of the 5 pilot jobs, covering all four primary Scope 3 categories
    • The calculation uses RCP-specified emission factors from EPA or DEFRA sources
    • Each report includes a data quality section noting any points where estimation was used

    Part 4: Client Delivery

    • At least one per-job carbon report has been delivered to a commercial client
    • The company has an ESG vendor profile including the five RCP vendor profile components
    • The company’s standard commercial contract can include an RCP data delivery commitment

    The Certification Process

    Complete the checklist, submit it along with five sample redacted per-job carbon reports, and attest that the information is accurate. The RCP program reviews submissions for completeness and consistency — not to audit the underlying data, but to verify that reports are structured correctly and the methodology is applied as specified. Contractors who complete the review process receive the RCP Certified designation and may use the RCP Certified badge in commercial materials and vendor profiles.

    What RCP Certification Signals

    RCP Certified tells a property manager’s ESG team three things: the contractor understands Scope 3 methodology (training completed), they have a functioning data capture system (reports produced for five jobs), and they are committed to ongoing delivery (client delivery process established). For ESG-aware preferred vendor programs, RCP certification reduces due diligence burden — property managers can require it as a qualification criterion and rely on it to indicate capability.

    How long does the certification process take?

    For a contractor starting from scratch, implementing data capture, completing five jobs with RCP tracking, producing reports, and completing the submission typically takes 60–90 days. Contractors who already track detailed job data can move faster.

    Does certification need to be renewed?

    RCP certification will be renewable annually, requiring brief attestation that the contractor is using the current RCP version and has maintained their data capture and delivery process. Annual renewal is a light lift — its purpose is to maintain the quality signal of the credential over time.

    Is there a cost for RCP certification?

    The initial self-certification program will have a nominal administrative fee to cover program management. The framework documentation, training materials, and calculation worksheets remain free regardless of certification status.


  • The Restoration Carbon Protocol FAQ: Every Question We’ve Heard

    Since publishing the Restoration Carbon Protocol framework, we’ve received questions from restoration contractors, commercial property managers, ESG consultants, and insurance professionals. This FAQ consolidates the most common questions and our current best answers.

    Questions from Restoration Contractors

    Does RCP apply to residential restoration work?

    The RCP is designed for commercial restoration contexts — specifically for the Scope 3 reporting needs of commercial property managers. However, the calculation methodology applies to any restoration job regardless of property type. The reporting value is primarily realized in commercial relationships where property managers have ESG disclosure obligations.

    How long does it take to produce an RCP per-job carbon report?

    For a project manager who has captured the 12 RCP data points during the job, producing the per-job carbon report at close-out typically takes 30–60 minutes. The calculation is straightforward — multiplication of activity data by emission factors, category by category. The time investment drops significantly as the process becomes routine.

    What if I don’t have all 12 data points for a completed job?

    Use RCP’s proxy estimation methodology for missing data points. The RCP provides standard consumption rates by job type and damage class that substitute for actual measured data when records are unavailable. Document which data points were estimated and the basis. A documented estimate is far more useful to your client than no data.

    Is there a fee to use the RCP?

    No. The Restoration Carbon Protocol is published open-access. The methodology, calculation worksheets, emission factor tables, and per-job carbon report template are all freely available. The goal is adoption, not revenue from the standard itself.

    Do I need to disclose my company’s own Scope 1 and 2 emissions to use RCP?

    No. RCP produces Scope 3 data for your clients — data about emissions generated by your work on their behalf. This is distinct from your own company’s Scope 1 and 2 emissions. You don’t need your own emissions disclosure program to provide per-job client data under RCP.

    Questions from Commercial Property Managers

    How do I request RCP-format data from my current restoration vendors?

    Start with a conversation. Contact your primary restoration vendors and ask if they’re familiar with the Restoration Carbon Protocol and whether they can provide per-job carbon reports. Share the RCP framework documentation with vendors not yet familiar. For new contracts and renewals, add a sustainability data rider specifying RCP-format delivery within 30–60 days of job completion.

    What do I do with RCP data once I receive it?

    Incorporate the tCO2e figures into your Scope 3 inventory by GHG Protocol category. Category 4 and 5 data goes into your Scope 3 Categories 4 and 5 respectively. Category 1 materials data goes into your Scope 3 Category 1. For GRESB, use the RCP reports as evidence of supply chain engagement in your Management section response. For CDP and SB 253, the data feeds directly into your Scope 3 category disclosures.
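
    A sketch of that category mapping, with the fold-in logic as our illustration. The RCP-to-Scope-3 correspondences follow the paragraph above; the key names are ours.

    ```python
    # Folding RCP per-job figures into a property manager's Scope 3 inventory.
    RCP_TO_SCOPE3 = {
        "rcp_cat_1_materials": "Scope 3 Category 1 (purchased goods and services)",
        "rcp_cat_4_transport": "Scope 3 Category 4 (upstream transportation)",
        "rcp_cat_5_waste": "Scope 3 Category 5 (waste generated in operations)",
    }

    def fold_into_inventory(inventory, rcp_report_tco2e):
        for rcp_key, tco2e in rcp_report_tco2e.items():
            line = RCP_TO_SCOPE3[rcp_key]
            inventory[line] = inventory.get(line, 0.0) + tco2e
        return inventory

    print(fold_into_inventory({}, {"rcp_cat_4_transport": 0.37,
                                   "rcp_cat_5_waste": 0.37,
                                   "rcp_cat_1_materials": 0.18}))
    ```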

    Is RCP data acceptable to third-party ESG auditors?

    RCP data is calculated using GHG Protocol Corporate Value Chain Standard methodology and EPA/DEFRA emission factors — both accepted by major third-party ESG assurance providers. The RCP does not itself provide assurance; it provides the underlying primary data that the auditor assesses. RCP-format data with clear methodology documentation and data quality notes generally satisfies auditor data quality requirements better than spend-based estimates.

    Questions from ESG Consultants

    How does RCP handle the uncertainty inherent in emissions calculations?

    The RCP acknowledges uncertainty in two ways: data quality tiers (primary measured data, primary estimated data with documented methods, proxy-based estimation) and a mandatory data quality notation section in every report. This transparency is consistent with GHG Protocol guidance on Scope 3 data quality and is what auditors expect to see.

    Will the RCP be updated as emission factor databases update?

    Yes. The RCP will publish annual updates to emission factor tables aligned with EPA and DEFRA database release cycles. Version numbers are included in all reports, allowing auditors to identify which emission factor vintage was applied.

    Can RCP coexist with other contractor ESG frameworks?

    Yes. RCP is designed to be complementary to broader contractor ESG programs. A restoration contractor participating in EcoVadis, ISO 14001, or other environmental management frameworks can layer RCP per-job carbon reporting on top — RCP addresses the specific per-job Scope 3 data delivery need that broader frameworks don’t typically address at the job level.


    Carbon Avoidance Questions

    What is the difference between actual emissions and avoided emissions under RCP?

    Actual emissions are what goes into the Scope 3 inventory — the quantified carbon from transportation, materials, waste disposal, and demolished building components on a specific job. Avoided emissions are supplementary disclosures documenting what didn't happen because of a deliberate operational choice: a wall assembly dried in place instead of demolished, debris sent to a recycler instead of a landfill, an electric monitoring van used instead of a diesel truck. Avoided emissions do not reduce the actual emissions total. They are reported alongside it as evidence of reduction activity. The GHG Protocol treats avoided emissions as supplementary information outside the inventory boundary, and RCP follows this treatment.

    Can my client subtract avoided emissions from their Scope 3 total?

    No. Avoided emissions are evidence of reduction progress — they belong in the sustainability narrative and supplier engagement documentation, not in the inventory calculation. A client who subtracts avoided emissions from their Scope 3 total would be misrepresenting their inventory under the GHG Protocol. The correct use is: report the actual Scope 3 figure, then separately document the avoided emissions as evidence that the contractor is actively reducing their supply chain impact.
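
    A sketch of the correct treatment. The avoided_emissions object name appears in the RCP text; the surrounding field names and validation logic are our illustration, and the 0.25 tCO2e avoided figure is hypothetical.

    ```python
    # Avoided emissions travel with the report but never reduce the inventory figure.
    report = {
        "scope3_total_tco2e": 1.08,        # the figure that enters the client inventory
        "avoided_emissions": {             # supplementary, outside the inventory boundary
            "total_avoided_tco2e": 0.25,   # hypothetical figure
            "basis": "wall assembly dried in place instead of demolished",
        },
    }

    def inventory_figure(r):
        # Correct use: report the actual total; never subtract avoided emissions.
        return r["scope3_total_tco2e"]

    assert inventory_figure(report) == 1.08  # not 1.08 - 0.25
    ```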

    Are avoided emissions the same as carbon offsets?

    No. Offsets are purchased credits representing reductions achieved by a third party elsewhere. Avoided emissions are reductions achieved on the specific job being reported, by the contractor doing the work. They are not tradeable, not purchasable, and cannot be used by one party to compensate for another party's emissions. A contractor cannot sell avoided emissions as credits without going through a formal carbon credit verification process under a recognized standard like Verra or Gold Standard, which is a separate and complex undertaking outside the RCP framework.

    What documentation is required for an avoided emissions claim?

    The same standard as actual emissions: a source document that a third-party verifier can examine. Dry-in-place avoidance requires a psychrometric log confirming the dry standard was achieved and documentation that no demolition was performed. Waste diversion avoidance requires a weight receipt from the recycling facility naming the material type and weight. Equipment substitution avoidance requires the GPS trip log or equipment runtime record showing the actual equipment used. An avoided emissions claim without source documentation is not auditable and should not be delivered to clients facing CSRD or SBTi verification requirements.

    When will avoided emissions be formally part of the RCP schema?

    Avoided emissions are RCP guidance in v1.0 — the methodology and JSON structure are documented but not yet a formal required schema element. The avoided_emissions object is targeted for formalization in RCP v1.1, along with a standardized counterfactual table and a dry-in-place documentation protocol. Contractors generating avoided emissions data now can use the structure described in the RCP Carbon Avoidance Framework article — records generated under this guidance will be compatible with the v1.1 formal schema.