Category: Agency Playbook

How we build, scale, and run a digital marketing agency. Behind the scenes, systems, processes.

  • Notion Second Brain for Business Owners (Not Productivity Nerds)


    The Notion second brain content online is almost entirely written for individuals. Personal productivity. Getting things out of your head. PARA systems for your reading notes. That’s useful for a person. It’s not what a business owner running an operation actually needs.

    A business second brain is different in kind, not just in scale. It’s not a place to capture your ideas — it’s the institutional memory of an organization. The difference matters for how you build it, what goes in it, and how you use it.

    This is the business owner’s version: no productivity philosophy, no personal capture system, just the architecture that works when the stakes are operational rather than personal.

    What is a Notion second brain for business? A business second brain in Notion is an externalized operational memory system — a structured workspace where the knowledge, decisions, procedures, and context that run a business live outside any individual’s head. Unlike a personal second brain focused on personal knowledge management, a business second brain is organized around operational function: what we do, how we do it, who we work with, and what we’ve decided.

    What a Business Second Brain Actually Stores

    Personal second brains store ideas, highlights, book notes, and learning. Business second brains store different things — and getting clear on the distinction prevents building the wrong system.

    A business second brain stores: how things get done (SOPs and procedures), what has been decided and why (architecture decisions and rationale), who the relevant people are and where relationships stand (CRM and contact history), what is currently in motion (project and content pipelines), and what was learned that should change how things get done next time (session logs and after-action notes).

    It does not store every idea you had, every article you read, or every meeting note verbatim. Those belong in a personal system or in the trash. The business second brain is a curated operational record, not a capture-everything archive.

    The Organizational Principle: Function Over Topic

    Personal second brains are usually organized by topic — a page for marketing, a page for strategy, a page for each project. This makes sense for individual knowledge management. It breaks down for business operations because the same information belongs to multiple topics simultaneously.

    Business second brains are organized by function: what kind of operational question does this answer? The six functional categories that cover most small business operations are tasks, content, revenue, relationships, knowledge, and the daily dashboard. Everything in the business belongs to one of those six. If it doesn’t fit any of them, it probably doesn’t need to be documented.

    The Knowledge Layer Is the Differentiator

    Most business Notion setups have tasks and maybe a content tracker. The part that separates a true second brain from a fancy to-do list is the knowledge layer — the documented institutional memory that makes the operation less dependent on any one person’s recall.

    The knowledge layer contains three things. SOPs: how specific procedures get executed, written precisely enough that someone unfamiliar with the process could follow them correctly. Architecture decisions: why the operation is structured the way it is, including the alternatives that were considered and rejected. Client and project context: the accumulated understanding of each relationship and engagement that would otherwise live only in the account manager’s memory.

    This layer is the hardest to build because it requires translating tacit knowledge — things people just know from experience — into explicit documentation. It’s also the most valuable, because it’s the layer that survives personnel changes, makes onboarding tractable, and allows an AI system to operate on your behalf with real institutional context.

    Daily Use Is What Makes It a Brain

    A second brain that you consult once a week is a reference library. A second brain that you interact with every working day is an operating system. The difference is in how the daily rhythm is designed.

    The daily interaction with the business second brain should take ten to fifteen minutes in the morning: triage new items into the right databases, check what’s due or overdue, scan the content queue for anything publishing in the next 48 hours that needs attention. And five minutes at the end of the day: mark completed tasks done, push anything untouched to the next day, and log any significant decisions made.

    If those interactions feel like maintenance overhead, the system isn’t designed right. They should feel like reading the dashboard of a machine you trust — a quick orientation to current state before the day’s work begins.

    What Makes It AI-Ready

    The most significant thing a business second brain can do in 2026 that wasn’t possible five years ago is function as context infrastructure for an AI system. When Claude can read your SOPs, understand your active projects, and know what decisions have already been made, it operates as a genuine collaborator rather than a tool you have to re-brief every session.

    Making a Notion workspace AI-ready requires one addition beyond good organization: a consistent metadata structure on key pages that makes them machine-readable. A brief structured summary at the top of each important page — the page type, what it covers, the key constraints, and a resume instruction for continuing work in progress — gives an AI system the orientation it needs without requiring it to read thousands of words of context every session.

    This isn’t complicated to implement. It’s a JSON block at the top of each important page, written once and updated when the page changes. But it’s the difference between a Notion workspace that an AI can navigate and one that requires constant manual context transfer.

    Starting Without Starting Over

    Most business owners who want a Notion second brain already have some Notion — random pages, abandoned systems, half-built databases from previous attempts. The instinct is to start over from scratch. Usually the right move is not to.

    Start by identifying what already exists that’s actually useful: any SOPs that are current, any databases that are being used, any pages that people actually refer to. Move those into the right place in the six-database architecture. Then identify the most important gaps — usually the knowledge layer, which is often entirely missing — and fill those first.

    A usable business second brain built in two weeks by organizing what exists is worth more than a perfect system built from scratch over three months. The system’s value is in being used, not in being complete.

    Want this built for your business?

    We build Notion second brain systems for business owners — the full architecture, configured for your operation, with the knowledge layer that most setups skip.

    Tygart Media runs this system live across multiple business lines. We know what the build process looks like and what makes it stick.

    See what we build →

    Frequently Asked Questions

    Is a business second brain the same as a personal second brain?

    No. A personal second brain is organized around individual knowledge management — capturing ideas, notes, and learning for personal recall and creativity. A business second brain is organized around operational function — tasks, pipelines, relationships, procedures, and institutional knowledge. The tools can overlap (both often use Notion) but the architecture and the content are fundamentally different.

    How is a Notion business second brain different from a project management tool?

    Project management tools handle tasks and timelines. A business second brain handles those plus the knowledge layer — why decisions were made, how procedures work, what the history of a client relationship looks like, what was learned from past projects. The knowledge layer is what transforms a task tracker into something that actually captures and preserves institutional memory.

    Who should own the business second brain?

    In a small agency or solo operation, the owner maintains it. In a slightly larger team, the person closest to operations — often the account lead or operations manager — maintains the shared elements while individuals maintain their own client-specific documentation. The critical rule: someone must own it. A second brain maintained by everyone equally is maintained by no one.

    How long does it take to build a business second brain in Notion?

    A functional minimum viable second brain — the six databases set up, the most critical SOPs documented, the daily rhythm established — takes twenty to thirty hours of focused work. A mature system with comprehensive knowledge documentation takes three to six months of consistent operation. The minimum viable version provides immediate value; the mature version is what makes the operation genuinely resilient and AI-ready.

  • Notion Project Management for Small Agencies: The 6-Database Architecture


    The project management tools built for agencies assume you have a team. They’re priced per seat, designed for handoffs between people, and optimized for visibility across a group. If you’re running a small agency — two to five people, or solo with contractors — most of that architecture is overhead you don’t need and complexity that actively slows you down.

    Notion solves this differently. Instead of fitting your operation into a tool designed for someone else’s workflow, you build the system your operation actually requires. For a small agency managing multiple clients and business lines simultaneously, that system is a six-database architecture that keeps everything connected without the bloat of enterprise project management software.

    This is what that architecture looks like and why each piece exists.

    What is the 6-database Notion architecture? The 6-database architecture is a Notion workspace structure designed for small agencies and solo operators managing multiple clients or business lines. Six interconnected databases — tasks, content, revenue, CRM, knowledge, and a daily dashboard — cover every operational layer of the business, linked by shared properties so information flows between them without duplication.

    Why Six Databases and Not More

    The instinct when building a Notion system from scratch is to create a database for everything. A database for meetings. A database for ideas. A database for invoices. A database for each client. This is how Notion workspaces become unusable — too many places things could live, no clear answer for where they actually belong.

    Six databases is the right number for a small agency because it maps cleanly to the six operational questions you need to answer at any moment: What do I need to do? What content is in the pipeline? Where does revenue stand? Who are my contacts? What do I know? What matters today?

    Every piece of information in the operation belongs in one of those six categories. If something doesn’t fit, it either belongs in a sub-page of an existing database record or it doesn’t need to be documented at all.

    Database 1: Master Actions

    Every task across every client and business line lives in one database. Not separate task lists per client, not separate boards per project — one database, partitioned by entity tag.

    The key properties: Priority (P1 through P4), Status (Inbox, Next Up, In Progress, Blocked, Done), Entity (which business line or client), Due Date, and a relation field linking to whichever other database the task belongs to — a content piece, a deal, a contact.

    The priority logic is worth being explicit about. P1 means revenue or reputation suffers today if this doesn’t get done. P2 means this creates leverage — a system, an asset, something that compounds. P3 means operational work that needs to happen but doesn’t compound. P4 means it should be delegated or killed. If your P1 list has more than five items, something is mislabeled.

    The daily operating rule: never more than five tasks in Next Up at once. The system forces prioritization rather than enabling the comfortable illusion that everything is equally important.
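
    For operators who also script against the workspace, the Next Up cap is easy to audit outside the Notion UI. A minimal sketch using the official notion-client Python SDK; the token and database ID are placeholders, and it assumes Status is a select property named exactly as above.

    from notion_client import Client  # official SDK: pip install notion-client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    MASTER_ACTIONS_DB = "your-master-actions-database-id"  # placeholder

    # Everything currently marked Next Up (assumes Status is a select property).
    next_up = notion.databases.query(
        database_id=MASTER_ACTIONS_DB,
        filter={"property": "Status", "select": {"equals": "Next Up"}},
    )["results"]

    if len(next_up) > 5:
        print(f"Next Up holds {len(next_up)} tasks; the cap is five. Something is mislabeled.")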

    Database 2: Content Pipeline

    Every piece of content — articles, reports, audits, deliverables — moves through a defined status sequence before it reaches the client or goes live. Brief, Draft, Optimized, Review, Scheduled, Published.

    The Content Pipeline database tracks where every piece is in that sequence, which client it belongs to, the target keyword or topic, the target platform, word count, and publication date. The relation field links back to the Master Actions database so the task of writing a specific piece and the piece itself are connected.

    The hard rule: nothing publishes without a Content Pipeline record. This creates an audit trail that answers “what did we deliver in March?” in seconds rather than requiring a search through email threads or shared drives.
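
    Because everything starts with a pipeline record, creating one can also be scripted. A sketch of what that looks like through the Notion API; the database ID, the linked task's page ID, and the exact property names beyond those described above are placeholders and assumptions, not a prescribed schema.

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    CONTENT_PIPELINE_DB = "your-content-pipeline-database-id"  # placeholder

    # Property names here are illustrative; match them to your own pipeline database.
    notion.pages.create(
        parent={"database_id": CONTENT_PIPELINE_DB},
        properties={
            "Name": {"title": [{"text": {"content": "Client X: April pillar article"}}]},
            "Status": {"select": {"name": "Brief"}},
            "Entity": {"select": {"name": "Client X"}},
            "Target Keyword": {"rich_text": [{"text": {"content": "example keyword"}}]},
            "Publish Date": {"date": {"start": "2026-04-30"}},
            # Relation back to the Master Actions task that produces this piece.
            "Task": {"relation": [{"id": "master-actions-task-page-id"}]},
        },
    )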

    Database 3: Revenue Pipeline

    Active deals, proposals, and retainer renewals tracked through defined stages: Lead, Qualified, Proposal Sent, Active, Renewal, Closed.

    Each record carries the deal value, the stage, the last activity date, and a relation to the Master CRM for the associated contacts. The weekly review checks whether any deal has sat in the same stage for more than seven days without activity — that stagnation is a signal that requires a decision, not more waiting.

    The Revenue Pipeline doesn’t replace an accounting system. It tracks the relationship status and deal momentum, not invoices or payments. Those live in dedicated accounting software. The pipeline answers “where are we in the conversation?” not “what was billed?”
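
    The seven-day stagnation check is a single filtered query if you want to run it programmatically rather than by eye. A sketch, assuming the stage and last-activity fields are a select and a date property named "Stage" and "Last Activity" and the title property is "Name".

    from datetime import date, timedelta

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    REVENUE_PIPELINE_DB = "your-revenue-pipeline-database-id"  # placeholder

    cutoff = (date.today() - timedelta(days=7)).isoformat()

    # Deals whose last activity is more than seven days old and that are not closed.
    stale = notion.databases.query(
        database_id=REVENUE_PIPELINE_DB,
        filter={
            "and": [
                {"property": "Last Activity", "date": {"before": cutoff}},
                {"property": "Stage", "select": {"does_not_equal": "Closed"}},
            ]
        },
    )["results"]

    for deal in stale:
        title = deal["properties"]["Name"]["title"]
        print("Needs a decision:", title[0]["plain_text"] if title else deal["id"])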

    Database 4: Master CRM

    Every contact across every business line — clients, prospects, partners, vendors, network relationships — in one database, tagged by entity and relationship type.

    The CRM properties: Entity, Relationship Type (client, prospect, partner, vendor, network), Last Contact Date, and a relation field linking to any Revenue Pipeline deals associated with that contact.

    The weekly review includes a check for any contact who should have heard from you and didn’t. “Should have heard from you” is defined by relationship type — active clients warrant more frequent contact than cold prospects. The CRM makes that check systematic rather than dependent on memory.

    Database 5: Knowledge Lab

    SOPs, architecture decisions, reference documents, and session logs. This is the institutional knowledge layer — everything that would take significant time to reconstruct if the person who knows it left or forgot.

    Every Knowledge Lab record carries a Type (SOP, architecture decision, reference, session log), an Entity tag, a Status (evergreen, active, draft, deprecated), and a Last Verified date. The Last Verified date drives the maintenance cycle — any record older than 90 days gets flagged for a quick review.

    The Knowledge Lab is also the layer that makes the operation AI-readable. Every page carries a machine-readable metadata block at the top that allows Claude to orient itself to the content quickly during a live session. This is what transforms the Knowledge Lab from a static document library into an active operational asset.

    Database 6: Daily Dashboard (HQ)

    Not a database in the traditional sense — a command page that aggregates filtered views from the other five databases into a single daily interface. The goal is one page that answers “what needs attention right now?” without clicking through five separate databases.

    The HQ page contains: a filtered view of P1 and P2 tasks due today or overdue, the content queue for the next 48 hours, an inbox view of unprocessed items (tasks without a priority or status assigned), and a quick-access list of the most frequently used database views.

    The HQ page is where every working day starts. Everything else in the system is accessed from here or from the five source databases. It’s the navigation layer, not a database of its own.
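
    The filter logic behind the HQ task view can also be reproduced outside Notion, for example as a morning digest script. A sketch of the "P1 or P2, due today or overdue, not yet done" query, with the property types (selects and a date) assumed rather than prescribed.

    from datetime import date

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    MASTER_ACTIONS_DB = "your-master-actions-database-id"  # placeholder

    today = date.today().isoformat()

    # P1 or P2, due today or already overdue, and not yet done: the same logic
    # the HQ view encodes with Notion's UI filters.
    urgent = notion.databases.query(
        database_id=MASTER_ACTIONS_DB,
        filter={
            "and": [
                {
                    "or": [
                        {"property": "Priority", "select": {"equals": "P1"}},
                        {"property": "Priority", "select": {"equals": "P2"}},
                    ]
                },
                {"property": "Due Date", "date": {"on_or_before": today}},
                {"property": "Status", "select": {"does_not_equal": "Done"}},
            ]
        },
    )["results"]

    print(f"{len(urgent)} tasks need attention today")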

    How the Databases Connect

    The architecture only works as a system if the databases talk to each other. The connection mechanism in Notion is relation properties — fields that link a record in one database to a record in another.

    The key relations: every Content Pipeline record links to a Master Actions task. Every Revenue Pipeline deal links to a Master CRM contact. Every Master Actions task can link to a Content Pipeline record, a Revenue Pipeline deal, or a Knowledge Lab SOP. These relations mean you can navigate from a task to the content piece it produces, from a deal to the contact it involves, from a procedure to the tasks that execute it — without leaving Notion or losing the thread.

    Rollup properties extend this further: a Content Pipeline view can show the priority of the associated task without opening the task record. A Revenue Pipeline view can show the last contact date from the CRM without opening the contact. The data stays connected visually, not just structurally.
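
    Under the hood, a relation property is just a list of linked page IDs, which is what makes that navigation scriptable as well as clickable. A sketch that walks from a Content Pipeline record to its linked Master Actions task and reads the task's priority; the "Task" property name is an assumption and the record ID is a placeholder.

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token

    # A relation property holds a list of linked page IDs.
    piece = notion.pages.retrieve(page_id="content-pipeline-record-id")  # placeholder
    linked = piece["properties"]["Task"]["relation"]  # "Task" is an assumed property name

    if linked:
        task = notion.pages.retrieve(page_id=linked[0]["id"])
        priority = (task["properties"]["Priority"]["select"] or {}).get("name")
        print("Linked task priority:", priority)

    # A rollup property on the Content Pipeline database surfaces the same value
    # inside the database view itself, without the second retrieve.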

    What This Architecture Replaces

    For a small agency, the 6-database architecture typically replaces: a project management tool (the tasks and content pipeline handle this), a CRM (the Master CRM handles this), a shared drive for SOPs (the Knowledge Lab handles this), and a deal tracker (the Revenue Pipeline handles this). It does not replace accounting software, calendar tools, or communication platforms — those remain separate because they do things Notion doesn’t.

    The consolidation matters not just for cost but for operational clarity. When every operational question has one answer and one place to look, the cognitive overhead of running the business drops significantly. The system becomes something you trust rather than something you maintain out of obligation.

    Want this built for your agency?

    We build the 6-database Notion architecture for small agencies — configured for your specific operation, with the relations, views, and daily operating rhythm set up and documented.

    Tygart Media runs this system live. We know what the build process looks like and what breaks without the right architecture from the start.

    See what we build →

    Frequently Asked Questions

    How is the 6-database Notion architecture different from using ClickUp or Asana?

    ClickUp and Asana are built around tasks and projects as the primary organizational unit. The 6-database architecture treats the business itself as the organizational unit — tasks, content, revenue, relationships, and knowledge are all connected layers of one system rather than separate tools or modules. The tradeoff is that Notion requires more upfront architecture work, but produces a system that fits your specific operation rather than a generic project management workflow.

    Can one person realistically maintain six databases?

    Yes — that’s what the architecture is designed for. The daily maintenance is five to fifteen minutes of triage and status updates. The weekly review is thirty minutes. Most of the database updating happens naturally as work progresses: publishing a piece updates the Content Pipeline, closing a deal updates the Revenue Pipeline. The system is designed for a solo operator or a very small team, not a department.

    What Notion plan do you need for the 6-database architecture?

    The Plus plan at around ten dollars per month per member is sufficient for everything described here — unlimited pages, unlimited blocks, and the relation and rollup properties that make the database connections work. The free plan limits relations and rollups in ways that would break the architecture. The Business plan adds features useful for larger teams but isn’t necessary for a small agency setup.

    How long does it take to build the 6-database architecture from scratch?

    Plan for twenty to forty hours to build, configure, and populate the initial system — creating the databases, setting up the properties and relations, building the filtered views, writing the first SOPs, and establishing the daily operating rhythm. Most operators who build it solo spend two to three months in iteration before it stabilizes. Starting from a pre-built architecture configured for your specific operation compresses that significantly.

    What’s the biggest mistake people make when building a Notion agency system?

    Creating too many databases. The instinct is to give everything its own database — one per client, one per project type, one for every category of information. This creates the same problem as a disorganized file system: too many places things could live, no clear answer for where they actually belong. Start with six. Add a seventh only when there’s a category of information that genuinely doesn’t fit in any of the six and that you need to query or filter regularly.

  • Notion SOP System: How We Document Everything Across Multiple Business Lines


    Most SOP systems fail not because the SOPs are bad but because nobody can find them when they need them. They live in a Google Doc that was shared once, in a Notion page buried three levels deep, or in someone’s head because the written version was never kept current. The system exists on paper and nowhere else.

    We run SOPs for every repeatable process across multiple business lines — content publishing workflows, client onboarding steps, quality control checks, platform-specific operating rules. All of it lives in Notion, structured so that a person or an AI can find the right SOP in seconds and trust that it reflects how the work actually gets done today.

    This is how that system is built.

    What is a Notion SOP system? A Notion SOP system is a structured collection of standard operating procedures stored in Notion, organized so they are findable by context, searchable by keyword, and maintainable without a dedicated document owner. Unlike a folder of static documents, a well-built Notion SOP system is a living knowledge base that updates as the operation evolves.

    Why Notion Works Well for SOPs

    SOPs need to be three things: findable, readable, and maintainable. Notion handles all three better than most alternatives.

    Findable: Notion’s database structure lets you tag SOPs by entity, process type, and status, then filter to find exactly what you need. A filtered view showing all active SOPs for a specific business line is one click. A search across the entire SOP library is instant.

    Readable: Notion’s page format supports the structure SOPs actually need — numbered steps, toggle blocks for detail, callout boxes for warnings, tables for decision logic. The reading experience is better than a Google Doc and far better than a shared spreadsheet.

    Maintainable: Because SOPs live in a database, you can see at a glance which ones haven’t been verified recently, which are marked as drafts, and which are flagged for review. The metadata makes maintenance auditable rather than aspirational.

    The SOP Database Structure

    Every SOP in our system is a record in a single database — the Knowledge Lab. It’s not a folder of pages. It’s a database where each SOP is a row with properties that make it queryable.

    The core properties on each SOP record:

    Doc Name — the title of the SOP, written as a plain description of what the procedure covers. “Content Pipeline — Publishing Sequence” not “Publishing SOP v3.”

    Type — whether this is an SOP, an architecture decision, a reference document, or a session log. SOPs are filtered separately from other knowledge types.

    Entity — which business line or client this SOP belongs to. Allows filtering to show only the SOPs relevant to the current context.

    Layer — what kind of decision this documents. Options: architecture-decision, operational-rule, client-specific, platform-specific. Helps distinguish “how we always do this” from “how we do this for this one client.”

    Status — evergreen, active, draft, deprecated. Evergreen SOPs are procedures that don’t change often and can be trusted as written. Active SOPs are current but may be evolving. Draft SOPs are being written or tested. Deprecated SOPs are kept for reference but no longer in use.

    Last Verified — the date the SOP was last confirmed to reflect current practice. Any SOP with a Last Verified date more than 90 days ago gets flagged for review in the weekly system health check.
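
    That flag is a single query against the Knowledge Lab. A sketch using the notion-client Python SDK; the database ID is a placeholder, and the property types (select for Type and Status, date for Last Verified) are assumptions to check against your own setup.

    from datetime import date, timedelta

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    KNOWLEDGE_LAB_DB = "your-knowledge-lab-database-id"  # placeholder

    cutoff = (date.today() - timedelta(days=90)).isoformat()

    # SOPs still in use that have not been verified in the last 90 days.
    flagged = notion.databases.query(
        database_id=KNOWLEDGE_LAB_DB,
        filter={
            "and": [
                {"property": "Type", "select": {"equals": "SOP"}},
                {
                    "or": [
                        {"property": "Status", "select": {"equals": "active"}},
                        {"property": "Status", "select": {"equals": "evergreen"}},
                    ]
                },
                {"property": "Last Verified", "date": {"before": cutoff}},
            ]
        },
        sorts=[{"property": "Last Verified", "direction": "ascending"}],
    )["results"]

    print(f"{len(flagged)} SOPs due for a verification pass")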

    How SOPs Are Written

    The format matters as much as the content. An SOP that buries the key step in paragraph four will be ignored in favor of asking someone who knows. We follow a consistent structure for every SOP:

    One-line summary at the top. What this procedure is for and when to use it. Readable in five seconds.

    Trigger conditions. What situation prompts someone to follow this SOP. Specific enough that there’s no ambiguity about whether this is the right document.

    Numbered steps. One action per step. Steps that require judgment get a callout box explaining the decision logic. Steps that have common failure modes get a warning callout explaining what goes wrong and how to catch it.

    Hard rules section. Any non-negotiable constraints — things that are never done, always done, or require explicit sign-off before proceeding. These get their own section at the bottom so they’re easy to find without reading the full procedure.

    Last updated note. Who verified this and when. Simple accountability that makes the maintenance question answerable.

    The Machine-Readable Layer

    Every SOP in our system carries a JSON metadata block at the very top of the page — before any human-readable content. This block follows a consistent structure that makes the SOP readable not just by people but by Claude during a live session.

    The metadata block includes the page type, status, a two-to-three sentence summary of what the SOP covers, the entities it applies to, any dependencies on other SOPs or documents, and a resume instruction — a single sentence describing the most important thing to know before executing this procedure.

    In practice, this means Claude can fetch an SOP mid-session, read the metadata block, and understand the procedure’s constraints and intent without reading the full document. For a system running dozens of active SOPs, this makes the difference between Claude operating on institutional knowledge and Claude operating on guesswork.

    Finding the Right SOP in the Right Moment

    The best SOP system is one you actually use when you need it. That requires the right SOP to be findable in under thirty seconds — not after a search, three clicks, and a scan of an unfamiliar page structure.

    We solve this with two mechanisms. First, a master SOP index — a filtered database view showing all active and evergreen SOPs, sorted by entity and process type, with one-line summaries visible in the list view. Opening the index and scanning it takes fifteen seconds. Second, the Claude Context Index includes every SOP by title and summary, so Claude can surface the right one during a session without a manual search.

    Both mechanisms depend on the same underlying structure: consistent naming, accurate status tags, and current summaries. The index is only as good as the metadata behind it.

    Keeping SOPs Current

    The maintenance problem is real. SOPs written accurately in January are often wrong by April — not because anyone changed them, but because the operation evolved and nobody updated the documentation.

    Our approach: the weekly system health review includes a check for any SOP with a Last Verified date more than 90 days old. Those get flagged for a five-minute review — read the procedure, compare it to how the work actually gets done, update if needed, reset the Last Verified date. Most reviews result in no changes. A few result in small updates. Occasionally one reveals a significant drift that needs a full rewrite.

    The 90-day cycle keeps the system from drifting too far before the problem is caught. It also makes SOP maintenance a predictable overhead rather than an occasional emergency project.

    When a New SOP Gets Written

    Not every procedure needs an SOP. We write a new SOP when a procedure meets two criteria: it will be repeated more than three times, and getting it wrong has a real cost — either in time, quality, or client relationship.

    One-off tasks don’t get SOPs. Simple two-step procedures that any competent operator would handle correctly without documentation don’t get SOPs. The SOP library should be comprehensive but not exhaustive — a collection of genuinely useful reference documents, not a compliance exercise.

    When a new SOP is warranted, we write it immediately after the first time we execute the procedure correctly — while the steps are fresh and the edge cases are visible. SOPs written from memory weeks later are usually missing exactly the details that matter most.

    SOPs as Training Infrastructure

    A well-maintained SOP library has a secondary function beyond daily operations: it’s the training infrastructure for anyone new joining the operation, or for handing off work to an AI agent running a process for the first time.

    When a new person joins, the SOP library is the answer to “how do we do things here?” — not a shadowing exercise or an informal knowledge transfer, but a structured, searchable, current reference that covers the actual procedures. When Claude is tasked with executing a process it hasn’t run before, the SOP is what it reads first.

    This dual function is why the investment in documentation quality pays off beyond the obvious. The SOP isn’t just for today’s operation — it’s the institutional knowledge layer that makes the operation transferable, scalable, and less dependent on any one person’s memory.

    Want this built for your operation?

    We build Notion SOP systems and full Knowledge Lab architectures — structured, machine-readable, and maintained to actually stay current.

    Tygart Media runs this system across multiple business lines. We know what makes an SOP library useful versus aspirational.

    See what we build →

    Frequently Asked Questions

    How many SOPs does a small agency need?

    A small agency running five to fifteen active clients typically needs fifteen to forty SOPs covering the core operational procedures — onboarding, content production, quality control, client communication, platform-specific rules, and system maintenance. More than sixty SOPs in an operation of that size usually indicates over-documentation: procedures that don’t need to be written down are getting written down.

    What’s the difference between an SOP and a checklist in Notion?

    A checklist is a reminder of what to do. An SOP explains how to do it, why each step matters, what to do when something goes wrong, and what the non-negotiable constraints are. Checklists work well for simple procedures with no decision points. SOPs work well for procedures with judgment calls, common failure modes, or significant consequences if done incorrectly. Most operations need both.

    Should SOPs be pages or database records in Notion?

    Database records. A page is a standalone document with no queryable properties. A database record is a document with structured metadata — status, entity, type, last verified date — that makes it filterable, sortable, and auditable. The operational overhead of maintaining SOPs as database records rather than loose pages pays off quickly once you need to find all active SOPs for a specific context or identify which ones haven’t been reviewed recently.

    How do you prevent SOPs from becoming outdated?

    Build the review into a regular rhythm rather than relying on ad hoc updates. A Last Verified date property on each SOP, combined with a weekly or monthly check for records older than a set threshold, creates a systematic maintenance loop. SOPs that are never reviewed drift silently — the regular review cycle catches drift before it causes operational problems.

    Can Claude use Notion SOPs during a live session?

    Yes, with the right setup. Claude can fetch a Notion page via the Notion MCP integration and read its content mid-session. SOPs written with a consistent metadata block at the top — a structured summary, trigger conditions, and key constraints — are especially effective because Claude can orient itself quickly without reading the full document. This is what makes a Notion SOP system genuinely useful for AI-native operations rather than just human reference.

  • Notion + Claude AI: How to Use Claude as Your Notion Operating System


    Notion is where the work lives. Claude is what thinks about it. That’s the simplest way to describe the integration — not Claude as a chatbot you open in a separate tab, but Claude as an active layer that reads your Notion workspace, reasons about what’s in it, and acts on it in real time.

    Most people using both tools treat them as separate. They take notes in Notion, then copy and paste context into Claude when they need help. That works, but it’s not an integration — it’s a clipboard operation. What we run is different: a structured Notion architecture that Claude can navigate directly, combined with a metadata standard that makes every key page machine-readable across sessions.

    This is how that system actually works.

    What does it mean to use Claude as a Notion operating system? Using Claude as a Notion OS means structuring your Notion workspace so Claude can fetch, read, and act on its contents during a live session — without you manually copying context. Your Notion workspace becomes Claude’s working memory: it knows where your SOPs live, what your current priorities are, and what decisions have already been made.

    Why the Default Approach Breaks Down

    The standard way people use Claude with Notion: open Claude, describe the project, paste in relevant content, do the work, close the session. Next session, start over.

    Claude has no memory between sessions by default. Every conversation starts from zero. If your operation has any meaningful complexity — multiple clients, ongoing projects, established decisions and constraints — rebuilding that context from scratch every session is expensive. It costs time, it introduces errors when you forget to mention something relevant, and it means Claude is always operating with incomplete information.

    The fix is not to paste more context. The fix is to architect your Notion workspace so Claude can retrieve the context it needs, when it needs it, without you managing that transfer manually.

    The Metadata Standard That Makes It Work

    The foundation of the integration is a consistent metadata structure at the top of every key Notion page. We call this standard claude_delta. Every SOP, architecture decision, project brief, and client reference document in our Knowledge Lab starts with a JSON block that looks like this:

    {
      "claude_delta": {
        "page_id": "unique-page-id",
        "page_type": "sop",
        "status": "evergreen",
        "summary": "Two to three sentence plain-language description of what this page contains and when to use it.",
        "entities": ["relevant business", "relevant project", "relevant tool"],
        "dependencies": ["other-page-id-this-depends-on"],
        "resume_instruction": "The single most important thing Claude needs to know to continue work on this topic without re-reading the entire page.",
        "last_updated": "2026-04-12T00:00:00Z"
      }
    }

    The metadata block serves two purposes. First, it gives Claude a structured, consistent entry point to any page — the summary and resume instruction mean Claude can orient itself in seconds rather than reading thousands of words. Second, it makes the page indexable: when we need to find the right page for a given task, Claude can scan metadata blocks rather than full page content.

    The Claude Context Index

    The metadata standard only works if Claude knows where to start. The Claude Context Index is a master registry page in our Notion workspace — the first thing Claude fetches at the start of any session that involves the knowledge base.

    The index contains a structured list of every major knowledge page: its title, page ID, page type, status, and a one-line summary. When Claude reads the index, it knows what exists, where it is, and which pages are relevant to the current task — without having to search or guess.

    In practice, a session starts like this: “Read the Claude Context Index and then let’s work on [task].” Claude fetches the index, identifies the relevant pages for that task, fetches those pages, and begins work with full context. The context transfer that used to take ten minutes of copy-paste happens in seconds.

    What Claude Can Actually Do Inside Notion

    With the Notion MCP (Model Context Protocol) integration active, Claude can do more than read — it can write back to Notion directly during a session. In our operation, Claude routinely:

    Creates new knowledge pages — when a session produces a decision, an SOP, or a reference document worth keeping, Claude writes it to Notion with the claude_delta metadata already applied. The knowledge base grows automatically as work happens.

    Updates project status — when a content piece is published, Claude logs the publication in the Content Pipeline database. When a task is complete, Claude marks it done. The databases stay current without a separate manual logging step.

    Reads SOPs mid-session — if a session reaches a step with an established procedure, Claude fetches the relevant SOP rather than improvising. This enforces consistency across sessions and across different types of work.

    Scans the task database — at the start of a working session, Claude can read the current P1 and P2 task list and surface anything that should be addressed before the session’s primary work begins.
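
    Under the hood, the MCP server issues ordinary Notion API calls on Claude's behalf. For orientation, this is roughly what the first of those capabilities looks like as a direct API call: creating a knowledge page with the claude_delta block already in place. It is a sketch, not the MCP wire format; the database ID, property names, and metadata values are placeholders.

    import json

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    KNOWLEDGE_LAB_DB = "your-knowledge-lab-database-id"  # placeholder

    meta = {
        "claude_delta": {
            "page_type": "sop",
            "status": "draft",
            "summary": "Example procedure captured at the end of a working session.",
            "resume_instruction": "Review the draft steps before first use.",
        }
    }

    notion.pages.create(
        parent={"database_id": KNOWLEDGE_LAB_DB},
        properties={
            "Doc Name": {"title": [{"text": {"content": "Example SOP captured from a session"}}]},
            "Type": {"select": {"name": "SOP"}},
            "Status": {"select": {"name": "draft"}},
        },
        # The metadata block becomes the first block on the new page.
        children=[
            {
                "object": "block",
                "type": "code",
                "code": {
                    "language": "json",
                    "rich_text": [{"type": "text", "text": {"content": json.dumps(meta, indent=2)}}],
                },
            }
        ],
    )

    Whether Claude issues a call like this through MCP or you run it as a script, the result is the same: the new page lands in the Knowledge Lab already machine-readable.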

    The Persistent Memory Layer

    The hardest problem in running an AI-native operation is context persistence. Claude’s context window is large but finite, and it resets between sessions. For any operation with meaningful ongoing complexity, that reset is a real problem.

    Our solution is a three-layer memory architecture:

    Layer 1: Notion Knowledge Lab. Human-readable SOPs, architecture decisions, project briefs, and reference documents. Claude fetches these at session start. Persistent across all sessions indefinitely.

    Layer 2: BigQuery operations ledger. A machine-readable database of operational history — what was published, what was changed, what decisions were made, and when. Claude can query this layer for operational data that would be too verbose to store in Notion pages. Currently holds several hundred knowledge pages chunked and embedded for semantic search.

    Layer 3: Session memory summaries. At the end of a significant session, Claude writes a summary of what was decided and done to a Notion session log page. The next session can start by reading the most recent session log, picking up exactly where the previous session ended.

    Together these three layers mean Claude never truly starts from zero — it has access to the institutional knowledge of the operation, the operational history, and the most recent session context.

    Building This for Your Own Operation

    The full architecture takes time to build correctly, but the core of it — the metadata standard and the Context Index — can be implemented in a few hours and provides immediate value.

    Start with five to ten of your most important Notion pages: your key SOPs, your main project references, your client guidelines. Add a claude_delta metadata block to the top of each. Create a simple index page that lists them with their IDs and summaries. Then start your next Claude session by telling Claude to read the index first.

    The difference in session quality is immediate. Claude operates with context it would otherwise need you to provide manually, makes decisions consistent with your established constraints, and produces output that fits your actual operation rather than a generic interpretation of it.

    From there, you can layer in the Notion MCP integration for write-back capability, build out the BigQuery knowledge ledger for operational history, and develop the session logging practice for continuity. But the metadata standard and the index are where the leverage is — everything else builds on top of them.
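
    One way to keep the index from going stale is to regenerate its entries from the Knowledge Lab database rather than hand-editing them. A sketch of that query; it assumes the Doc Name, Type, and Status properties described elsewhere in this playbook, plus a "Summary" text property that is this sketch's own assumption.

    from notion_client import Client

    notion = Client(auth="NOTION_API_TOKEN")  # placeholder token
    KNOWLEDGE_LAB_DB = "your-knowledge-lab-database-id"  # placeholder

    pages = notion.databases.query(
        database_id=KNOWLEDGE_LAB_DB,
        filter={
            "or": [
                {"property": "Status", "select": {"equals": "active"}},
                {"property": "Status", "select": {"equals": "evergreen"}},
            ]
        },
    )["results"]

    # Assemble the fields an index entry needs: title, page ID, type, status, summary.
    index_entries = []
    for page in pages:
        props = page["properties"]
        title = props["Doc Name"]["title"]
        summary = props.get("Summary", {}).get("rich_text", [])  # "Summary" is an assumed property
        index_entries.append({
            "title": title[0]["plain_text"] if title else "(untitled)",
            "page_id": page["id"],
            "page_type": (props["Type"]["select"] or {}).get("name"),
            "status": (props["Status"]["select"] or {}).get("name"),
            "summary": summary[0]["plain_text"] if summary else "",
        })

    for entry in index_entries:
        print(f"{entry['title']} ({entry['page_id']}): {entry['summary']}")

    Paste or sync the resulting list into the Context Index page and Claude starts every session with a current map of the knowledge base.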

    What This Is Not

    This is not a plug-and-play integration. Notion’s native AI features and Claude are different products — Notion AI is built into the Notion interface and works on your pages directly, while Claude operates via API or the claude.ai interface with Notion access layered on through MCP. The architecture described here is a custom implementation, not a feature you turn on.

    It also requires discipline to maintain. The metadata standard only works if every important page follows it. The Context Index only works if it’s kept current. The session logs only work if they’re written consistently. The system degrades quickly if the documentation practice slips. That maintenance overhead is real — budget for it explicitly or the architecture will drift.

    Want this set up for your operation?

    We build and configure the Notion + Claude architecture — the metadata standard, the Context Index, the MCP integration, and the session logging system — as a done-for-you implementation.

    We run this system live in our own operation every day. We know what breaks without proper architecture and how to build it to last.

    See what we build →

    Frequently Asked Questions

    Does Claude have native Notion integration?

    Claude can connect to Notion through the Model Context Protocol (MCP), which allows it to read and write Notion pages and databases during a live session. This is not a built-in feature that requires no setup — it requires configuring the Notion MCP server and connecting it to your Claude environment. Once configured, Claude can fetch, create, and update Notion content directly.

    What is the difference between Notion AI and Claude in Notion?

    Notion AI is the assistant built natively into the Notion interface — it works directly on your pages for tasks like summarizing, drafting, and Q&A over your workspace. Claude operating via MCP is a separate implementation where Claude, running in its own interface, connects to your Notion workspace as an external tool. The MCP approach gives Claude more operational flexibility — it can combine Notion data with other tools, write complex logic, and operate across a full session — but requires more setup than Notion AI’s native features.

    What is the claude_delta metadata standard?

    Claude_delta is a JSON metadata block added to the top of key Notion pages that makes them machine-readable for Claude. It includes the page type, status, a plain-language summary, relevant entities, dependencies, a resume instruction for picking up work in progress, and a timestamp. The standard makes it possible for Claude to orient itself to any page quickly and consistently, without reading the full content every time.

    Can Claude write back to Notion automatically?

    Yes, with the Notion MCP integration active. Claude can create new pages, update existing records, add database entries, and modify page content during a session. This enables workflows where Claude logs its own outputs — publishing records, session summaries, decision logs — directly to Notion without a manual step.

    How do you handle Claude’s context limit with a large Notion workspace?

    The metadata standard and Context Index approach addresses this directly. Rather than loading the entire workspace into context, Claude fetches only the pages relevant to the current task. The index tells Claude what exists; the metadata tells Claude whether a page is worth fetching in full. For operational history too large for context, a separate database layer (we use BigQuery) handles storage and semantic retrieval, with Claude querying it for specific data rather than ingesting it wholesale.

  • Notion Client Portal Setup for Agencies: How We Build Ours


    Most agency client portals are either too complicated to maintain or too bare to be useful. A shared Google Drive folder isn’t a portal. A ClickUp guest view requires the client to learn ClickUp. A custom-built portal requires a developer. Notion sits in the middle — flexible enough to build something professional, simple enough that clients can actually use it without training.

    This is how we build Notion client portals for our own operation. Not a template walkthrough — a description of the actual architecture, what we include, what we leave out, and why.

    What is a Notion client portal? A Notion client portal is a shared Notion page or workspace section that gives a client controlled visibility into their project — deliverables, timelines, assets, and communication — without exposing the rest of your internal operation. It functions as a lightweight client-facing dashboard built inside your existing Notion workspace.

    What a Notion Client Portal Actually Needs to Do

    Before building anything, it helps to be clear about what the portal is for. In our operation, a client portal has three jobs:

    Reduce inbound questions. If a client can see where their project stands without emailing, they will. A well-structured portal cuts “what’s the status?” messages significantly.

    Create a delivery record. Every deliverable — article, report, strategy doc — has a logged home. When a client asks what was delivered in March, the answer is one click away.

    Protect internal operations. The portal is a window, not a door. Clients see what’s relevant to them. They don’t see your internal task database, your pricing notes, your other clients, or your operational SOPs.

    The Core Portal Structure

    Every client portal we build follows the same structural template, customized by scope. The core components are:

    Project Status Dashboard

    A simple table or board view showing the current state of all active deliverables. Columns: deliverable name, status (In Progress / Review / Delivered), due date, and a link to the asset. Clients can see at a glance what’s moving and what’s done without needing to ask.

    This view is a filtered view of our internal Content Pipeline database — the client sees only their rows, not the full database. We use Notion’s filter-by-property feature to scope the view to their entity tag. They get a live view of their work without any access to the broader pipeline.

    Deliverables Library

    A running archive of everything completed and delivered. Articles, audits, reports, strategy documents — each as a linked page or embedded file. Organized by month. This solves the “can you resend that?” problem permanently and gives clients a sense of the body of work accumulating over a retainer.

    Communication Log

    A simple chronological page where significant decisions, feedback rounds, and strategic pivots get logged. Not a chat — a record. When a client says “I thought we decided X,” the communication log is the answer. This protects both parties and reduces scope creep from memory drift.

    Reference Documents

    Brand guidelines, target keyword lists, approved personas, style notes — anything the client has provided or that governs the work. Stored here so the answer to “do we have their brand guide?” is always yes.

    Next Steps

    A short, always-current list of what happens next. Three to five items max. What we’re working on, what we need from them, and when they can expect the next delivery. Clients check this more than anything else in the portal.

    How Access and Permissions Work

    Notion’s sharing model for client portals works at the page level, not the database level. This is the key architectural decision that determines how isolated the portal actually is.

    The correct approach: build the client portal as a standalone page that is not a child of your main Command Center. Share that page with the client via email invite at the “Can view” or “Can comment” level. The portal contains only filtered views and manually duplicated content — never direct database access.

    What to avoid: sharing a database directly with a client, even with filters applied. Notion’s permissions model allows determined users to remove filters from shared database views, exposing rows you didn’t intend to share. Always use a standalone page with embedded filtered views, not a raw database share.

    The Air-Gap Principle

    We call our approach to client portals “air-gapped” — the portal is architecturally separated from the internal operation even though it draws from the same underlying data.

    In practice, this means the portal page never has a back-link to the Command Center. The filtered views are set up so the client can see their data but cannot navigate to the parent database. Any document shared in the portal is either a shared Notion page with its own permissions or an exported file — never a raw internal page with full internal linking.

    The air gap matters because Notion’s page graph is navigable. If you share a page that contains a link to an internal page the client shouldn’t see, they can follow that link if it’s not properly permissioned. Build the portal as if it’s a separate product, even if it isn’t.

    What Not to Put in a Client Portal

    Equally important as what to include: what to leave out.

    Internal task notes. Your notes about why something is late, what went wrong, or what you think about the brief belong in your internal system, not in a client-visible page.

    Pricing and contract details. These live in your Revenue Pipeline and are shared via PDF or dedicated document — not embedded in an operational portal.

    Other clients’ work. Obvious, but worth stating explicitly given how easy it is to accidentally link across projects in a shared workspace.

    Unfinished deliverables. The portal is a delivery mechanism, not a work-in-progress view. Drafts go into the portal when they’re ready for client review, not before.

    Maintaining Portals at Scale

    The main friction with Notion client portals at scale is maintenance overhead. If you’re running ten or more active clients, keeping ten portals current manually is a real time cost.

    The solution is to minimize what requires manual updating. The Project Status Dashboard and Deliverables Library should pull from your internal pipeline database via filtered views — when you update the internal record, the portal updates automatically. The only things requiring manual attention are the Communication Log and Next Steps, which genuinely need a human decision about what to write.

    In our operation, portal maintenance takes roughly five minutes per client per week — the time it takes to update Next Steps and log any significant decisions from that week’s work. Everything else is live from the internal system.

    When Notion Portals Work Well and When They Don’t

    Notion client portals work well for content agencies, SEO operations, strategy consultants, and any service business where the deliverables are primarily documents. The portal model fits naturally when what you’re delivering is readable, linkable, and accumulates over time.

    They work less well for project-heavy engagements where the client needs to interact with tasks, leave comments on specific items, or participate in the workflow. For those cases, a purpose-built client portal tool — or a dedicated shared Notion workspace rather than a view-only portal — is a better fit. Notion can support collaborative client workspaces, but it requires a different architecture than the air-gapped portal model described here.

    Want this built for your agency?

    We set up Notion client portals and full Command Center architectures for agencies — configured for your operation, not a template to customize yourself.

    Tygart Media runs this system live across multiple active clients. We know what the build process looks like and what breaks without proper architecture.

    See what we build →

    Frequently Asked Questions

    Can clients edit content in a Notion client portal?

    Yes, if you give them “Can edit” or “Can comment” permissions. For most agency relationships, “Can comment” is the right level — clients can leave feedback directly on pages without being able to accidentally delete or restructure content. “Can view” works for portals that are purely informational delivery mechanisms.

    Is it safe to share a Notion database view with a client?

    With caution. Filtered database views can have their filters removed by users with edit access. For client-facing portals, use standalone pages with embedded filtered views set to view-only, rather than sharing the database itself. This is the air-gap approach — the client sees the data but cannot access the underlying database structure.

    How do you handle multiple clients in one Notion workspace?

    Each client gets their own portal page, shared individually. Internally, all client data lives in shared databases partitioned by an entity or client tag. Filtered views in each portal show only that client’s records. Clients never see each other’s portals or data because each portal is a separately permissioned page.

    What’s the difference between a Notion client portal and a shared Notion workspace?

    A client portal is a view-only or comment-only window into your operation — the client sees deliverables and status but doesn’t work inside Notion alongside you. A shared workspace is a collaborative environment where both agency and client actively use Notion together. Portals are simpler to maintain and better for most agency relationships. Shared workspaces make sense for longer-term, higher-touch engagements where the client is an active participant in the work.

    How long does it take to set up a Notion client portal?

    A well-structured portal takes two to four hours to build from scratch for the first client. Once you have a working template, duplicating and customizing it for additional clients takes thirty to sixty minutes. The time investment is in designing the architecture correctly the first time — portals built without a clear structure tend to get abandoned within a few months.

  • How I Run 27 Client Sites from One Notion Command Center


    I run 27 client WordPress sites from a single Notion workspace. No project management software, no agency platform, no dedicated CRM. Just Notion — architected deliberately across six interconnected databases — handling task triage, content pipelines, client relationships, revenue tracking, and the knowledge infrastructure that feeds an AI-native content operation.

    This is not a productivity tutorial. This is a description of a real system, built over two years, that runs across seven distinct business entities simultaneously. If you’re an agency owner, solo operator, or content business trying to figure out how to use Notion for something more serious than a to-do list, this is what the other end of that road looks like.

    What is a Notion Command Center? A Notion Command Center is a multi-database workspace architecture that functions as a single operating system for a business or portfolio of businesses. Rather than using Notion as a note-taking app, a Command Center connects tasks, clients, content, and knowledge into a unified system with defined workflows, priority rules, and daily operating rhythms.

    Why Notion Instead of Dedicated Agency Software

    The honest answer: I tried the alternatives. ClickUp has more native project management features. Asana handles task dependencies better out of the box. Monday.com is more polished for client-facing views.

    None of them let me build exactly the system my operation requires. And at the scale I’m running — 27 client sites, seven business entities, a live AI publishing pipeline — the ability to customize the architecture matters more than any individual feature.

    Notion also has a meaningful advantage that most people underestimate: it integrates with Claude natively. My entire operation runs on Claude as the AI layer, and a Notion workspace structured correctly becomes something Claude can read, reason about, and act on. That combination — Notion as the OS, Claude as the intelligence — is what makes this a genuinely AI-native operation rather than just an AI-assisted one.

    The 6-Database Architecture

    The Command Center runs on six core databases. Everything else in the workspace is either a view of these databases, a child page underneath them, or a standalone reference document. The six databases are:

    1. Master Actions

    Every task across all seven entities lives here. Priority levels run P1 (revenue or reputation at risk today) through P4 (delegate or kill). Each task carries an Entity tag, a Status, a Due Date, and a linked record in whichever other database it belongs to — a client, a content piece, a deal.

    The daily operating rule: never more than five tasks marked “Next Up” across the entire workspace at once. If your Next Up list has eight items, something is mislabeled. P1 means that if the thing doesn’t get done, real consequences follow today.
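    If it helps to see the shape of a record, here is a minimal sketch in Python of what a Master Actions item and the five-task check look like. The field names are illustrative shorthand for the properties described above, not the actual Notion property names.

    # Illustrative Master Actions record and the "five Next Up" rule.
    # Field names are assumptions, not the live Notion schema.
    from dataclasses import dataclass

    @dataclass
    class ActionItem:
        title: str
        entity: str        # one of the seven entity tags
        priority: str      # "P1" (revenue/reputation at risk today) through "P4" (delegate or kill)
        status: str        # e.g. "Inbox", "Next Up", "In Progress", "Done"
        due_date: str      # ISO date
        linked_record: str | None = None  # the client, content piece, or deal this task belongs to

    def check_next_up(tasks: list[ActionItem], limit: int = 5) -> list[ActionItem]:
        """Return the Next Up queue and warn when the workspace-wide limit is exceeded."""
        next_up = [t for t in tasks if t.status == "Next Up"]
        if len(next_up) > limit:
            print(f"{len(next_up)} tasks marked Next Up — something is mislabeled (limit is {limit}).")
        return next_up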

    2. Content Pipeline

    Every article across all 27 client sites flows through this database before it hits WordPress. Status stages run from Brief → Draft → Optimized → Scheduled → Published. The database links to the client entity, carries the target keyword, the target site URL, word count, and a publication date.

    Nothing publishes without a Notion record. This is a hard rule established after the alternative — articles written in sessions and pushed directly — created audit gaps that took hours to resolve. Notion first, WordPress second.

    3. Revenue Pipeline

    Client deals, proposals, and retainer renewals. Stage-based (Lead → Qualified → Proposal Sent → Active → Renewal). Links to the Master CRM for contact records. The weekly review checks whether any deal has sat in the same stage for more than seven days without activity — that’s a warning sign that gets flagged.

    4. Master CRM

    Every contact across all seven entities. Clients, prospects, golf league members, partners, vendors. Tagged by entity, relationship type, and last contact date. The weekly review catches anyone who should have heard from me and didn’t.

    5. Knowledge Lab

    SOPs, architecture decisions, session logs, and reference documents. This is where the institutional knowledge lives — the things that would take hours to reconstruct if I had to start from scratch. The Knowledge Lab uses a metadata standard (I call it claude_delta) that makes every page machine-readable, so Claude can fetch and reason about the content in a live session without losing context.

    6. William’s HQ

    The daily dashboard. A filtered view of P1 and P2 tasks due today or overdue, the content queue for the next 48 hours, and the inbox triage. This is the page that opens first every morning. Everything else in the system is accessed from here.

    The Seven Entity Structure

    The system manages seven distinct business entities, each with its own Focus Room — a sub-page containing that entity’s active projects, open tasks filtered by entity tag, and key reference documents. The entities are:

    • The parent agency — managing all client sites and retainer relationships
    • Personal brand — direct services, thought leadership, and new business
    • Client A — content operation for a contractor in a regional market
    • Client B — content operation for a service business in a metro market
    • Industry network — B2B community and event operation
    • Content property — topical authority site in a specific vertical
    • Personal — finances, health commitments, personal projects

    The entity structure means a task logged under a regional client’s content operation never bleeds into the parent agency’s content queue. The databases are shared, but the entity tag acts as a partition. This matters operationally when you’re switching contexts fifteen times a day — the system tells you where you are and what belongs there.

    The Daily Operating Rhythm

    The Command Center only works if you use it on a rhythm. Mine runs on three loops:

    Morning Triage (10–15 minutes)

    Open William’s HQ. Zero the inbox — every untagged item gets a priority, a status, and an entity. Read the P1 and P2 list. Mentally commit to the top three. Check the content queue for anything publishing in the next 48 hours that isn’t scheduled. That’s a P1 fix before anything else happens.

    End-of-Day Close (5 minutes)

    Mark done tasks complete. Push anything untouched but intended — update the due date or reprioritize down. Check the content queue for tomorrow’s publications. If anything new was created during the day — a contact, a content piece, a deal — verify it’s logged in the right database with the right entity tag.

    Weekly Review (30 minutes, Sunday evening)

    Revenue: any deal stuck in the same stage as last week? Content: next week’s queue fully populated? Tasks: archive all Done tasks older than 14 days. Relationships: anyone who should have heard from me and didn’t? System health: any automation that failed silently?

    The weekly review is the repair mechanism. It catches the things the daily rhythm misses and resets the system before the next week compounds the drift.

    How Claude Plugs Into This

    The Knowledge Lab’s claude_delta metadata standard is what makes the Notion–Claude integration functional rather than theoretical. Every page in the Knowledge Lab carries a JSON metadata block at the top that tells Claude the page type, status, summary, key entities, and a resume instruction for picking up work in progress.

    In practice, this means I can start a session by telling Claude to read a specific Knowledge Lab page, and Claude has enough structured context to continue from exactly where the last session ended — without me re-explaining the project, the client, the constraints, or the decisions already made. The Notion workspace functions as persistent memory across Claude sessions.
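    For illustration, here is roughly what one of those metadata blocks looks like, sketched in Python and serialized to JSON. The field names are my shorthand for the elements described above — page type, status, summary, key entities, resume instruction — not the exact claude_delta schema.

    # Illustrative claude_delta-style metadata block; field names are assumptions.
    import json

    page_metadata = {
        "page_type": "session_log",
        "status": "in_progress",
        "summary": "Content pipeline rework for a regional client; briefs 4-9 drafted, 10-12 pending.",
        "key_entities": ["parent agency", "regional client", "content pipeline"],
        "resume_instruction": "Start from brief 10; keyword list and outline format are in the section below.",
    }

    print(json.dumps(page_metadata, indent=2))  # this JSON block sits at the top of the Notion page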

    This is the part of the architecture that most people haven’t built yet. Notion as a note-taking app is one thing. Notion as a structured knowledge layer that an AI can navigate and act on is a meaningfully different proposition — and it’s the direction serious operators are moving.
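    As a concrete sketch of what “navigable by an AI” means in practice, here is a minimal Python example that queries a Knowledge Lab database through the public Notion API. The token, database ID, and the “Status” select property are placeholders — adapt them to your own workspace.

    # Sketch: querying a Knowledge Lab database via the public Notion API so an
    # external AI layer can read it. Credentials and property names are placeholders.
    import os
    import requests

    NOTION_TOKEN = os.environ["NOTION_TOKEN"]
    DATABASE_ID = os.environ["KNOWLEDGE_LAB_DB_ID"]

    response = requests.post(
        f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
        headers={
            "Authorization": f"Bearer {NOTION_TOKEN}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
        json={"filter": {"property": "Status", "select": {"equals": "Active"}}},
        timeout=30,
    )
    response.raise_for_status()

    for page in response.json()["results"]:
        print(page["id"], page["url"])  # page content is fetched separately via the blocks endpoint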

    What This Architecture Costs to Build

    Honest answer: the architecture itself took about three months of active iteration to stabilize. The first version had too many databases, unclear relationships between them, and no real operating rhythm to enforce the discipline. The current version is the result of tearing down and rebuilding twice.

    The tooling cost is low. Notion’s Plus plan at $10/month per member handles everything described here. The BigQuery knowledge ledger that backs the AI memory layer runs on Google Cloud at effectively zero cost at this scale. Claude API usage for content operations runs roughly $50–150/month depending on session volume.

    What actually costs something is the setup time and the learning curve of building databases that relate to each other correctly. Most Notion setups fail not because the tool is limited but because the architecture wasn’t designed before the databases were created.

    Whether This Is Right for Your Agency

    The Command Center architecture works well for solo operators and small agencies managing multiple clients or business lines simultaneously. It works especially well when you’re running an AI-native content operation and need Notion to function as more than task management.

    It’s not the right choice if you need strong native time-tracking, Gantt charts, or client-facing portals that look polished without customization. Those cases have better-suited tools.

    But if you’re running a content agency, a multi-client SEO operation, or any business where the work is primarily knowledge work — briefs, articles, strategies, SOPs, client communications — and you want one system that sees all of it, the 6-database Command Center architecture is worth the build time.

    Want this built for your operation?

    We set up Notion Command Centers for agencies and operators — the full architecture, configured and documented, not a template to figure out yourself.

    Tygart Media has built and runs this system live across 27 client sites. We know what the setup process actually looks like.

    See what we build →

    Frequently Asked Questions

    How many databases does a Notion Command Center need?

    A functional Command Center for an agency or multi-client operation typically needs six core databases: a task database, a content pipeline, a revenue pipeline, a CRM, a knowledge base, and a daily dashboard. More than eight databases usually indicates an architecture problem — complexity that should be handled with views and filters, not additional databases.

    Can Notion handle 27 client sites without getting slow?

    Yes, with proper architecture. The key is using filtered views rather than separate databases for each client, and keeping database page counts manageable by archiving completed records regularly. Notion’s performance degrades when a single database exceeds a few thousand active records — archive aggressively and it stays fast.

    How does Notion integrate with Claude AI?

    Notion and Claude integrate through structured page formatting and the Notion API. By standardizing metadata at the top of key pages — page type, status, summary, key entities — Claude can fetch and interpret Notion content in a live session. More advanced setups use the Notion API to read and write records programmatically during Claude sessions, effectively making Notion the persistent memory layer for AI operations.

    What’s the difference between a Notion Command Center and a regular Notion workspace?

    A regular Notion workspace is typically organized around document types — pages, notes, tasks — without enforced relationships between them. A Command Center is organized around business operations — entities, pipelines, and workflows — with databases that relate to each other and a defined operating rhythm that governs how the system gets used each day.

    How long does it take to set up a Notion Command Center?

    Building the architecture from scratch takes 20–40 hours of focused setup time, including database design, relationship configuration, view creation, and SOP documentation. Most operators who attempt it solo take 2–3 months of iteration before the system stabilizes. Working from an existing architecture and having it configured for your specific operation compresses that significantly.

    Is Notion good for content agencies specifically?

    Notion is well-suited for content agencies because the core work — briefs, drafts, SOPs, client communication, publishing schedules — is document-centric. The Content Pipeline database, linked to a CRM and task system, gives visibility into every piece of content across every client at once, which is difficult to replicate in project management tools not built for document-heavy workflows.

  • The Distillery: Hand-Crafted Batches of Distilled Knowledge, Available as API Feeds


    The Distillery — Brew № — · Distillery

    Most content on the internet is noise. It exists to rank, to fill space, to signal presence. It is not dense enough to be useful to the people who actually need to know the thing it claims to cover. And it is certainly not dense enough to be valuable as a feed that an AI system pulls from to answer real questions.

    The Distillery is different. It is a named section of Tygart Media where we produce small batches of genuinely high-density knowledge on specific topics — researched from real search demand data, written to a standard where every sentence earns its place, and published in structured form that both humans and AI systems can use.

    Each batch is available as a category API feed. Subscribers get authenticated access to the full batch as structured JSON — updated as new knowledge is added, versioned so auditors and AI systems can cite the exact vintage they’re drawing from.

    What a Batch Is

    A batch is a curated body of knowledge on a specific topic, built from three ingredients: real demand data (what people are actually searching for and what advertisers are paying to reach), primary research (direct engagement with the subject matter, not summarizing what others have written), and editorial discipline (the $5 filter — would someone pay $5 a month to pipe this feed into their AI? If not, it doesn’t ship).

    Each batch has a name, a number, and a version. Batch 001 is the Restoration Carbon Protocol — the only published Scope 3 emissions calculation standard for property restoration work. Batch 005 is the Restoration Industry Knowledge Base — a structured body of operational knowledge for restoration contractors who want to build AI-native systems without starting from scratch.

    Batches are not blog posts. They are not opinion columns. They are not rephrased Wikipedia entries. They are the kind of specific, accurate, hard-earned knowledge that takes real work to produce and that AI systems actively need but largely cannot find in their training data.

    How the API Works

    Every Distillery batch is accessible through the Tygart Content Network API. Subscribers receive an API key at signup. The key unlocks authenticated access to the batch endpoints they’ve subscribed to. Each endpoint returns structured JSON — articles by category, filterable by date and topic, with consistent metadata that AI agents can process directly.

    The response format is designed for machine consumption: clean plain text content, explicit categorization, publication timestamps for recency evaluation, and topic tags that allow agents to assess relevance before processing. The same feed that powers a human reader’s understanding of a topic powers an AI agent’s ability to answer questions about it accurately.

    Rate limits are generous at the $5 community tier — 100 requests per day, sufficient for an AI assistant pulling daily updates. Professional tiers at $50/month offer higher limits, webhook push when new content publishes, and bulk historical pulls for training and fine-tuning use cases.
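    For orientation, a subscriber pull looks roughly like the Python sketch below. The endpoint path, query parameters, and response fields are illustrative placeholders, not the published API schema.

    # Illustrative subscriber pull from a Distillery batch feed.
    # Endpoint, parameters, and response fields are assumptions for the sketch.
    import requests

    API_KEY = "your-subscriber-key"
    FEED_URL = "https://example.com/api/feeds/restoration-carbon-protocol"  # placeholder URL

    resp = requests.get(
        FEED_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"since": "2025-01-01", "topic": "scope-3"},
        timeout=30,
    )
    resp.raise_for_status()

    for article in resp.json().get("articles", []):
        # consistent metadata lets an agent judge relevance before processing the full text
        print(article["published_at"], article["category"], article["title"])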

    Why Information Density Is the Moat

    The content that survives in an AI-mediated information environment is the content that contains something worth extracting. Not something that sounds authoritative — something that actually is. The difference is information density: the ratio of useful, specific, actionable knowledge to total words published.

    Every Distillery batch is held to the same standard: if an AI system pulled from this feed to answer a question in this domain, would the answer be more accurate and more specific than if the AI had relied on its training data alone? If yes, the batch has value. If no, we haven’t done enough work yet.

    This standard is harder to meet than it sounds. It eliminates most of what gets published under the banner of “thought leadership” and “content marketing.” It requires knowing the subject well enough to say things that couldn’t be said by someone who spent an afternoon with a search engine. It is the reason The Distillery produces small batches rather than high volumes.

    Current Batches

    Batch 001 — Restoration Carbon Protocol (RCP)
    The only published Scope 3 ESG emissions calculation standard for property restoration work. Covers all five core restoration job types with actual emission factor tables, complete worked examples, and the 12-point data capture standard. Designed for restoration contractors serving commercial clients with 2027 SB 253 Scope 3 reporting obligations. 23 articles. Updated monthly.

    Batch 002 — The Knowledge Economy API Layer
    The conceptual and practical framework for turning human expertise into machine-consumable, API-distributable knowledge products. For anyone with domain expertise considering how to package and monetize it in an AI-native information environment. 8 articles. Updated as the landscape develops.

    Batch 003 — Mason County Minute
    Current, structured, consistently maintained coverage of Mason County, Washington — local government, business, community, real estate, and public affairs. The only machine-readable hyperlocal intelligence feed for this geography. Updated weekly.

    Batch 004 — Belfair Bugle
    Hyperlocal coverage of Belfair, WA and the North Mason community. Current events, local government, community intelligence. The only structured feed for this geography. Updated weekly.

    Batch 005 — Restoration Industry Knowledge Base (coming)
    Operational knowledge infrastructure for restoration contractors — the 50 knowledge nodes every restoration company should have documented, the AI-native knowledge architecture that replaces manual training, and the integration patterns connecting job management systems to knowledge delivery. In development.

    Batch 006 — AI Agency Playbook (coming)
    The operating methodology behind Tygart Media — how a single operator runs 27+ client sites, deploys AI-native content at scale, and builds knowledge infrastructure rather than content volume. For agency owners and solo operators building AI-native practices. In development.

    Who This Is For

    The Distillery API is for three kinds of subscribers:

    Developers building AI tools who need reliable, current, domain-specific knowledge feeds to ground their applications in accurate information. The Restoration Carbon Protocol feed, for example, gives a developer building an AI assistant accurate restoration-specific ESG data without having to research and curate it themselves.

    Businesses who want AI systems that actually know their industry. A restoration company whose AI assistant draws from the RCP feed knows more about Scope 3 emissions calculation for their job types than any general-purpose AI. A commercial property manager whose AI assistant pulls from the RCP feed can answer contractor ESG questions accurately instead of hallucinating plausible-sounding nonsense.

    Content teams and agencies who want structured, current, reliable source material for their own content production — not to copy, but to ensure accuracy and specificity in their coverage of these domains.

    The Standard We Hold Ourselves To

    Every article in every batch passes one test before it ships: would someone pay $5 a month to pipe this feed into their AI? Not to read it themselves — to have their AI draw from it continuously as a trusted source in this domain.

    If the answer is no — if the content is too generic, too thin, or too derivative to justify a subscription — it doesn’t ship. The batch waits until the knowledge is actually there.

    This makes The Distillery slow. It makes it small. And it makes it worth subscribing to.

  • RCP Proxy Estimation Guide: How to Calculate When Primary Data Is Missing


    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    The RCP requires 12 data points per job. In practice, some of those data points will be unavailable — particularly for historical jobs being calculated retrospectively, or for field situations where documentation wasn’t captured as completely as the standard requires. The proxy estimation methodology provides documented substitution methods that produce defensible, auditor-acceptable estimates when primary data is missing.

    Key principle: A documented estimate with a stated assumption is always preferable to a blank field in an RCP report. ESG auditors understand that emissions calculation involves uncertainty — what they require is transparency about where estimation was used and what the basis of that estimation was. Undocumented guesses are not acceptable. Documented proxies are.

    Data Quality Tiers

    The RCP uses three data quality tiers, consistent with GHG Protocol Scope 3 guidance:

    Tier 1 — Primary measured data. Actual measurements from job records: GPS mileage, disposal facility receipts with weights, materials purchase orders by job. Audit acceptability: highest — preferred for all data points.
    Tier 2 — Primary estimated data. Calculated from documented job parameters using RCP proxy methods: affected area × consumption rate, crew size × duration × unit rate. Audit acceptability: acceptable — must document calculation method and basis.
    Tier 3 — Spend-based / invoice-based proxy. Dollar amount × industry average emission factor — the fallback of last resort. Audit acceptability: lowest — use only when no job-specific data is available; flag prominently in data quality notes.

    Proxy Methods by Data Point

    Data Point 1 — Vehicle Mileage (Transportation)

    Primary source: GPS fleet tracking data, dispatch records, driver logs.

    Proxy method: Use Google Maps or equivalent mapping tool to calculate round-trip distance from your facility (or prior job address for multi-stop days) to the job site. Multiply by the number of crew trips documented in time records or invoices. This is a Tier 2 estimate.

    Default proxy (Tier 3, last resort): Industry average mobilization distance for restoration contractors is 22 miles one-way (44 miles round trip). Apply this default only when no address or routing information is available. Note as Tier 3 estimate in data quality section.
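    A minimal Python sketch of the mileage proxy, assuming the light-duty gasoline work van factor from the RCP emission factor reference table — swap in whichever vehicle factor matches the job:

    # Tier 2 mileage proxy: mapped round-trip distance × documented crew trips × vehicle factor.
    # 0.503 kg CO2e/mile is the light-duty gasoline work van value; adjust per vehicle.

    def mileage_emissions_kg(round_trip_miles: float, crew_trips: int,
                             kg_per_mile: float = 0.503) -> float:
        return round_trip_miles * crew_trips * kg_per_mile

    # Tier 2: 44-mile round trip from a mapping tool, 4 crew trips from job invoices
    print(mileage_emissions_kg(44, 4))   # ≈ 88.5 kg CO2e

    # Tier 3 fallback: 44-mile default round trip when no routing data exists
    print(mileage_emissions_kg(44, 1))   # flag as a Tier 3 estimate in the data quality notes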

    Data Point 2 — Waste Transport Mileage

    Primary source: Waste manifests and hauler receipts (these typically include origin and destination).

    Proxy method: Use the distance from the job site to the nearest licensed disposal facility of the appropriate type (standard C&D landfill, licensed ACM facility, medical waste facility). Use online waste facility directories (EPA RCRA Info for hazmat, state environmental agency databases for C&D landfills) to identify the nearest appropriate facility.

    Default proxies by facility type (Tier 3): Standard C&D landfill: 18 miles. Licensed ACM facility: 60 miles. Licensed PCB incineration: 150 miles. Medical waste facility: 55 miles.

    Data Point 3 — Equipment Power Source

    Primary source: Job documentation noting whether equipment ran on building power or contractor generator; generator fuel logs.

    Proxy method: Default assumption is building electrical supply unless your company policy or the job type (remote location, building power unavailable) indicates otherwise. Note the assumption explicitly. If generator use is suspected but not documented, use the following generator fuel proxy: standard drying equipment setup (3 dehumidifiers + 6 air movers) consuming approximately 2.5 gallons of diesel per 8-hour shift × number of drying days × 10.21 kg CO2e per gallon diesel.
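    The generator fuel proxy reduces to simple arithmetic; a short Python sketch using the figures above:

    # Generator fuel proxy: standard drying setup (3 dehumidifiers + 6 air movers)
    # at ~2.5 gallons of diesel per 8-hour shift.

    DIESEL_KG_CO2E_PER_GALLON = 10.21

    def generator_emissions_kg(drying_days: int, gallons_per_shift: float = 2.5) -> float:
        return drying_days * gallons_per_shift * DIESEL_KG_CO2E_PER_GALLON

    print(generator_emissions_kg(4))   # 4 drying days ≈ 102 kg CO2e — note the assumption explicitly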

    Data Points 4–5 — Chemical Treatments and PPE Consumption

    Application rate proxies by job type and surface type:

    Job Type / Surface Antimicrobial Rate Tyvek Suits per Tech per Day Glove Pairs per Tech per Day N95/P100 per Tech per Day
    Cat 1 water — porous surfaces 0.008 L/sq ft 0.5 2 0.5
    Cat 2 water — porous surfaces 0.015 L/sq ft 1.0 3 1.0
    Cat 3 water — porous surfaces 0.025 L/sq ft (×2 applications) 2.0 5 2.0
    Mold Condition 3 — first application 0.020 L/sq ft 2.0 4 1.5
    Mold Condition 3 — second application 0.015 L/sq ft 2.0 4 1.5
    Fire — smoke cleaning (chemical sponge + cleaner) 1 sponge per 50 sq ft + 0.010 L/sq ft cleaner 1.5 4 1.5
    Hazmat abatement (Level C, standard exit protocol) N/A (wetting agent: 0.003 L/sq ft ACM) 3.0 (full replacement each exit) 6 2 pairs OV/P100
    Biohazard Level C 0.025 L/sq ft × 2 applications 3.0 (full replacement each exit) 6 2 pairs OV/P100
    Biohazard Level B (decomposition) 0.025 L/sq ft × 2 applications 3.0 Level B full-suit (replace each exit) 6 Supplied air — 0 disposable

    Data Point 6 — Containment Materials

    Proxy method: Standard containment for a single affected room (standard ceiling height 8–10 ft): perimeter of affected area (linear feet) × ceiling height (feet) × 1.2 (overlap factor) = square feet of poly sheeting; divide by 10.76 to convert to m² before applying the per-m² emission factor. For compartmentalized commercial spaces, add 20 m² per additional doorway or penetration point.

    Zipper doors: 1 per entry/exit point, typically 2 per contained area (entry + equipment pass-through).
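    Putting the containment proxy together, with the square-feet-to-m² conversion made explicit — a short Python sketch using the 6-mil sheeting and zipper door factors from the reference table:

    # Containment proxy: perimeter × ceiling height × 1.2 overlap, converted to m²,
    # times the 6-mil poly factor, plus zipper doors.

    SQFT_PER_M2 = 10.76
    POLY_6MIL_KG_PER_M2 = 0.55   # RCP emission factor reference table
    ZIPPER_DOOR_KG = 1.8

    def containment_emissions_kg(perimeter_ft: float, ceiling_ft: float = 9.0,
                                 doorways_beyond_first: int = 0, zipper_doors: int = 2) -> float:
        sheeting_m2 = (perimeter_ft * ceiling_ft * 1.2) / SQFT_PER_M2
        sheeting_m2 += 20 * doorways_beyond_first   # commercial penetration allowance
        return sheeting_m2 * POLY_6MIL_KG_PER_M2 + zipper_doors * ZIPPER_DOOR_KG

    print(round(containment_emissions_kg(perimeter_ft=60), 1))   # single room, ≈ 36.7 kg CO2e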

    Data Points 7–8 — Waste Volume and Disposal

    Volume proxy: Use weight estimation proxies from the RCP Emission Factor Reference Table (drywall at 2.5 lbs/sq ft, carpet at 3.0 lbs/sq ft, etc.) applied to the demolished area documented in job scope records.

    Disposal method proxy: If disposal facility type is unknown, apply default based on material type: standard C&D for non-contaminated demolition debris, regulated C&D or hazmat for contaminated materials (see Table 3 in the Emission Factor Reference).
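    A short Python sketch of the tonnage proxy, using the drywall weight proxy and a Cat 2 disposal factor as examples — substitute the material and disposal classification that match the job:

    # Waste tonnage proxy: demolished area × weight-per-sq-ft proxy, converted to short tons,
    # then multiplied by the disposal factor for the waste classification.

    WEIGHT_PROXIES_LBS_PER_SQFT = {"drywall_half_inch": 2.5, "carpet_pad_residential": 3.0}
    LBS_PER_SHORT_TON = 2000

    def disposal_emissions_tco2e(area_sqft: float, material: str, tco2e_per_ton: float) -> float:
        tons = area_sqft * WEIGHT_PROXIES_LBS_PER_SQFT[material] / LBS_PER_SHORT_TON
        return tons * tco2e_per_ton

    # 400 sq ft of Cat 2-contaminated drywall to standard landfill (0.18 tCO2e/ton)
    print(round(disposal_emissions_tco2e(400, "drywall_half_inch", 0.18), 3))   # ≈ 0.09 tCO2e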

    Data Points 9–10 — Demolished and Installed Materials

    Proxy method: Calculate from demolition scope records (affected area by room, material type documented in scope of work or Xactimate/Symbility estimate). Weight estimation proxies apply as above. For installed materials in reconstruction phase, use square footage from scope-of-work documentation and apply standard weight proxies.

    Documenting Proxy Use in Your RCP Report

    Every proxy estimate must be documented in the data quality section of the per-job carbon report. The format for documenting a proxy is: [Data point name]: [Tier 2 or 3 estimate]. [Brief description of proxy method]. [Source of proxy rate or assumption].

    Example: “Vehicle mileage: Tier 2 estimate. Round-trip distance calculated using Google Maps from company facility to job site address (44 miles RT × 4 crew trips). Crew trip count from job invoices. Source: RCP proxy method P-4-1.”

    Example: “PPE consumption: Tier 2 estimate. Cat 3 water damage standard consumption rate applied (2.0 Tyvek/tech/day, 5 glove pairs/tech/day) per RCP Table A-5. Actual PPE not tracked separately on this job.”

    Can a per-job carbon report with all Tier 2 estimates be used in GRESB reporting?

    Yes. GRESB accepts primary data at various quality levels, including documented estimates. A Tier 2 estimate is primary data (not spend-based estimation) and is acceptable. The data quality notation in the RCP report demonstrates that you have applied documented methodology rather than guessing, which is what auditors need to see.

    What is the margin of error typical for Tier 2 proxy estimates?

    Typical uncertainty range for Tier 2 RCP estimates is ±20–35% relative to primary measured data. This compares favorably to spend-based estimation (Tier 3), which typically has ±50–100% uncertainty for restoration work due to the high variability of job type, scope, and emission profile at equivalent invoice amounts.

    Should you disclose the uncertainty range in the per-job carbon report?

    The RCP does not require quantified uncertainty ranges in the per-job report, but noting that Tier 2 estimates were used in the data quality section effectively communicates to auditors that the figure carries inherent estimation uncertainty. For clients whose ESG consultants or auditors specifically request uncertainty ranges, use the guidance values above (±20–35% for Tier 2).


  • RCP Emission Factor Reference Table: All Values in One Place


    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    This reference table consolidates all emission factors used in Restoration Carbon Protocol calculations. It is the lookup document you use when completing a per-job carbon report — every factor needed for Categories 1, 4, 5, and 12 across all five job types is in this table, with source citations for audit purposes.

    Version: RCP v1.0 | Factor vintage: EPA 2024, DEFRA 2024, EPA WARM v16 | Units: All values in kg CO2e unless noted as tCO2e

    Table 1: Category 4 — Vehicle Transportation

    Vehicle Type Fuel kg CO2e per mile Source
    Passenger car Gasoline 0.355 EPA Table 2, Mobile Combustion 2024
    Light-duty truck / work van (under 8,500 lbs GVWR) Gasoline 0.503 EPA Table 2, Mobile Combustion 2024
    Light-duty truck / cargo van Diesel 0.523 EPA Table 2, Mobile Combustion 2024
    Medium-duty truck / equipment trailer (8,500–26,000 lbs GVWR) Diesel 1.084 EPA Table 2, Mobile Combustion 2024
    Heavy-duty truck — unloaded (26,000+ lbs GVWR) Diesel 1.612 EPA Table 2, Mobile Combustion 2024
    Heavy-duty truck — loaded (waste hauling, C&D) Diesel 2.25 EPA Table 2 + load factor adjustment
    Licensed hazmat waste hauler (ACM, lead, general hazmat) Diesel 3.20 EPA Table 2 + hazmat vehicle premium
    Licensed hazmat hauler (PCB, high-hazard specialty) Diesel 3.80 EPA Table 2 + specialty vehicle premium
    Medical waste hauler (biohazard) Diesel 2.80 EPA Table 2 + medical waste vehicle
    Pack-out truck (contents restoration) — loaded Diesel 2.25 EPA Table 2 + load factor
    Pack-out truck — empty (return trip) Diesel 1.612 EPA Table 2 — unloaded heavy

    Table 2: Category 1 — Materials

    Chemical Treatments

    Material Unit kg CO2e per unit Source
    Quaternary ammonium antimicrobial / biocide (liquid) Liter 2.8 EPA EEIO — Chemical manufacturing sector
    Hydrogen peroxide-based antimicrobial/biocide Liter 1.9 EPA EEIO — Chemical manufacturing sector
    Borax-based mold treatment kg 1.1 EPA EEIO — Inorganic chemical manufacturing
    Hospital-grade disinfectant (EPA-registered) Liter 2.8 EPA EEIO — Chemical manufacturing sector
    Enzyme biological digester / deodorizer Liter 1.6 EPA EEIO — Specialty chemical manufacturing
    Encapsulant / smoke-blocking primer Gallon 4.2 EPA EEIO — Paint and coatings manufacturing
    Thermal fogging agent Liter 2.1 EPA EEIO — Chemical manufacturing sector
    Desiccant drying agent (silica gel) kg 1.4 EPA EEIO — Chemical manufacturing sector
    Wetting agent / amended water (surfactant for ACM) Liter 1.4 EPA EEIO — Chemical manufacturing sector
    Dry ice (CO2 pellets for blast cleaning) kg 0.85 EPA EEIO — Industrial gas manufacturing

    Personal Protective Equipment

    PPE Item Unit kg CO2e per unit Source
    Disposable Tyvek suit (Level C) Each 1.2 EPA EEIO — Apparel manufacturing
    Level B full encapsulating suit Each 3.0 EPA EEIO — Apparel/specialty manufacturing
    Level C PPE full kit (Tyvek + gloves + goggles + boot covers) Kit 1.8 Composite of individual items
    Level B PPE full kit (encapsulating suit + supplied air + gloves) Kit 4.2 Composite of individual items
    Nitrile gloves (pair) Pair 0.3 EPA EEIO — Rubber and plastics manufacturing
    N95 respirator (disposable) Each 0.4 EPA EEIO — Medical equipment manufacturing
    Half-face respirator, P100 cartridges (pair) Pair 0.8 EPA EEIO — Medical equipment manufacturing
    Full-face respirator cartridges (pair) Pair 1.2 EPA EEIO — Medical equipment manufacturing
    Boot covers (pair) Pair 0.15 EPA EEIO — Rubber and plastics

    Containment and Filtration

    Material Unit kg CO2e per unit Source
    6-mil polyethylene sheeting m² 0.55 EPA EEIO — Plastics product manufacturing
    4-mil polyethylene sheeting m² 0.37 EPA EEIO — Plastics product manufacturing
    Double-layer 6-mil containment (hazmat/biohazard) m² 1.10 2× single-layer factor
    Zipper door — disposable Each 1.8 EPA EEIO — Plastics/hardware
    Zipper door — reusable (amortized over 20 uses) Use 0.09 1.8 ÷ 20 uses
    HEPA filter — air scrubber (standard) Each 3.2 EPA EEIO — Industrial machinery manufacturing
    HEPA vacuum bag (commercial grade) Each 0.4 EPA EEIO — Paper/plastics manufacturing
    Biohazard bag — 33-gallon red (medical waste) Each 0.65 EPA EEIO — Medical plastics manufacturing
    ACM disposal bag — 6-mil labeled (33-gallon) Each 0.55 EPA EEIO — Plastics product manufacturing
    Sharps disposal container (1-gallon) Each 0.35 EPA EEIO — Plastics/medical equipment
    Glove bag (pipe insulation removal) Each 0.85 EPA EEIO — Plastics product manufacturing

    Table 3: Category 5 — Waste Disposal

    Waste Type Disposal Method tCO2e per ton Source
    Standard C&D debris (non-hazardous mixed) Landfill 0.16 EPA WARM v16
    Cat 2 water-contaminated porous materials Standard landfill 0.18 EPA WARM + contamination premium
    Cat 3 sewage-contaminated materials Regulated C&D landfill 0.22 EPA WARM + regulated disposal
    Smoke-contaminated C&D debris (standard) Standard landfill 0.16 EPA WARM v16
    Smoke-contaminated C&D (regulated facility) Licensed C&D landfill 0.20 EPA WARM + transport premium
    Mold-contaminated porous materials Standard landfill (most jurisdictions) 0.18 EPA WARM + contamination premium
    Friable ACM (pipe insulation, spray fireproofing) Licensed hazmat landfill 0.42 EPA WARM + licensed facility + transport
    Non-friable ACM (floor tiles, roofing, joint compound) Licensed C&D with ACM cell 0.28 EPA WARM + regulated C&D transport
    Lead paint debris (TCLP-classified hazardous) Licensed hazmat landfill 0.38 EPA WARM + hazmat transport
    PCB-containing materials ≥50 ppm Licensed PCB incineration 1.85 EPA hazardous waste incineration factors
    PCB-containing materials <50 ppm Licensed landfill 0.22 EPA WARM + transport premium
    Mercury-containing lamps/thermostats Mercury recycler 0.15 EPA WARM — recycling credit offset
    Regulated medical/biohazard waste (standard) Autoclave + licensed landfill 0.55 EPA medical waste treatment factors
    High-pathogen biohazard waste High-temperature incineration 0.85 EPA hazardous waste incineration factors
    Sharps waste Sharps autoclave or incineration 0.65 EPA medical waste — sharps category
    Contaminated water (Cat 3, to wastewater treatment) Municipal wastewater treatment 0.000272 per liter EPA WARM v16 — wastewater treatment
    Disposable PPE — standard Standard landfill 0.25 EPA WARM — mixed plastics
    Disposable PPE — hazmat-contaminated Licensed hazmat or medical waste landfill 0.30–0.55 Apply appropriate hazmat or medical waste factor

    Table 4: Category 12 — Demolished Building Materials

    Material tCO2e per ton (landfill) tCO2e per ton (recycled) Source
    Gypsum drywall (1/2″) 0.16 0.02 EPA WARM v16
    Dimensional lumber / wood framing -0.07 -0.15 EPA WARM v16 — carbon storage credit
    OSB sheathing -0.05 -0.12 EPA WARM v16 — carbon storage credit
    Carpet + pad (standard residential/commercial) 0.33 0.05 EPA WARM v16
    Hardwood flooring -0.12 -0.18 EPA WARM v16 — carbon storage credit
    Vinyl / LVP flooring 0.28 0.08 EPA WARM v16 — plastics category
    Ceramic / porcelain tile 0.04 0.01 EPA WARM v16 — inert material
    Fiberglass batt insulation 0.33 0.05 EPA WARM v16
    Cellulose insulation (spray or loose-fill) 0.06 -0.02 EPA WARM v16
    Spray polyurethane foam insulation (SPF) 0.72 N/A EPA WARM v16 — plastics category
    Acoustic ceiling tiles (standard) 0.12 0.03 EPA WARM v16 — ceiling tile category
    Structural steel (demolished) -0.85 -0.95 EPA WARM v16 — steel recycling credit
    Copper pipe / wiring -0.45 -0.60 EPA WARM v16 — copper recycling credit
    Aluminum (ductwork, framing) -1.20 -1.45 EPA WARM v16 — aluminum recycling credit (high value)

    Weight Estimation Proxies

    When disposal receipts are not available, use these weight proxies to estimate demolished material tonnage:

    Material Weight per sq ft (installed, dry) Notes
    1/2″ gypsum drywall 2.5 lbs Use dry weight, not post-water-damage wet weight
    5/8″ gypsum drywall (Type X) 3.1 lbs Common in commercial construction
    Carpet + pad (residential) 3.0 lbs Including pad and tack strips
    Carpet + pad (commercial, glue-down) 2.2 lbs Heavier carpet, no pad
    LVP / vinyl plank flooring 2.8 lbs Including underlayment
    Ceramic tile (floor, 3/8″) 4.5 lbs Including thin-set mortar
    Acoustic ceiling tiles (2′×2′ standard) 1.8 lbs Mineral fiber type
    Fiberglass batt insulation (3.5″ R-13) 0.5 lbs Per sq ft of coverage area
    Dimensional lumber 2×4 wall framing (per linear foot of wall) 4.0 lbs Assumes 16″ OC framing in 8-ft walls
    Non-friable ACM floor tile (9″×9″) 4.0 lbs Including mastic adhesive

    How often will this reference table be updated?

    The RCP emission factor reference table will be updated annually following the release of updated EPA WARM, EPA Mobile Combustion, and DEFRA databases. Version numbers are included in the table header — always cite the version used in your per-job carbon report data quality notes.

    What if I need an emission factor for a material not in this table?

    First check EPA WARM v16 directly (available free at epa.gov/warm). Second, check the EPA EEIO database for the relevant industry sector. Third, check DEFRA’s Conversion Factors for Company Reporting. If none of these sources contain the specific material, use the closest proxy category and document the substitution in your data quality notes.

    Are these factors suitable for use in EU CSRD reporting?

    EPA and EPA WARM factors are US-specific but are accepted in most international ESG frameworks when accompanied by clear source citation. For EU CSRD reporting specifically, DEFRA factors (UK) or OECD emission factors may be preferred by auditors for non-US operations. The RCP will publish a DEFRA-specific factor table in a future supplement for EU-applicable reporting contexts.


    Table 6: Refrigerant GWP Values — IPCC AR6 Update

    The Global Warming Potential values for refrigerants used in restoration drying equipment have been updated under IPCC Sixth Assessment Report (AR6, 2021). AR6 GWP-100 values are 14–18% higher than AR5 for the HFCs commonly found in LGR dehumidifiers. RCP v1.0 uses AR6 values for refrigerant-related calculations. The EPA AIM Act continues to use AR4 values for regulatory compliance; UNFCCC/Paris reporting uses AR5. When delivering data to clients, disclose which GWP vintage was used.

    Refrigerant Common use in restoration AR5 GWP-100 AR6 GWP-100 Change
    R-410A (HFC-32/125 blend) Most current LGR dehumidifiers ~1,924 ~2,256 +17.3%
    R-32 (HFC-32) Dri-Eaz LGR 6000i; newer units 677 771 +13.9%
    R-454B (HFC-32/HFO-1234yf blend) Next-gen low-GWP units ~467 ~530 +13.5%
    HFC-134a (R-134a) Older residential dehumidifiers 1,300 1,530 +17.7%

    Source: IPCC AR6 WG1, Chapter 7, Table 7.SM.7 (2021). EPA Technology Transitions GWP Reference Table.


    Table 7: EPA eGRID 2023 — Subregional Emission Factors for Major Restoration Markets

    The national average grid factor (0.3499 kg CO₂e/kWh, eGRID 2023) used as the RCP default understates or overstates electricity emissions significantly depending on where equipment is operated. Using location-specific subregion factors improves data quality for clients in GRESB, SBTi, and CSRD reporting contexts.

    Use the subregion factor for the state/metro where the job was performed, not where the contractor’s facility is located.

    eGRID Subregion Primary coverage kg CO₂e/kWh vs. RCP default (0.3499)
    NYUP Upstate New York 0.1101 -68.5%
    CAMX California / Western US 0.1950 -44.3%
    NEWE New England 0.2464 -29.6%
    ERCT Texas (ERCOT) 0.3341 -4.5%
    US Average National default (RCP v1.0) 0.3499 Baseline
    FRCC Florida 0.3560 +1.7%
    SRSO Southeast (excluding FL) 0.3837 +9.7%
    NYCW NYC and Westchester 0.3927 +12.2%

    Source: EPA eGRID2023 Summary Tables Rev 2 (published March 2025). Full subregion table available at epa.gov/egrid. For a California restoration contractor, the national default overstates electricity emissions substantially (the CAMX factor is 44% lower); for a Florida contractor it understates slightly (1.7%). The difference is largest for multi-week jobs with sustained equipment energy consumption.
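    Applying the subregion factor is a one-line calculation; here is a Python sketch using the table above, with a hypothetical 200 kWh equipment load to show the California-versus-default spread:

    # Equipment electricity emissions using the job-site subregion factor
    # instead of the national default. The 200 kWh load is a hypothetical example.

    EGRID_KG_PER_KWH = {
        "NYUP": 0.1101, "CAMX": 0.1950, "NEWE": 0.2464, "ERCT": 0.3341,
        "US_AVG": 0.3499, "FRCC": 0.3560, "SRSO": 0.3837, "NYCW": 0.3927,
    }

    def electricity_emissions_kg(kwh: float, subregion: str = "US_AVG") -> float:
        return kwh * EGRID_KG_PER_KWH[subregion]

    print(round(electricity_emissions_kg(200, "CAMX"), 1))    # ≈ 39.0 kg CO2e
    print(round(electricity_emissions_kg(200, "US_AVG"), 1))  # ≈ 70.0 kg CO2e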


    Table 8: PPE and Consumables — LCA-Sourced Per-Unit Emission Factors

    The EPA EEIO proxies in Table 2 are sector-level estimates. The following values are sourced from published lifecycle assessments and Environmental Product Declarations for specific product types. Use these in place of the EEIO values where the product type matches.

    Item Unit kg CO₂e Source vs. EEIO proxy
    Nitrile glove (3.5g, size M) Each 0.0277 Top Glove LCA 2024, SATRA-verified -82% vs. EEIO pair proxy
    Nitrile glove pair Pair 0.0554 Top Glove LCA 2024 -82% vs. current 0.3 EEIO
    N95 respirator (disposable) Each 0.05 Springer Env. Chem. Letters 2022 -88% vs. current 0.4 EEIO
    DuPont Tyvek 400 coverall (180g HDPE) Each 0.40–0.63 Estimated: 180g × 2.2–3.5 kg CO₂e/kg HDPE -47–65% vs. current 1.2 EEIO
    LVP/LVT flooring (Shaw EcoWorx) 5.2 Shaw Contract EcoWorx Resilient EPD 2023 Consistent with WARM v16 plastics
    Ceramic tile (standard) kg 0.78 ICE Database v3.0 (University of Bath) More granular than WARM v16 inert
    Ready-mix concrete (30 MPa) kg 0.13 ICE Database v3.0 ≈310 kg CO₂e/m³ at typical density
    Polyethylene LDPE sheeting kg 1.793 DEFRA 2024 (closed-loop recycling scenario) Use as proxy for virgin LDPE sheeting
    H₂O₂ antimicrobial (active ingredient) kg active 1.33 ACS Omega 2025 (anthraquinone process) Lower than EEIO chemical proxy

    Note on Tyvek: DuPont has not published an independent lifecycle assessment for standard Tyvek 400 coveralls. The value above is estimated from HDPE production emission factors. DuPont has commissioned an LCA for Tyvek 500 Xpert BioCircle (a recycled-content variant) claiming 58% reduction versus standard Tyvek, which implies a quantified baseline exists internally. The RCP will update this value if DuPont publishes the underlying LCA data.

    Note on nylon carpet (DEFRA 2024): The DEFRA 2024 value of 5.40 kg CO₂e/kg for nylon carpet should be verified against the actual DEFRA 2024 full spreadsheet to confirm whether this represents virgin nylon production or a closed-loop recycling scenario. DEFRA 2024 uses AR5 GWP values throughout.


    Factor Vintage and GWP Basis: Version Disclosure

    RCP v1.0 uses the following factor vintages:

    • Electricity: EPA eGRID 2023 (published March 2025)
    • Mobile combustion / vehicle fuels: EPA 2025 Emission Factors Hub
    • Waste disposal: EPA WARM v16
    • Refrigerant GWPs: IPCC AR6 (2021)
    • Materials (non-EEIO): ICE Database v3.0, EPD-sourced, DEFRA 2024
    • Materials (EEIO proxy): EPA USEEIO v2.0
    • GWP basis: AR6 GWP-100 for refrigerants; AR5 GWP-100 for all other gases (consistent with EPA GHG Inventory basis)

    When factors are updated in patch releases, the factor vintage table updates accordingly. All RCP Job Carbon Reports should reference the schema_version field (RCP-JCR-1.0) which implicitly references the factor table version used at calculation time. For year-over-year comparisons, use the same factor vintage across both years unless a major correction justifies restating prior-year figures.


  • Biohazard and Trauma Scene Cleanup: Scope 3 Emissions Mapping and Calculation Guide


    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    Biohazard and trauma scene cleanup is the fifth core restoration job type covered under the Restoration Carbon Protocol. Its Scope 3 emissions profile is distinct from the other four categories in one critical way: virtually all waste generated is classified as regulated medical or biohazardous waste, triggering disposal emission factors that are 3–5× higher than standard C&D waste. Combined with intensive PPE requirements and specialized treatment chemicals, biohazard cleanup generates significant emissions from a relatively small affected area.

    Job Classification

    Job Type Primary Waste Classification Dominant Emission Category Typical Range per Scene
    Unattended death / decomposition Regulated medical waste + affected porous materials Cat 5 (biohazard disposal) + Cat 12 (demolished materials) 0.8–3.0 tCO2e
    Trauma scene (blood/bodily fluids, limited area) Regulated medical waste, minimal structure affected Cat 5 dominant 0.3–1.2 tCO2e
    Crime scene with structural damage Regulated medical waste + C&D debris Cat 5 + Cat 12 1.0–4.0 tCO2e
    Sharps/drug paraphernalia scenes Sharps waste (regulated) + affected surfaces Cat 5 (sharps disposal) dominant 0.4–1.5 tCO2e
    Hoarding remediation with biohazard component Mixed solid waste + biohazard materials Cat 4 (volume transport) + Cat 5 1.5–6.0 tCO2e

    Category 4: Transportation

    Vehicle Type kg CO2e per mile Use
    Biohazard response vehicle (dedicated, sealed) 0.503–1.084 Crew and initial materials transport (van or truck)
    Medical waste hauler (regulated) 2.80 Regulated biohazardous waste to licensed medical waste facility
    Dump truck (standard C&D, non-biohazard portion) 2.25 loaded Non-regulated demolition debris for hoarding jobs

    Medical waste facility distance: Licensed medical waste treatment facilities (autoclaves, incinerators) are less common than standard landfills. Average distance from job site to licensed biohazard disposal facility is 40–80 miles in most US markets. Use actual manifest distances; apply 60 miles as default where manifests are unavailable.

    Category 1: Materials

    Material Unit kg CO2e per unit Notes
    Hospital-grade disinfectant (quaternary ammonium, EPA-registered) Liter 2.8 EPA EEIO — chemical manufacturing
    Enzyme treatment / biological digester Liter 1.6 EPA EEIO — specialty chemical
    Ozone generator treatment (odor/pathogen) Day-unit 0.35 Equipment embodied carbon amortized
    Hydroxyl generator treatment Day-unit 0.40 Equipment embodied carbon amortized
    Level B PPE full kit (Tyvek + face shield + supplied air) Kit 4.2 Required for decomposition / unattended death
    Level C PPE kit (Tyvek + half-face P100/OV) Kit 1.8 Trauma scenes with active biohazard
    6-mil poly sheeting (containment + floor protection) m² 0.55 EPA EEIO — plastics manufacturing
    Biohazard bags (red, 33-gallon) Each 0.65 Medical-grade polyethylene, red-colored
    Sharps disposal container (1-gallon) Each 0.35 EPA EEIO — plastics/medical equipment

    Category 5: Waste — Biohazard Disposal

    Waste Type Disposal Method tCO2e per ton Source
    Regulated medical waste (soft tissue, bodily fluids, porous materials) Autoclave + landfill 0.55 EPA medical waste incineration / autoclave factors
    Regulated medical waste — high pathogen risk High-temperature incineration 0.85 EPA hazardous waste incineration factors
    Sharps waste (needles, glass) Sharps autoclave or incineration 0.65 EPA medical waste — sharps category
    Contaminated porous building materials (drywall, carpet, subfloor) Licensed medical waste landfill or standard landfill (jurisdiction-dependent) 0.38–0.55 Apply higher factor when facility requires medical waste classification
    Non-biohazard C&D debris (hoarding, structural) Standard landfill 0.16 EPA WARM v16 — standard C&D
    Spent PPE (biohazard-contaminated) Licensed medical waste facility 0.55 Same as regulated medical waste stream

    Jurisdiction note on porous material classification: Whether biologically contaminated porous building materials from biohazard scenes must be disposed of as regulated medical waste (vs. standard C&D waste) varies by state and local regulation. Check with your licensed waste hauler for the applicable classification in your jurisdiction. Apply the higher emission factor (0.55) in conservative calculations or when disposal classification is uncertain.

    Category 12: Demolished Building Materials

    Biohazard scenes frequently require demolition of affected porous materials — flooring, subfloor, drywall — that absorbed biological contamination and cannot be cleaned to restoration standards. When these materials are classified as regulated medical waste at removal, their disposal emissions are captured in Category 5 (same as ACM materials in hazmat abatement). When they are classified as standard C&D waste at the jurisdiction level, use Category 12 EPA WARM factors (same as water damage demolition materials).

    Apply Category 12 factors to demolished materials only when they flow to standard C&D landfill rather than medical waste disposal. When in doubt, apply medical waste disposal factors and capture in Category 5.

    Worked Example: Unattended Death, Single Apartment Unit

    Job profile: Unattended death in a 650 sq ft apartment, discovered after 10 days. Affected area: 400 sq ft (bedroom and hallway). Scope: removal of all porous materials in affected area (carpet, subfloor, drywall to 24″ height), disinfection of all surfaces, odor treatment. Duration: 2 days. Crew: 2 technicians in Level B PPE. Facility: 15 miles from job site. Licensed medical waste facility: 58 miles from job site.

    Category 4 — Transportation

    Crew vehicle: 1 van × 30 mi RT × 3 trips = 90 mi × 0.503 = 45 kg
    Medical waste hauler: 1 × 116 mi RT × 2.80 = 325 kg
    Category 4 total: 370 kg = 0.37 tCO2e

    Category 1 — Materials

    Hospital-grade disinfectant (400 sq ft × 0.025 L/sq ft × 2 applications): 20 L × 2.8 = 56 kg
    Enzyme treatment: 8 L × 1.6 = 13 kg
    Ozone generator: 2 day-units × 0.35 = 1 kg
    Level B PPE (2 workers × 2 days × 3 exits/day = 12 kit replacements): 12 × 4.2 = 50 kg
    Biohazard bags (20 bags): 20 × 0.65 = 13 kg
    Poly sheeting (floor protection + containment): 80 m² × 0.55 = 44 kg
    Category 1 total: 177 kg = 0.18 tCO2e

    Category 5 — Waste

    Regulated medical waste (soft materials, porous materials, PPE): estimated 0.6 tons × 0.55 = 0.33 tCO2e
    Non-hazard debris (drywall, not in medical waste stream): 0.25 tons × 0.16 = 0.04 tCO2e
    Category 5 total: 0.37 tCO2e

    Category 12

    Carpet/pad (400 sq ft): 0.55 tons × 0.33 = 0.18 tCO2e
    Subfloor (400 sq ft plywood): 0.40 tons × -0.05 = -0.02 tCO2e
    Category 12 total: 0.16 tCO2e

    Category tCO2e
    Category 4 — Transportation 0.37
    Category 1 — Materials 0.18
    Category 5 — Waste (regulated medical) 0.37
    Category 12 — Demolished materials 0.16
    Total 1.08 tCO2e
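    For anyone reproducing this calculation in a spreadsheet or script, the category math above condenses to the following Python sketch (factors from the tables in this guide, parameters from the job profile):

    # Reproduction of the unattended-death worked example, category by category.
    cat1_items = [
        (400 * 0.025 * 2) * 2.8,   # hospital-grade disinfectant: 20 L × 2.8
        8 * 1.6,                   # enzyme treatment
        2 * 0.35,                  # ozone generator day-units
        (2 * 2 * 3) * 4.2,         # Level B PPE: 2 techs × 2 days × 3 exits = 12 kits
        20 * 0.65,                 # biohazard bags
        80 * 0.55,                 # poly sheeting, m²
    ]
    cat4_kg = (30 * 3) * 0.503 + 116 * 2.80   # crew van trips + medical waste hauler
    cat1_kg = sum(cat1_items)
    cat5_t = 0.6 * 0.55 + 0.25 * 0.16         # regulated medical waste + non-hazard debris
    cat12_t = 0.55 * 0.33 + 0.40 * -0.05      # carpet/pad + subfloor (WARM factors)

    total_tco2e = cat4_kg / 1000 + cat1_kg / 1000 + cat5_t + cat12_t
    print(round(total_tco2e, 2))              # ≈ 1.08 tCO2e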

    Is biohazard cleanup typically covered by commercial property insurance?

    Yes — biohazard cleanup at commercial properties is typically covered under property insurance. The emissions data from an RCP biohazard calculation should be provided to the commercial property manager for their Scope 3 inventory in the same format as other restoration job types.

    How do you handle hoarding remediation with both biohazard and standard C&D waste streams?

    Split the waste into its classified streams: regulated biohazardous material (apply medical waste disposal factors), standard C&D debris (apply WARM factors), and any hazardous materials encountered (apply hazmat factors). Document each stream separately in the Category 5 breakdown. The mixed nature of hoarding jobs makes them the most complex biohazard calculation scenario.

    Does the RCP apply to crime scenes where law enforcement is involved?

    Yes. The RCP calculation is based on the remediation contractor’s scope of work regardless of the cause of the biohazard condition. The emissions calculation is performed after the scene is released to the contractor and is based on the actual materials used, waste generated, and transportation involved in the cleanup — independent of the legal context of the event.


    Disposal Method Differentiation: Treatment Pathway Drives a 2–16× Emission Difference

    The biohazard guide currently uses a single disposal factor of 0.88 tCO₂e per short ton for all regulated medical/biohazardous waste. This figure is methodologically sound as a default, but the actual emission factor depends entirely on which treatment pathway your waste hauler uses. The difference is not marginal: autoclave-only treatment comes in at roughly half the incineration figure, and onsite disinfection and shredding (where permitted) is more than an order of magnitude lower.

    The following lifecycle emission data comes from a peer-reviewed GHG Comparison Assessment conducted by Carbon Action Consultants (2022, reviewed by Dr. Tahsin Choudhury) commissioned by Envetec, covering 72 metric tonnes of biohazardous waste across treatment pathways:

    Treatment Pathway tCO₂e per metric tonne vs. Direct Incineration
    Onsite disinfection and shredding (where permitted) 0.057 93% lower
    Autoclave → standard landfill (no incineration) 0.46 44% lower
    Direct high-temperature incineration → landfill 0.82 Baseline
    Autoclave → incineration → landfill (dual treatment) 0.90 +10% above direct incineration

    Source: Envetec GHG Comparison Assessment, 2022. Validation: UK NHS hospital waste study (Journal of Cleaner Production, 2020) measured high-temperature incineration at 1,074 kg CO₂e per tonne (0.97 tCO₂e/short ton), consistent with the incineration-pathway figure above.

    The current RCP default of 0.88 tCO₂e/short ton (equivalent to approximately 0.97 tCO₂e/metric tonne) reflects the dual-treatment or incineration-dominant pathway. It is a conservative and defensible default. However, for contractors whose waste haulers use autoclave-only treatment, the actual figure may be nearly half the default.

    How to document: Ask your regulated waste hauler which treatment method they use. Record the answer in the data_quality.notes field of your RCP Job Carbon Report. If the hauler uses autoclave-only, apply 0.46 tCO₂e/metric tonne (0.42 tCO₂e/short ton) and flag it as hauler-confirmed primary data. If unknown, apply the default 0.88 tCO₂e/short ton and flag as proxy.


    Autoclave Energy Intensity

    For contractors or facilities operating onsite autoclave treatment, the energy intensity data is available from peer-reviewed hospital operations research. A study indexed in PubMed (PMID 27075773), tracking 304 days and 2,173 autoclave cycles, measured:

    • Energy intensity: 1.9 kWh per kg of waste sterilized
    • Water consumption: 58 liters per kg of waste

    At the national grid emission factor (0.3499 kg CO₂e/kWh), autoclave treatment of one short ton (907 kg) of biohazardous waste consumes approximately 1,723 kWh of electricity, generating roughly 603 kg CO₂e from energy alone — the same order of magnitude as the peer-reviewed lifecycle figure of 0.46 tCO₂e/tonne, with the exact figure depending heavily on the grid mix where treatment occurs and on which hauling and landfill steps are included in the boundary.
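    The arithmetic behind those figures, as a short Python sketch:

    # Autoclave energy arithmetic from the study figures above (1.9 kWh per kg sterilized)
    # at the national grid factor used elsewhere in the RCP.

    KWH_PER_KG = 1.9
    GRID_KG_CO2E_PER_KWH = 0.3499
    KG_PER_SHORT_TON = 907

    kwh_per_ton = KWH_PER_KG * KG_PER_SHORT_TON           # ≈ 1,723 kWh
    energy_kg_co2e = kwh_per_ton * GRID_KG_CO2E_PER_KWH   # ≈ 603 kg CO2e from electricity alone
    print(round(kwh_per_ton), round(energy_kg_co2e))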


    Odor Neutralization Chemistry: What Has Emission Data and What Doesn’t

    Trauma and biohazard cleanup frequently involves odor neutralization as a final step after biological contamination is removed. The emission factors for these chemicals are poorly documented.

    Peracetic acid (PAA) is the best-documented odor treatment and disinfectant in restoration applications. The Envetec lifecycle study assigns 0.61 kg CO₂e per kg of PAA active ingredient, making it one of the lower-footprint chemical treatments available. PAA breaks down rapidly to acetic acid and water — no persistent residue, no downstream emission concerns.

    Chlorine dioxide (ClO₂) is the dominant chemistry for trauma scene odor elimination. Products using sodium chlorite activated with citric acid (Biocide Systems Room Shocker, ProKure1) are self-generating chemistry requiring no electricity for treatment delivery. No published production emission factor exists for ClO₂ generator products specifically. The RCP treats ClO₂ odor treatment as a data gap. Apply the EPA EEIO chemical manufacturing proxy (2.8 kg CO₂e/kg of active chemical) and flag as estimated.

    Enzyme-based neutralizers similarly lack published LCA data. Treat as a data gap and apply the EEIO proxy.


    ATP Testing: Emissions-Negligible but Methodologically Required

    ATP bioluminescence testing (ANSI/IICRC S540 requires a minimum of two rounds per scene — pre-remediation and clearance) is a minor consumables line item. Hygiena UltraSnap ATP swabs weigh approximately 5–10g each (polypropylene housing, pre-moistened fiber tip, luciferin/luciferase reagent). Estimated carbon footprint: 20–50g CO₂e per swab using generic small medical plastic device lifecycle data. A typical trauma scene requiring 10–30 swabs generates 0.2–1.5 kg CO₂e from ATP testing.

    This is well under 1% of total job emissions on all but the smallest trauma scenes. ATP testing is documented here for methodological completeness — include it in Category 1 if your job tracking captures swab consumption, but it is acceptable to omit and note the exclusion as immaterial in the data_quality section.


    Sources and References — Biohazard Technical Additions

    • Envetec / Carbon Action Consultants. GHG Comparison Assessment for Biohazardous Waste Treatment Pathways. 2022. envetec.com
    • PubMed PMID 27075773. “Steam sterilisation’s energy and water footprint.” Journal of Hospital Infection. 2016.
    • Springer Environmental Chemistry Letters. “Impact of waste of COVID-19 protective equipment on the environment.” 2022.
    • Top Glove. Life Cycle Assessment Results for Nitrile Gloves. SATRA-verified. 2024.
    • ANSI/IICRC S540. Standard for Professional Biohazard Remediation. Current edition.