Tag: Persistent Context

  • BigQuery Knowledge Ledger — Persistent AI Memory for Content Operations

    What Is a BigQuery Knowledge Ledger?
    A BigQuery Knowledge Ledger is a persistent AI memory layer — your content, decisions, SOPs, and operational history stored as vector embeddings in Google BigQuery, queryable in real time. When a Claude session opens, you query the ledger instead of re-pasting context. Your AI starts informed, not blank.

    Every Claude session starts from zero. You re-brief it on your clients, your sites, your decisions, your rules. Then the session ends and it forgets. For casual use, that’s fine. For an operation running 27 WordPress sites, 500+ published articles, and dozens of active decisions — that reset is an expensive tax on every session.

    The BigQuery Knowledge Ledger is the solution we built for ourselves. It stores operational knowledge as vector embeddings — 925 content chunks across 8 tables in our production ledger — and makes it queryable from any Claude session. The AI doesn’t start blank. It starts with history.

    Who This Is For

    Agency operators, publishers, and AI-native teams running multi-site content operations where the cost of re-briefing AI across sessions is measurable. If you’ve ever said “as I mentioned before” to Claude, you need this.

    What We Build

    • BigQuery dataset — operations_ledger schema with 8 tables: knowledge pages, embedded chunks, session history, client records, decision log, content index, site registry, and change log
    • Embedding pipeline — Vertex AI text-embedding-005 model processes your existing content (Notion pages, SOPs, articles) into vector chunks stored in BigQuery
    • Query interface — Simple Python function (or Cloud Run endpoint) that accepts a natural language query and returns the most relevant chunks for context injection
    • Claude integration guide — How to query the ledger at session start and inject results into your Claude context window
    • Initial seed — We process your existing Notion pages, key SOPs, and site documentation into the ledger on setup
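The query interface described above can be sketched in a few lines. This is an illustrative outline, not the delivered code: the table name `operations_ledger.embedded_chunks` and its `embedding`/`chunk_text` columns are assumptions about the schema, while BigQuery's `VECTOR_SEARCH` function and Vertex AI's `text-embedding-005` model are real services.

```python
CHUNK_TABLE = "your-project.operations_ledger.embedded_chunks"  # assumed name

def build_search_sql(table: str, top_k: int = 5) -> str:
    """SQL for BigQuery VECTOR_SEARCH against a chunks table.

    Returns the nearest chunks to the @query_embedding parameter
    by cosine distance. Column names are illustrative.
    """
    return f"""
    SELECT base.chunk_text, distance
    FROM VECTOR_SEARCH(
      TABLE `{table}`, 'embedding',
      (SELECT @query_embedding AS embedding),
      top_k => {top_k}, distance_type => 'COSINE')
    """

def query_ledger(question: str, top_k: int = 5) -> list[str]:
    """Embed a natural-language question, return the most relevant chunks."""
    # Imports deferred so this module loads without GCP libraries installed.
    from google.cloud import bigquery
    from vertexai.language_models import TextEmbeddingModel

    model = TextEmbeddingModel.from_pretrained("text-embedding-005")
    query_vec = model.get_embeddings([question])[0].values

    client = bigquery.Client()
    job = client.query(
        build_search_sql(CHUNK_TABLE, top_k),
        job_config=bigquery.QueryJobConfig(query_parameters=[
            bigquery.ArrayQueryParameter("query_embedding", "FLOAT64", query_vec)
        ]),
    )
    return [row.chunk_text for row in job]
```

The same function works behind a Cloud Run endpoint; the HTTP wrapper just calls `query_ledger` and returns JSON.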

    What We Deliver

    • BigQuery dataset + 8-table schema deployed to your GCP project
    • Vertex AI embedding pipeline (text-embedding-005)
    • Query function (Python + optional Cloud Run endpoint)
    • Initial content seed (up to 100 Notion pages or documents)
    • Claude session integration guide
    • Ongoing ingestion script (add new content to the ledger)
    • Technical walkthrough + handoff documentation
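The session-integration step is mostly prompt assembly: format the retrieved chunks into a preamble and prepend it to the first message of the session. A minimal sketch, with the header wording and character budget as illustrative choices:

```python
def build_context_preamble(chunks: list[str], max_chars: int = 8000) -> str:
    """Join retrieved ledger chunks into one block for the start of a
    Claude session, stopping at a rough character budget."""
    header = "Operational context retrieved from the knowledge ledger:\n\n"
    body, used = [], 0
    for i, chunk in enumerate(chunks, 1):
        entry = f"[{i}] {chunk.strip()}\n\n"
        if used + len(entry) > max_chars:
            break  # drop remaining chunks rather than truncate mid-chunk
        body.append(entry)
        used += len(entry)
    return header + "".join(body)
```

The numbered entries make it easy to ask Claude to cite which ledger chunk a claim came from.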

    Stop Re-Briefing Your AI Every Session

    Tell us how many sites, documents, or SOPs you’re managing and what your current re-briefing tax looks like. We’ll scope the ledger build.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    Does this require Google Cloud?

    Yes. BigQuery and Vertex AI are Google Cloud services. You need a GCP project with billing enabled. We handle all setup and deployment.

    What’s the ongoing cost in GCP?

    BigQuery storage for a 1,000-chunk ledger costs less than $1/month. Embedding runs (adding new content) cost fractions of a cent per chunk via Vertex AI. Query costs are negligible at typical session volumes.
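The storage figure is easy to sanity-check. Assuming 768-dimensional FLOAT64 embeddings (text-embedding-005's output size), roughly 1 KB of chunk text per row, and active logical storage at about $0.02 per GB per month (check current BigQuery pricing for your region):

```python
CHUNKS = 1_000
EMBED_DIMS = 768           # text-embedding-005 output dimensionality
BYTES_PER_FLOAT = 8        # BigQuery FLOAT64
TEXT_BYTES = 1_024         # rough average chunk text size (assumption)
PRICE_PER_GB_MONTH = 0.02  # approximate active logical storage price

row_bytes = EMBED_DIMS * BYTES_PER_FLOAT + TEXT_BYTES
total_gb = CHUNKS * row_bytes / 1e9
monthly_cost = total_gb * PRICE_PER_GB_MONTH
print(f"{total_gb * 1000:.1f} MB stored, ~${monthly_cost:.5f}/month")
```

A 1,000-chunk ledger lands in the single-digit-megabyte range, far below $1/month and typically inside BigQuery's free storage tier.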

    Can this work with tools other than Claude?

    Yes. The ledger is model-agnostic — it returns text chunks that can be injected into any LLM context. ChatGPT, Gemini, and Perplexity integrations all work with the same query interface.

    What format does my existing content need to be in?

    Notion pages (via API), plain text, markdown, or Google Docs. We handle the conversion and chunking during the initial seed. PDFs and Word docs require an additional preprocessing step.
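For markdown and plain text, chunking is the simple part of the seed. A simplified sketch (the production pipeline works in tokens and preserves heading context; this version splits on paragraphs with a character budget, and all parameters are illustrative):

```python
def chunk_text(doc: str, max_chars: int = 2000, overlap: int = 1) -> list[str]:
    """Split a document into chunks of whole paragraphs, carrying
    `overlap` trailing paragraphs into the next chunk for continuity."""
    paragraphs = [p.strip() for p in doc.split("\n\n") if p.strip()]
    chunks, current = [], []
    for para in paragraphs:
        if current and len("\n\n".join(current + [para])) > max_chars:
            chunks.append("\n\n".join(current))
            current = current[-overlap:] if overlap else []
        current.append(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk then goes through the embedding model and lands as one row in the chunks table.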

    Last updated: April 2026