  • Workers for Agents in TypeScript: Patterns That Hold Up in Production

    The 60-second version

    Workers reward a specific style of TypeScript: small, single-purpose, structured-input-and-output, well-typed. The constraints (30 seconds, 128MB, no state) push you toward this style automatically. Workers that hold up in production share patterns: typed input/output schemas, defensive HTTP calls with timeouts, structured error returns, no hidden side effects.

    Five production patterns

    1. Type your input and output.
    Type strictly. The agent works against the schema. Schema drift breaks the agent silently. (A combined sketch of these patterns follows the list.)
    2. Defensive HTTP with timeouts.
    External API calls inside a 30-second budget need their own timeouts. A 25-second API call leaves 5 seconds for everything else. Set explicit fetch timeouts shorter than the Worker timeout.
    3. Structured error returns instead of throws.
    Throw inside a Worker and the agent gets opaque failure. Return structured error objects and the agent can reason about the failure and respond gracefully.
    4. Idempotency where state matters.
    Workers have no persistent state, but they can hit external systems that do. If the external call is non-idempotent (e.g., creates a record), include an idempotency key derived from input. Calling the Worker twice should produce one record, not two.
    5. Approved domains as a deployment artifact.
    Track domain approvals in code. When a Worker stops working in production, “did the approved domains change” is the first thing to check.
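
    Here is a minimal sketch combining these patterns. The handler signature, API URL, and field names are hypothetical; adapt them to whatever your Worker runtime actually expects, and assume the external API honors a Stripe-style idempotency header.

    ```ts
    // Pattern 1: typed input and output. The agent works against these shapes.
    type CreateRecordInput = { recordId: string; source: string };

    type CreateRecordResult =
      | { ok: true; data: { id: string; status: string } }
      | { ok: false; error: { code: string; message: string } };

    // Pattern 5: approved domains tracked in code, next to the Worker.
    const APPROVED_DOMAINS = ["api.example.com"];

    // Pattern 2: explicit fetch timeout, well under the 30-second Worker budget.
    async function fetchWithTimeout(url: string, ms: number, init: RequestInit = {}): Promise<Response> {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), ms);
      try {
        return await fetch(url, { ...init, signal: controller.signal });
      } finally {
        clearTimeout(timer);
      }
    }

    export async function handler(input: CreateRecordInput): Promise<CreateRecordResult> {
      const url = "https://api.example.com/records";
      try {
        if (!APPROVED_DOMAINS.includes(new URL(url).hostname)) {
          return { ok: false, error: { code: "domain_not_approved", message: url } };
        }
        const res = await fetchWithTimeout(url, 10_000, {
          method: "POST",
          headers: {
            "content-type": "application/json",
            // Pattern 4: idempotency key derived from input, so calling the
            // Worker twice produces one record, not two.
            "idempotency-key": `${input.source}:${input.recordId}`,
          },
          body: JSON.stringify(input),
        });
        if (!res.ok) {
          // Pattern 3: structured error instead of a throw; the agent can reason about it.
          return { ok: false, error: { code: `http_${res.status}`, message: res.statusText } };
        }
        const body = await res.json();
        return { ok: true, data: { id: body.id, status: body.status } };
      } catch (e) {
        return { ok: false, error: { code: "fetch_failed", message: String(e) } };
      }
    }
    ```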

    Three production failures to design around

    1. The 30-second wall. Aim for under 5 seconds typical, under 15 worst case. Long calls compound under retries.
    2. Silent domain blocks. A Worker calling a non-approved domain fails with an error that isn’t always obvious. Log every outbound destination.
    3. Memory exhaustion via large responses. Don’t pull a 50MB JSON response into a 128MB Worker. Stream, paginate, or pre-filter at the source, as in the sketch below.
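
    The third failure has a concrete countermeasure worth sketching. Assuming the source can emit newline-delimited JSON and the runtime supports the Web Streams API (TextDecoderStream), stream the body and keep only what you need rather than buffering the whole payload:

    ```ts
    // A sketch of pre-filtering a large NDJSON response without buffering it.
    async function streamActiveIds(url: string): Promise<string[]> {
      const res = await fetch(url);
      if (!res.body) throw new Error("response had no body");
      const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
      const ids: string[] = [];
      let buf = "";
      while (true) {
        const { done, value } = await reader.read();
        if (value) buf += value;
        const lines = buf.split("\n");
        // Keep a trailing partial line in the buffer until more bytes arrive.
        buf = done ? "" : lines.pop() ?? "";
        for (const line of lines) {
          if (!line.trim()) continue;
          const row = JSON.parse(line);
          if (row.status === "active") ids.push(row.id); // keep only what you need
        }
        if (done) return ids;
      }
    }
    ```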

    Testing strategy

    Unit-test the Worker logic separately from the agent. Use mock HTTP. Then integration-test with the actual agent calling the Worker. The two test layers catch different bugs.
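
    A sketch of the unit layer, assuming Vitest and the handler from the Workers sketch above (the module path is hypothetical):

    ```ts
    import { describe, expect, it, vi } from "vitest";
    import { handler } from "./worker"; // hypothetical module path

    describe("handler", () => {
      it("returns a structured error when the upstream API is down", async () => {
        // Mock fetch so the unit layer never makes a real HTTP call.
        vi.stubGlobal("fetch", vi.fn(async () =>
          new Response(null, { status: 503, statusText: "Service Unavailable" })));
        const result = await handler({ recordId: "r1", source: "test" });
        expect(result.ok).toBe(false);
        if (!result.ok) expect(result.error.code).toBe("http_503");
      });
    });
    ```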

    What to read next

    Workers + External APIs, Notion AI Meets MCP, Workers for Agents foundation piece, Security Posture.

  • Designing a Database Schema for AI Autofill That Stays Trustworthy

    The 60-second version

    Most database schemas were designed for humans typing things in. Autofill works differently — it processes one row at a time using row content and a prompt. Schemas designed for Autofill make the prompt’s job easier and the human’s job auditable. Controlled vocabularies. Source attribution. Fill-date stamps. Clear separation between human and agent fields. Get the schema right and Autofill is reliable. Get it wrong and you’ll fight Autofill forever.

    Schema design principles

    1. Controlled vocabularies over free text. A “category” field with five select options outperforms a free-text field. Autofill picks from a list reliably; it improvises inconsistently. (A type-level sketch of these principles follows the list.)
    2. Atomic fields over compound fields. “Customer info” as a single text field is bad for Autofill. Separate fields (name, industry, size, region) each get filled cleanly.
    3. Source attribution columns. Add a “filled by” select (Human / Basic Autofill / Custom Agent) and a “fill date.” The audit trail makes drift visible.
    4. Separate human and agent fields. Don’t let Autofill overwrite human-entered fields. Configure Autofill to only fill empty cells or only specific columns marked for agent use.
    5. Validation columns where stakes are high. A “verified by human” checkbox on agent-filled fields creates a gate where human review happens before the field is trusted downstream.
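
    As a type-level sketch of these principles (field names are illustrative, not a Notion API shape):

    ```ts
    // Principle 1: controlled vocabularies as union types, not free text.
    type Category = "Guide" | "Comparison" | "Tutorial" | "Reference" | "Opinion";
    // Principle 3: source attribution values.
    type FilledBy = "Human" | "Basic Autofill" | "Custom Agent";

    interface ContentRow {
      // Human-owned fields: Autofill never writes these (principle 4).
      title: string;
      url: string;
      verifiedByHuman: boolean; // principle 5: the review gate

      // Agent-fillable fields, atomic rather than compound (principle 2).
      summary: string | null;
      category: Category | null;

      // Attribution columns (principle 3).
      filledBy: FilledBy | null;
      fillDate: string | null; // ISO date stamp
    }
    ```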

    Patterns for specific use cases

    Content library: title (human), URL (human), summary (Autofill), category (Autofill from controlled list), tags (Autofill from controlled list), filled-by (auto), fill-date (auto), verified (human checkbox).
    CRM: company name (human), industry (Autofill from list), size (Autofill from list), key contacts (Autofill extraction), notes (human), last interaction (formula from related database).
    Research database: source (human), key claim (Autofill summary), category (Autofill), related projects (Autofill relation), my take (human), filled-by (auto).

    Three schema mistakes

    1. Letting Autofill manage relation properties. Cross-row relationships are judgment calls. Autofill misses context. Keep relations human.
    2. No fill date. Without a date stamp, you can’t tell which data is stale. After 30 days, Autofill output may not reflect current page state.
    3. Mixing free text with structured fields. A free-text “notes” field next to an Autofill “summary” creates confusion about which is canonical.

    What to read next

    AI Autofill Databases foundation piece, Editorial Surface Area, Second-Brain Architecture, Trust Gap.

  • Notion AI vs Zapier AI: Which Automation Layer Wins For Your Use Case

    The 60-second version

    Zapier and Notion AI overlap in concept (automate routine work) but optimize for different operators. Zapier: massive integration catalog, no-code, simple triggers and actions, optimized for “if this, then that” patterns. Notion AI: AI reasoning native, deep workspace context, optimized for “decide what to do given context, then act.” Use Zapier for breadth of simple automations. Use Notion Agents for depth of reasoning. The two are complementary.

    When Zapier wins

    • You need many simple automations across many apps
    • Non-technical operators need to build automations themselves
    • The trigger logic is straightforward (if X, do Y)
    • You don’t have or want AI reasoning in the loop
    • You’re not heavily invested in Notion as a platform

    When Notion Agents win

    • The workflow requires understanding Notion workspace content
    • AI reasoning about whether and how to act matters
    • Schedule-driven autonomous work is the goal
    • The workflow output is in Notion or affects Notion data
    • You want agents that can compose multi-step reasoning

    What Zapier does that Notion Agents don’t

    • Thousands of app integrations out of the box
    • Visual no-code building accessible to non-developers
    • Flat-rate pricing easier to budget
    • Established for years; lots of recipes and patterns

    What Notion Agents do that Zapier doesn’t

    • AI reasoning native to the workflow
    • Workspace context understanding
    • Skills (natural-language workflow definitions)
    • Workers for custom code
    • Database fluency at the platform level

    The combined pattern

    Many operators use both:
    – Zapier for cross-app plumbing (lead from form → CRM → Slack → email)
    – Notion Agents for workspace reasoning (synthesize lead context, decide priority, draft response)
    – Sometimes Zapier triggers a Notion agent run
    Treat them as layers: Zapier moves data; Notion Agents make decisions about that data.

    Where this goes wrong

    1. Trying to use Zapier for AI reasoning. Zapier has AI features but they’re shallow compared to Notion Agents.
    2. Trying to use Notion Agents for cross-app plumbing. Possible via Workers/MCP, but Zapier’s integration catalog is broader.
    3. Picking based on price alone. The right tool for the job costs less than the wrong tool, even at higher per-task pricing.

    What to read next

    Notion Agents vs n8n Alone, n8n MCP Bridge, Workers + External APIs, AI-Native Company Patterns.

  • Building Your First Notion Skill: A Step-By-Step Walkthrough

    The 60-second version

    Building a skill that works on the first try is rare. Building a skill that works after three iterations is normal. The discipline is starting with a narrow scope, writing specific instructions, testing against real inputs, and tightening based on what fails. Most operators build skills that are too broad and too vague. The fix is the opposite of intuition — narrower, more specific, more bounded.

    Step-by-step

    Step 1 — Pick the right first skill. Not the most ambitious one. The most repetitive one. “Weekly digest from project database” is a great first skill. “Generate our entire content strategy” is a terrible first skill.
    Step 2 — Write the instructions. Specific format. Specific sections. Specific length. Specific tone. “Summarize” produces variance; “Produce a one-page summary with these five sections in this order, max two sentences per section, in active voice” produces consistency.
    Step 3 — Bound the context. Which database does it read? Which pages? Which fields? Pin tightly. Expand only when needed.
    Step 4 — Test five times. Run the skill against five different real inputs. Look at outputs side by side. The variance you see is the variance you’ll get in production.
    Step 5 — Tighten based on failures. What was wrong in any output? Update the instructions to prevent that. Re-test. Loop.
    Step 6 — Document the skill. Note what it does, when to call it, and what its known failure modes are.

    Three patterns that fail

    1. The mega-skill. A skill that “drafts the weekly report including stakeholder updates and exec summary and content calendar.” Break it into three skills.
    2. The vague skill. “Help me write.” Define what kind of help, what kind of writing, in what format.
    3. The unbounded skill. No context boundaries. The agent reads everything and produces output that’s vaguely related to everything and specific to nothing.

    Where this goes wrong

    1. Skipping the five-test step. A skill that works once can still fail on the next input. Test for variance early.
    2. Treating skills as static. Skills need maintenance. When a database schema changes, the skill changes.
    3. Building too many skills too fast. Three great skills beat ten mediocre ones.

    What to read next

    How Notion Skills Work, Custom Agents vs Basic, Workers for Agents, Prompt Patterns That Work Inside Notion.

  • Notion Agents vs n8n Alone: When the Workflow Belongs Inside Notion

    The 60-second version

    This isn’t either-or. n8n is the deterministic workflow engine — when X happens, do Y across these 5 apps. Notion Agents are the reasoning layer — given the context, decide whether X actually warrants action and what the right action is. Combined via the n8n MCP bridge, they form a complete automation stack: agent reasons, n8n executes. Operators who treat them as competitors miss the leverage.

    When Notion Agents win

    • The workflow needs to read and synthesize Notion workspace content
    • Natural-language understanding of context matters
    • The “decide whether to act” question is the hard part
    • Schedule-driven autonomous work is the goal
    • The workflow output is itself in Notion

    When n8n wins

    • Pure cross-app data movement (no reasoning needed)
    • Hundreds of integration options matter
    • Visual workflow building with branching logic
    • High-volume deterministic automations
    • Workflows that don’t touch Notion at all

    The combined pattern

    The pattern that’s emerging:
    – Notion Agent decides what to do based on context
    – n8n workflow executes the cross-app coordination
    – Connected via the n8n MCP bridge inside Notion
    Example: Agent reads a new lead in Notion → reasons whether it matches ICP → if yes, calls an n8n workflow that updates Salesforce, sends a Slack notification, and schedules a follow-up email. (A sketch of the handoff follows.)
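
    A sketch of that handoff from the agent side, assuming the n8n workflow is exposed as a webhook (n8n generates the real URL when you add a Webhook trigger node; everything below is a placeholder):

    ```ts
    type LeadHandoff = {
      leadId: string;
      company: string;
      priority: "high" | "normal";
    };

    // The agent has already done the reasoning (the ICP match); this just
    // hands the deterministic fan-out (Salesforce, Slack, email) to n8n.
    export async function triggerFollowUp(lead: LeadHandoff): Promise<{ ok: boolean; status: number }> {
      const res = await fetch("https://n8n.example.com/webhook/lead-follow-up", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(lead),
      });
      return { ok: res.ok, status: res.status };
    }
    ```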

    What n8n does that Notion Agents don’t

    • Massive integration catalog (Salesforce, Stripe, hundreds of others)
    • Visual flow building
    • High-throughput deterministic execution
    • Self-hosting option for compliance-sensitive use cases

    What Notion Agents do that n8n doesn’t

    • Natural-language understanding of unstructured workspace content
    • Native Notion database manipulation
    • Skills (saved natural-language workflows)
    • Workers for custom code execution
    • Schedule-driven autonomous reasoning

    Where this goes wrong

    1. Trying to do everything in one tool. Reasoning in n8n (limited) or deterministic execution in Notion Agents (expensive) is the wrong direction.
    2. Skipping the MCP bridge. Without it, you re-implement n8n integrations as Workers. Don’t.
    3. Letting agent reasoning replace simple n8n triggers. If the trigger is “row added to database,” that’s deterministic. Just use n8n.

    What to read next

    n8n MCP Bridge, Workers + External APIs, Notion AI vs Zapier, MCP foundation piece.

  • Notion AI vs Microsoft Copilot: Two Philosophies of Embedded AI

    The 60-second version

    The choice is philosophical, not feature-by-feature. Notion AI says: “build your work in one structured workspace and let AI flow through everything.” Microsoft Copilot says: “use the tools you already use and let AI sit inside each one.” Both are valid. Both work. Which fits depends on whether your team’s pattern is consolidated workspace or distributed productivity suite.

    When Notion AI wins

    • You want one unified workspace
    • Custom Agents and scheduled autonomous work matter
    • Database-driven workflows and Autofill are core
    • Smaller teams (under ~200) where Notion’s collaboration model fits
    • Teams that haven’t deeply invested in Microsoft 365

    When Microsoft Copilot wins

    • You’re already deep in Microsoft 365
    • Excel-heavy analysis is core to your workflow
    • Outlook + Teams is your primary collaboration surface
    • Enterprise IT requirements favor Microsoft (compliance, identity, security)
    • Larger orgs where Microsoft’s enterprise plumbing matters

    What Copilot does that Notion AI doesn’t

    • Native deep integration into Excel, Word, PowerPoint, Outlook, Teams
    • Enterprise identity and compliance posture (Azure AD, Purview)
    • Strong Excel-native data analysis with formula generation
    • Teams meeting transcription and recap as a primary surface

    What Notion AI does that Copilot doesn’t

    • Custom Agents running on schedules
    • Workers for code execution
    • The Notion-style structured knowledge graph
    • MCP and n8n integrations
    • More flexible workspace shape

    The IT-procurement layer

    Larger organizations often have IT and procurement preferences that drive this decision more than any feature comparison. Microsoft enterprise contracts, identity integration, and compliance posture are real factors. Notion’s enterprise story is improving, but Microsoft has a decades-long head start in that lane.

    Where comparisons go wrong

    1. Comparing feature lists in isolation. Real value is integration depth into the platform you actually use.
    2. Underestimating Microsoft’s enterprise plumbing. For large orgs, identity and compliance are not afterthoughts.
    3. Underestimating Notion’s flexibility. For smaller teams, Notion’s malleability beats Microsoft’s rigidity.

    What to read next

    Notion AI vs Gemini, Notion AI vs ChatGPT, Editorial Surface Area, AI-Native Company Patterns.

  • Notion AI vs Gemini for Workspaces: The Document AI Showdown

    The 60-second version

    Most “Notion AI vs Gemini” comparisons miss the actual decision: which platform does your work live in? If you’re a Notion-first team, Notion AI is the integrated answer. If you’re a Google Workspace team, Gemini integrates more deeply into Docs, Sheets, Slides, and Gmail than any third-party AI will. Trying to use both heavily creates context-splitting problems. Pick the platform first. The AI follows.

    When Notion AI wins

    • Your work lives in Notion (databases, pages, agents)
    • You use Custom Agents on schedules
    • Cross-source synthesis across Notion + connected sources matters
    • Database manipulation and Autofill is core to your workflow
    • Multi-app integration via MCP and Workers

    When Gemini for Workspace wins

    • Your work lives in Google Docs, Sheets, Slides
    • Real-time multi-user document collaboration is dominant
    • Email and calendar are the primary surfaces (Gemini’s Gmail integration is strong)
    • Sheets-heavy analysis benefits from Gemini’s native data understanding
    • You’re already paying for Google Workspace

    The stacking question

    Some teams run both. Three patterns that work:
    1. Notion as second brain, Google as collaboration layer. Notion holds structured knowledge; Google holds in-flight collaborative docs.
    2. Notion as agent layer, Google as document factory. Notion runs the agents and synthesis; Google produces the actual docs that get sent.
    3. Drive integration as the bridge. Notion AI reads Google Drive content via integration so the agent can synthesize across both surfaces.

    What Gemini does that Notion AI doesn’t

    • Real-time multi-user editing with AI assistance
    • Sheets-native analysis and chart generation
    • Deep Gmail integration
    • Slides-native design and image generation

    What Notion AI does that Gemini doesn’t

    • Scheduled autonomous agents (Custom Agents)
    • Database property Autofill at the workspace level
    • Workers for code execution
    • The Notion-style structured knowledge graph
    • MCP-based tool integration

    Where comparisons go wrong

    1. Treating raw model quality as the deciding factor. Both use strong models. Integration depth matters more.
    2. Underestimating switching costs. Moving an org for AI reasons is rarely worth it.
    3. Trying to use both heavily. Context splits. Synthesis suffers.

    What to read next

    Notion AI vs ChatGPT, Notion AI vs Microsoft Copilot, Editorial Surface Area, Google Drive Integration.

  • Notion AI vs ChatGPT for Daily Knowledge Work

    The 60-second version

    This isn’t a winner-take-all comparison. Notion AI and ChatGPT are different categories of tool that get incorrectly compared because they both use the word “AI.” Notion AI knows your workspace. ChatGPT knows the open web. The right operator stack uses both. The question isn’t which to pick; it’s how to route work between them.

    When Notion AI wins

    • Anything that requires knowing your specific content
    • Synthesis across your databases, pages, and connected sources
    • Document work where the doc lives in your workspace
    • Recurring tasks that benefit from agent automation
    • Mobile use where seamless integration matters

    When ChatGPT wins

    • Open-web research
    • Brainstorming on topics outside your workspace
    • Code generation (currently ChatGPT and Claude lead here)
    • General-purpose Q&A
    • Conversational exploration of ideas

    How they stack

    The pattern that works for most operators: ChatGPT for “thinking out loud” and external research; Notion AI for everything that touches your actual work. Use ChatGPT to draft an idea, then move the polished version into Notion where it joins your actual workspace and Notion AI takes over.

    What ChatGPT does that Notion doesn’t (yet)

    • Image generation
    • Voice conversations as a primary mode
    • Custom GPT marketplace
    • Data analysis on uploaded files at scale

    What Notion AI does that ChatGPT doesn’t

    • Persistent context across your workspace
    • Database manipulation and Autofill
    • Custom Agents running on schedules
    • Workers for code execution
    • Native integration with Slack, Mail, Calendar at the workspace level

    The pricing reality

    ChatGPT Plus is $20/month per user. Notion Business is $20/user/month billed annually, with separate Custom Agent credits ($10 per 1,000) starting May 4. For a team using both heavily, the combined cost is meaningful.

    Where comparisons go wrong

    1. Asking “which is smarter.” They use overlapping models. Raw model intelligence is similar; what differs is integration depth.
    2. Trying to pick one. The right answer is usually both, with clear use-case routing.
    3. Treating ChatGPT memory as equivalent to Notion’s workspace context. ChatGPT memory is conversational. Notion’s context is structured workspace data. Different categories.

    What to read next

    Notion AI vs Claude Projects, Notion AI vs Gemini, Editorial Surface Area, Auto Model Selection.

  • Notion AI vs Claude Projects: Which Belongs in Your Stack

    The 60-second version

    Notion AI and Claude Projects both let you bring custom context to AI. The difference is what surrounds the AI. Notion AI lives inside a workspace with databases, integrations, schedules, and a team. Claude Projects lives inside a conversation with files, instructions, and the conversation history. For ongoing operational work where the AI needs to be part of how you work, Notion AI fits. For deep focused work where conversation quality is the primary value, Claude Projects fits. Many operators use both.

    When Notion AI wins

    • Persistent operational context across the workspace
    • Custom Agents on schedules
    • Database fluency and Autofill
    • Native integrations (Slack, Mail, Calendar)
    • Team collaboration patterns
    • Mobile and cross-device access

    When Claude Projects wins

    • Deep, focused task work
    • Strong conversation continuity within a topic
    • Specific instruction sets per project
    • File-heavy reference contexts (code, research, large documents)
    • When conversation quality (Claude’s strength) matters more than integration

    The stacking pattern

    The pattern many operators use:
    Notion AI for the ongoing rhythm of work — agents, databases, daily operational synthesis
    Claude Projects for “I need to deeply work on X” sessions — heavy reasoning, complex code, large reference contexts
    The two don’t conflict; they cover different time horizons. Notion AI is always-on background. Claude Projects is intentional focused sessions.

    What Claude Projects does that Notion AI doesn’t

    • File upload context with longer effective memory in-conversation
    • More flexible custom instructions per project
    • Conversation continuity that’s purely Claude-native (no model-switching)

    What Notion AI does that Claude Projects doesn’t

    • Workspace databases and Autofill
    • Scheduled agent execution
    • Native integrations beyond conversation
    • Multi-user collaboration on the same context

    Where comparisons go wrong

    1. Treating them as direct substitutes. They overlap but serve different shapes of work.
    2. Picking based on raw conversation quality alone. That favors Claude. But conversation quality isn’t the whole product.
    3. Picking based on integration breadth alone. That favors Notion. But integration matters more for some workflows than others.

    What to read next

    Notion AI vs ChatGPT, Notion AI vs Gemini, Editorial Surface Area, Custom Agents vs Basic.

  • From Notion AI Drafts to WordPress Publish: A Two-Stage Content Pipeline

    The 60-second version

    Drafting in WordPress and fixing problems after publish is the wrong direction. Drafting in Notion and only pushing to WordPress when corpus quality is locked is much stronger. The first stage is where you do the editorial work — multi-model review passes, scoring against a rubric, cross-article coherence checks, persona variant planning. The second stage is where WordPress’s schema, interlinking, and image-handling capabilities apply the final treatment. Two stages. Different jobs. Each does what it’s best at.

    What the pipeline looks like

    Stage 1 — Notion foundry:
    1. Articles drafted in a Notion database
    2. Multi-model review passes (Claude, GPT, Gemini, Notion AI)
    3. Quality Score Rubric run on each article
    4. Cross-article coherence and link map check
    5. Variant spawn map populated
    6. Articles foundry-locked at Quality Score 8.5+
    Stage 2 — WordPress drafts:
    1. Push from Notion to WordPress drafts via integration (a minimal push sketch follows this list)
    2. Schema injection (Article, FAQ, Speakable, BreadcrumbList)
    3. Internal linking against existing WordPress content
    4. Image optimization (WebP conversion, IPTC injection)
    5. AEO refresh (FAQ blocks, PAA structuring)
    6. Final review and scheduled publish
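
    Step 1 of Stage 2 is automatable. A minimal sketch, assuming the official @notionhq/client SDK and the standard WordPress REST API with application-password auth; the database ID, status property, and property names are placeholders, and block-to-HTML conversion is elided (real pipelines convert Notion blocks to HTML before the push):

    ```ts
    import { Client } from "@notionhq/client";

    const notion = new Client({ auth: process.env.NOTION_TOKEN });

    async function pushLockedArticles(databaseId: string, wpBase: string, wpAuth: string) {
      // Pull only foundry-locked articles (hypothetical status property).
      const locked = await notion.databases.query({
        database_id: databaseId,
        filter: { property: "Status", select: { equals: "Foundry-locked" } },
      });

      for (const page of locked.results) {
        const title =
          (page as any).properties?.Name?.title?.[0]?.plain_text ?? "Untitled";
        // Create a WordPress draft, never a live post: schema injection,
        // internal linking, and image treatment still happen before publish.
        await fetch(`${wpBase}/wp-json/wp/v2/posts`, {
          method: "POST",
          headers: {
            authorization: `Basic ${wpAuth}`, // application-password credentials
            "content-type": "application/json",
          },
          body: JSON.stringify({ title, status: "draft" }),
        });
      }
    }
    ```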

    Why two stages beats one

    The Notion foundry catches problems that WordPress drafts can’t catch. Cross-article duplication, voice drift across the corpus, contradictory claims between articles, persona variant gaps. These show up only when you can see and query the whole corpus at once. WordPress drafts are isolated posts.
    The WordPress stage catches problems Notion can’t catch. Schema validation, real-time link resolution against the live site, image rendering, actual SEO behavior against your indexed pages.
    Each stage covers what the other can’t.

    Where this goes wrong

    1. Skipping the Notion foundry to save time. The foundry is the unique value. Skipping it just publishes a mediocre corpus faster.
    2. Trying to do the WP-only work in Notion. Schema, image optimization, internal links — these belong in WP. Don’t duplicate.
    3. Manual handoff between stages. Build the Notion-to-WP push as automation. Manual copy-paste loses fidelity.

    What to read next

    Editorial Surface Area, Notion AI for Content Teams, Gates Before Volume, From Drafts to Publish in Strategy.