Tag: Notion AI

  • Notion Isn’t the Everything App. It’s the Everything Database — and That’s a Better Bet.

    Everyone is building the everything app. Microsoft wants to be yours. Google wants to be yours. Notion wants to be yours. But there’s a fourth path nobody is talking about — and it might be the smartest play for brands, agencies, and multi-system operators: don’t pick one everything app. Build one everything database, and let it feed all of them.

    The Core Idea

    Notion isn’t competing to be your everything app. It’s competing to be your everything database — the structured, queryable, agent-ready source of truth that sits underneath whatever surface you use. The everything app becomes interchangeable. The database is the moat.

    The Series So Far — and Why This Frame Changes Everything

    This is the fourth piece in a series examining who wins the everything-app race. We looked at Microsoft stitching together an everything app through acquisitions, Google trying to unify a native stack it keeps fragmenting, and Notion building from the database up. Each piece treated the everything app as the destination.

    But there’s a reframe worth making. What if the everything app isn’t the destination? What if the destination is the data layer underneath it — and the everything app is just whichever surface happens to be most useful at a given moment?

    That’s the angle that emerged from actually building inside the Notion Workers alpha. And it changes the strategic calculus significantly for anyone running a brand, an agency, or a multi-system operation.

    Your Brand Doesn’t Need One Everything App. It Needs One Everything Database.

    Think about what an everything app actually requires to work. It needs to know your tasks. Your projects. Your contacts. Your content calendar. Your pipeline. Your team’s status. Your historical decisions. Your brand voice. Your client relationships. Your automation outputs.

    That’s not an app problem. That’s a data structure problem. And the company that solves the data structure problem — that gives you a clean, typed, queryable, agent-ready home for all of that — wins, regardless of which surface you use to view it.

    Notion’s database architecture is purpose-built for exactly this. Every property is typed. Every row is queryable. Every database can be filtered, sorted, related, and rolled up. When you build your brand’s operational data inside Notion — tasks with statuses, projects with owners, content with metadata, contacts with relationship history — you’re not just organizing. You’re building a structured intelligence layer that agents can read, write, and reason over reliably.

    That database doesn’t care which “everything app” sits on top of it. Microsoft Copilot can query it. Google Workspace agents can sync from it. Your own custom dashboard can read it via the Public API. Claude can operate on it directly. The surface is interchangeable. The database is the thing that compounds in value over time.

    The 30-Second Trigger: Where the Architecture Gets Interesting

    Here’s the piece that came out of our own Workers alpha experience — and it reframes the “30-second sandbox limitation” from a constraint into a feature.

    Notion Workers runs in a 30-second execution window. We hit that wall hard when we tried to move heavy automations — multi-site WordPress optimization passes, content pipelines, image generation — into Workers. Those are multi-minute jobs. They don’t fit.

    But 30 seconds is more than enough to do one specific thing: fire a signed HTTP POST to an external endpoint and return.

    That’s the architectural insight. You don’t use Notion Workers to execute heavy work. You use Notion Workers to trigger it. The Worker wakes up — on a schedule, on a database change, on a webhook — reads the relevant Notion database row, constructs a signed payload, fires a POST to a Google Cloud Run job, and exits. The whole round trip takes a few seconds. Well within the 30-second window.

    Cloud Run picks up the job, runs for as long as it needs — minutes, not seconds — and when it’s done, it writes the results back to the Notion database via the Public API. The Notion database is now the job queue, the status tracker, the results store, and the orchestration log. All in one place. All queryable by agents.

    The pattern in practice:

    Notion Worker (cron / DB change / webhook)
      → reads Notion database row for job config
      → signs POST to Cloud Run endpoint
      → returns immediately (3–8 seconds, well under 30s)

    Cloud Run (no time limit)
      → runs heavy job (WP optimization, pipeline, image gen)
      → writes status + results back to Notion DB via Public API

    Notion Database
      → job queue / status tracker / results store / audit log
      → queryable by agents, visible to team, triggerable again
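    In code, the Worker side of that diagram is small. The sketch below is illustrative TypeScript, not Notion's actual Workers API — the runtime's real entry-point signature, environment access, and allowlist mechanics are alpha details the article doesn't specify. The `CLOUD_RUN_URL` and `TRIGGER_SECRET` environment names, and the `JobConfig` fields, are placeholders. What it shows is the whole job of the trigger layer: sign the payload, fire the POST, return.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical job config, as it might be read from a Notion database row.
interface JobConfig {
  jobId: string;
  task: string;        // e.g. "wp-seo-pass"
  targetSites: number;
}

// HMAC-SHA256 over the serialized payload, hex-encoded.
// The Cloud Run endpoint recomputes this with the same shared secret.
export function signPayload(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Fire-and-return: the Worker's entire execution is this one POST.
export async function triggerCloudRun(job: JobConfig): Promise<number> {
  const body = JSON.stringify(job);
  const res = await fetch(process.env.CLOUD_RUN_URL!, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-signature": signPayload(body, process.env.TRIGGER_SECRET!),
    },
    body,
  });
  return res.status; // the Worker exits here; Cloud Run does the heavy work
}
```

    Nothing in this path is compute-heavy — a database read, a hash, an HTTP request — which is why it fits the 30-second window with room to spare.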

    This is the hybrid architecture we’re running. Our Tuesday 18-site WordPress SEO optimization pass runs on Cloud Run — not because Notion can’t orchestrate it, but because Notion does orchestrate it, as the database layer, while Cloud Run handles the execution. The Worker is the tickle. Cloud Run is the muscle. Notion is the brain that remembers everything.
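    The write-back leg — Cloud Run updating the job row when it finishes — goes through the Notion Public API's pages.update endpoint. The request shape below matches that API; the "Status" and "Result" property names are assumptions about how the jobs database is modeled, and `NOTION_TOKEN` is a placeholder for an integration token.

```typescript
// Body for a Notion pages.update request. "Status" (a status property)
// and "Result" (a url property) are assumed names on the jobs database.
export function buildStatusUpdate(status: string, resultUrl: string) {
  return {
    properties: {
      Status: { status: { name: status } },
      Result: { url: resultUrl },
    },
  };
}

// PATCH the job row when the heavy work finishes, closing the loop:
// the same database row that triggered the job now records its outcome.
export async function writeBackToNotion(
  pageId: string,
  status: string,
  resultUrl: string,
): Promise<void> {
  const res = await fetch(`https://api.notion.com/v1/pages/${pageId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildStatusUpdate(status, resultUrl)),
  });
  if (!res.ok) throw new Error(`Notion write-back failed: ${res.status}`);
}
```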

    What “Brand Everything Database” Actually Means in Practice

    If you’re an agency, a media operation, or a multi-brand operator, here’s the concrete version of this architecture:

    • One Notion workspace as the brand OS. Every client, project, task, content piece, automation job, and decision lives as structured database rows. Not documents. Not folders. Typed, relational data.
    • Agents inside Notion prep the data. Custom agents compile status updates, flag stale work, surface blockers, build briefings — all operating on the Notion database directly. The “everything” data is always clean and current because agents are maintaining it continuously.
    • Workers trigger external execution. When a job needs more than 30 seconds — content pipelines, SEO runs, bulk operations — a Worker fires the trigger. Cloud Run executes. Results come back into Notion. The database stays the source of truth.
    • Any surface can consume it. A Copilot user can query the project database through Microsoft Graph connectors. A Google Workspace user can sync from Notion via the connector ecosystem. A custom dashboard can read the Notion API. The front end doesn’t matter. The database is always current.
    • External agents get full context. Through the External Agents API, Claude, Codex, or any agent you build can operate against your Notion databases with complete organizational context — not a generic AI, but one that knows your specific data, your specific projects, your specific brand.
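    To make the "any surface can consume it" point concrete: a custom dashboard reading the Notion API mostly consists of flattening page objects into plain records. The property shapes below (title, status, people) follow the Notion API's response format; the "Name", "Status", and "Owner" property names are assumptions about the workspace.

```typescript
// Minimal slice of a Notion API page object a dashboard might consume.
// "Name", "Status", and "Owner" are assumed property names.
interface NotionPage {
  id: string;
  properties: {
    Name: { title: Array<{ plain_text: string }> };
    Status: { status: { name: string } | null };
    Owner: { people: Array<{ name?: string }> };
  };
}

interface DashboardRow {
  id: string;
  name: string;
  status: string;
  owner: string;
}

// Flatten a typed Notion row into something any front end can render.
export function toDashboardRow(page: NotionPage): DashboardRow {
  return {
    id: page.id,
    name: page.properties.Name.title.map((t) => t.plain_text).join(""),
    status: page.properties.Status.status?.name ?? "No status",
    owner: page.properties.Owner.people[0]?.name ?? "Unassigned",
  };
}
```

    The front end doing the rendering is irrelevant to this code — which is the point of the interchangeable-surface argument.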

    Why This Beats Betting on One Everything App

    The everything-app race has a winner-take-all framing that may be wrong. Here’s what we’ve observed from operating across Microsoft, Google, and Notion simultaneously:

    Different team members live in different surfaces. Your developer lives in GitHub and a terminal. Your account manager lives in Gmail. Your ops lead lives in a spreadsheet. Your creative lead lives in Figma. Forcing everyone onto one everything app means fighting human behavior, not working with it.

    But if everyone’s work — regardless of where they do it — writes back into a shared Notion database? The everything app problem disappears. You don’t need everyone in the same surface. You need everyone’s data in the same structure.

    That’s what Notion’s connector ecosystem is actually building toward. GitHub syncs into Notion. Jira syncs into Notion. Salesforce syncs into Notion. Slack syncs into Notion. The surface stays wherever it is. The intelligence layer centralizes.

    The Compounding Advantage

    Here’s the strategic reason this matters beyond the technical architecture: databases compound. Documents don’t.

    A Google Doc from two years ago is mostly dead weight — hard to search, hard to query, impossible for an agent to reason over reliably. A Notion database from two years ago is a living asset. Every row is still queryable. Every relationship still works. The history of every project, every decision, every outcome is structured data that an agent can analyze, pattern-match against, and use to inform current work.

    The longer you run your brand’s operations through a Notion database, the smarter your agents get — because they have more structured history to work with. That’s not true of any document-first system. And it’s not something you can easily replicate once a competitor has two years of structured operational data and you’re starting from scratch.

    The everything app you pick in 2026 matters less than the data structure you commit to in 2026. Pick the wrong everything app and you switch in 18 months. Pick the wrong data structure and you’re rebuilding from zero.

    The Practical Starting Point

    If this architecture makes sense for your operation, here’s how to think about the starting point:

    • Audit what data your business actually runs on. Tasks, projects, clients, content, pipelines, automations — map out what you’re currently tracking and where. How much of it is in documents? How much is in structured databases?
    • Pick the three databases that matter most and build them right in Notion. Don’t try to migrate everything at once. Start with your project tracker, your content calendar, and your client/contact database. Get those typed, relational, and agent-ready.
    • Connect one external source via Workers or the connector ecosystem. Slack, GitHub, Jira — pick the one that generates the most signal for your operation and get it syncing into Notion.
    • Build one Custom Agent that works on those databases. A status compiler, a blocker detector, a briefing builder — something that demonstrates the database-first advantage concretely to your team.
    • Then consider the trigger pattern. What jobs in your operation take longer than 30 seconds but could be triggered from a database change? Those are your first Cloud Run candidates, with Notion as the orchestration layer.

    The everything app race is real. But the more durable competitive advantage is the data structure underneath it. Build the database right, and the everything app becomes a detail.

    Frequently Asked Questions

    What is a “brand everything database” in Notion?

    A brand everything database is a Notion workspace architected as the structured, queryable source of truth for all of a brand’s operational data — tasks, projects, content, clients, automations, decisions. Unlike document-based systems, every piece of information is typed, relational, and agent-readable. External tools sync into it; agents operate on it; any surface can consume it via the Public API.

    How do Notion Workers act as triggers for Google Cloud Run?

    Notion Workers run in a 30-second sandbox — enough time to read a Notion database row, construct a signed payload, and fire an HTTP POST to a Cloud Run endpoint. The Worker returns immediately; Cloud Run handles the long-running execution (minutes, not seconds) and writes results back to the Notion database via the Public API. This makes Notion the orchestration and visibility layer without hitting the sandbox time limit.

    Why is a database-first architecture better than document-first for AI agents?

    Documents require AI to infer structure from prose — an error-prone process that degrades at scale. Database rows are typed, structured, and directly queryable. An agent asking “which projects are blocked this week?” gets an exact filter result from a Notion database in milliseconds; the same question against a folder of Google Docs produces a best-effort summary. Reliability and precision are the key differences.

    Can Notion databases feed Microsoft Copilot or Google Workspace agents?

    Yes, via connectors and the Notion Public API. Microsoft Graph connectors and Google Workspace connectors can sync from Notion databases. Custom agents built on the External Agents API can also read and write Notion data from any external platform. The Notion database becomes the shared source of truth regardless of which AI surface your team prefers.

    What’s the best first step to building a brand everything database in Notion?

    Start with three core databases: a project tracker, a content calendar, and a client/contact database. Get them typed with proper properties, linked relationally, and cleaned up. Then build one Custom Agent that operates on those databases — a status compiler or briefing builder. Once you’ve seen the database-first advantage in action, the architecture for connecting external tools and Cloud Run triggers becomes obvious.

  • Notion’s Database-First Bet: Why the Everything App Might Be Built on a Spreadsheet, Not a Document

    Microsoft is stitching together an everything app from acquisitions. Google is trying to unify a native stack it keeps fragmenting. Notion is doing something different — and arguably more interesting. It’s building the everything app from the database up, and it just made its most important move yet.

    Definition: The Database-First Everything App

    An AI-powered workspace where every piece of information — tasks, projects, docs, contacts, data — lives in a structured, queryable database, and agents can read, write, reason over, and act on that data autonomously. The database isn’t the backend. It’s the interface.

    Yesterday Changed Everything for Notion

    On May 13, 2026 — yesterday — Notion shipped version 3.5 and announced its full Developer Platform in a livestreamed product event. The tech press covered it as an AI agent story. They weren’t wrong, but they missed the bigger frame.

    Notion didn’t just add agents. They introduced a new primitive called Workers — a hosted runtime for custom code that lets teams extend Notion without running their own servers. Database sync, agent tools, and webhook triggers all run through Workers. They launched the External Agents API, allowing any agent — ones you built, or ones from Claude, Codex, Decagon, and other partners — to work natively inside your Notion workspace. And they opened a developer platform that lets teams connect AI agents, external data sources, and custom code directly into their workspace.

    Taken individually, these are nice product updates. Taken together, they’re an orchestration play. Notion is positioning itself not as a note-taker with AI features bolted on, but as the hub where people, agents, and data collaborate across every tool a team uses.

    The Database Advantage Nobody Else Has

    Here’s the thing that separates Notion from every other everything-app candidate — including Microsoft and Google.

    Both Microsoft 365 and Google Workspace are document-first platforms. Their fundamental unit of work is a file: a Word document, a Google Doc, a PowerPoint, a Sheet. Files are great for humans to read. They’re terrible for AI to reason over at scale. You can’t ask an AI agent to “find every project where the status is blocked and the deadline is this week” across a folder of Word documents and get a reliable answer.

    Notion’s fundamental unit is a database. Every page can be a database row. Every property is structured, queryable, filterable data. When Notion AI looks at your workspace, it doesn’t see a pile of documents — it sees a relational knowledge graph. Tasks have statuses. Projects have owners and deadlines. Contacts have properties. Everything is connected, typed, and queryable.

    That’s not a feature difference. That’s an architectural difference. And it’s why Notion’s agents can do things that Copilot and Gemini agents fundamentally struggle with: operate reliably on your actual organizational data, not summaries of your documents.

    The Agent Timeline: Faster Than Anyone Expected

    Notion’s agent rollout has moved at a pace that’s easy to underestimate if you haven’t been watching closely. Here’s the actual timeline:

    • September 18, 2025 — Notion 3.0: Agents. First AI agents launch. Autonomous data analysis and task automation. The starting gun.
    • January 20, 2026 — Notion 3.2. Mobile AI, new model support, people directory. Agents go everywhere, not just desktop.
    • February 24, 2026 — Notion 3.3: Custom Agents. Users can build their own agents from scratch. Over 21,000 custom agents built in the first free trial period alone. Notion reported 2,800 agents running 24/7 internally at Notion itself.
    • March 2026. Workers introduced in alpha — a TypeScript-based framework for agents to talk to any service with an API. The coding layer for power users.
    • April 14, 2026 — Notion 3.4. Calendar and inbox connectors. Notion AI can now schedule meetings and draft emails from inside your workspace.
    • May 5, 2026. Custom Agent admin controls for enterprise — workspace-level credit limits, governance tools, compliance features.
    • May 13, 2026 — Notion 3.5: Developer Platform. External Agents API, Workers out of alpha, database sync with no servers, full developer ecosystem launched.

    That’s eight months from first agent launch to full developer platform. For context, Microsoft spent years building Azure OpenAI integration before Copilot reached feature parity with what Notion shipped in less than a year.

    What the Notion Everything App Actually Looks Like Today

    This isn’t theoretical. Here’s what a team running on Notion can configure right now:

    • Your project data, always current. Databases synced from Slack, Google Drive, GitHub, Jira, Microsoft Teams, Salesforce, and Box — all flowing into Notion databases in real time, powered by Workers. No manual updates. No stale spreadsheets.
    • Agents watching your work. Custom agents triggered by database changes, schedules, or webhooks — compiling status updates, flagging blocked tasks, escalating overdue items, answering team FAQs.
    • Your inbox and calendar inside your workspace. Connect Gmail or Outlook and your calendar; Notion AI can schedule meetings and draft emails without leaving the tool your work already lives in.
    • External agents working in your context. Claude, Codex, Decagon, or agents you’ve built yourself via the External Agents API, all operating against your Notion databases with full context. Not generic AI. AI that knows your specific data.
    • Plan Mode for complex operations. Before an agent makes large changes to your databases or pages, it stops, asks clarifying questions, and builds a plan for your approval. This is the governance layer that makes AI trustworthy in a business context.
    • Your institutional knowledge, always accessible. Every decision, every project history, every team document — structured and queryable by agents that can synthesize across your entire knowledge base on demand.

    The Model Behind It: Claude Opus 4.7

    Unlike Microsoft (Copilot runs on GPT-4o and Azure OpenAI) and Google (Gemini family), Notion is built on Anthropic’s Claude. As of the January 2026 update, Notion runs Claude Opus 4.7 — Anthropic’s most capable model at the time of release — for its AI features and agent reasoning.

    This is a strategic choice worth examining. Claude is specifically designed with a focus on reliability, honesty, and safe behavior in agentic contexts — qualities that matter enormously when an AI agent has write access to your company’s databases. Anthropic’s Constitutional AI training approach was built for exactly the kind of autonomous, long-running agent work that Notion is deploying.

    The Notion + Claude combination isn’t just a vendor relationship. It’s an architectural alignment: a database-first workspace built on a model specifically designed for trustworthy agentic behavior. That’s a more coherent stack than either Microsoft or Google has assembled, where the AI model and the productivity platform were developed independently and integrated later.

    Why “Database First” Beats “Document First” for the Everything App

    Let’s make this concrete with a comparison most teams will recognize.

    Ask Microsoft Copilot: “Which of our client projects are behind schedule this quarter?” Copilot will search your emails, scan your SharePoint documents, and produce a reasonable summary — but it’s reading prose, inferring structure, and hoping the documents are up to date. The answer is a best-effort synthesis, not a query result.

    Ask a Notion agent the same question: it runs a database filter. Status = Behind. Quarter = Q2 2026. It returns an exact list in under a second, with links to every project, the responsible person, and the last update — because that data is structured. The agent didn’t infer anything. It read typed data.

    That’s the difference between AI that helps you find things and AI that actually knows things. Notion’s database architecture is what makes the second kind possible at scale, without hallucination, without retrieval errors, without the AI making up a project that doesn’t exist.
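    The "exact filter result" above maps directly onto the Notion API's databases.query endpoint. The filter shape below follows that API; the "Status" (status property) and "Quarter" (select property) names are assumptions about how the projects database is modeled, and `NOTION_TOKEN` is a placeholder for an integration token.

```typescript
// The "which projects are behind this quarter?" question, expressed as a
// Notion databases.query filter rather than a prose prompt.
export function behindThisQuarterFilter(quarter: string) {
  return {
    filter: {
      and: [
        { property: "Status", status: { equals: "Behind" } },
        { property: "Quarter", select: { equals: quarter } },
      ],
    },
  };
}

// POST the filter to the Public API; every hit is a typed row, not a guess.
export async function queryBehindProjects(databaseId: string) {
  const res = await fetch(
    `https://api.notion.com/v1/databases/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
        "Notion-Version": "2022-06-28",
        "content-type": "application/json",
      },
      body: JSON.stringify(behindThisQuarterFilter("Q2 2026")),
    },
  );
  return (await res.json()).results; // exact matches only
}
```

    An agent running this query can't hallucinate a project into the results: either a row matches the filter or it doesn't.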

    The Honest Weakness: The 30-Second Wall

    Here’s what you only learn by actually building inside the alpha — and we did.

    Notion Workers runs in a 30-second sandbox with 128MB of memory. Each Worker is created through the Notion control panel, taking 3–5 minutes to spin up. The network is limited to an approved domain allowlist. Storage is ephemeral — nothing persists between runs. These aren’t theoretical constraints. They’re the real walls you hit when you try to move serious automation workloads into Notion.

    We were in the Workers alpha. We built Workers. We set up custom agents. And we stress-tested the sandbox deliberately — forcing failures to find the exact break points, then running production workloads at 60% of the known ceiling as a stability rule. That’s the only honest way to operate inside a system this constrained: know where it breaks before you depend on it.

    What we found changed our architecture. Heavy automations — multi-site WordPress SEO optimization passes across 18 sites, content pipelines, image generation, WP-CLI batch operations — couldn’t live inside Notion Workers. They’re multi-minute jobs, not 30-second jobs. Moving them to Notion would have meant engineering workarounds that added complexity without adding reliability.

    So instead of moving Cowork automations into Notion as we originally planned, we moved them to Google Cloud Run. The notion-deep-extractor (crawls the workspace, extracts structured knowledge, logs to the Second Brain database — runs 3x daily) and the notion-maintenance bundle (archive sweeper, stale work detector, content guardian — runs daily at 6am UTC) all live on Cloud Run now, with Cowork scheduled tasks paused. The 18-site WordPress optimizer running Tuesday? Cloud Run. Not Notion.

    This isn’t a knock on Notion. It’s an architectural reality that every builder needs to understand before they commit workloads. The right pattern — the one we’re now using and that Notion’s own documentation points toward — is Notion Workers as the trigger layer, Cloud Run as the execution layer. A Worker fires a signed POST to a Cloud Run endpoint, returns immediately (well under 30 seconds), Cloud Run runs the heavy job, then writes results back to a Notion database via the Public API. You get Notion as the orchestration and visibility layer without hitting the sandbox wall.
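    On the receiving end, verifying that signed POST before running anything is a few lines. This is a generic shared-secret sketch using Node's crypto module, not anything Notion or Cloud Run ships; how the secret reaches both sides is assumed to be handled by environment configuration.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Same HMAC-SHA256 the Worker computes over the request body.
export function sign(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Constant-time comparison so the endpoint doesn't leak signature prefixes.
// Reject before doing any work if the signature doesn't match.
export function verify(body: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(body, secret), "hex");
  const given = Buffer.from(signature, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

    Without this check, the Cloud Run endpoint is an open trigger for multi-minute jobs — the signature is what makes "Notion orchestrates, Cloud Run executes" safe to expose.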

    That hybrid is genuinely powerful. But it requires infrastructure that most small teams don’t have. If you don’t have a Cloud Run setup, a service account, and the deployment knowledge to wire this together, the 30-second limit will stop you cold on anything more complex than a lightweight API call or a database update.

    Notion doesn’t own email. It connects to Gmail and Outlook. It doesn’t own a calendar — it integrates with yours. It doesn’t have a mobile OS or browser. Those gaps matter less than the sandbox constraint does for real production workloads. The everything app story is real — but the execution layer has hard limits that require a hybrid architecture to work around, at least until Workers matures beyond its current beta constraints.

    Who Should Be Paying Attention Right Now

    If you’re an agency, a service business, a content operation, or any knowledge-work team that already uses Notion — or has been considering it — the May 13 Developer Platform announcement changes your calculus significantly.

    Custom Agents are available as an add-on for Business and Enterprise plans. Workers are free during the current beta period (billing starts August 11, 2026). The External Agents API is open now. This is the window to build before your competitors do.

    The teams that spend the next 90 days wiring up their Notion databases, building their first custom agents, and connecting their external data sources will have a compounding advantage that’s very hard to replicate in 2027. The institutional knowledge that feeds these agents — the project histories, the SOPs, the client databases — takes time to build. Starting now is the only strategy that works.

    The Bigger Picture: A Series on Who Wins the Everything App

    This is the third article in an emerging series working through a question I’ve been thinking about: who actually builds the everything app, and what does each path look like?

    Microsoft is building it through acquisitions and Copilot, stitching together LinkedIn, Azure, and the M365 suite. Google already owns the native stack — Gmail, Drive, Search, Android — and is trying to unify it through Gemini Enterprise and Workspace Studio after years of product fragmentation. Notion is building it from the database up, betting that structured data plus open agents beats document-first platforms with AI bolted on.

    None of them has won yet. All three bets are live. The winner won’t be the company with the most features — it’ll be the one that earns enough trust to become the single place where your work actually lives.

    Notion’s database-first architecture is the most interesting bet of the three. It’s also the most fragile — dependent on integrations, constrained by not owning the OS or the inbox, limited by whatever Anthropic does with Claude pricing and capabilities. But if it works, it works in a way the others can’t easily copy. You can’t retrofit a database architecture onto a document platform. You have to start over.

    Microsoft and Google aren’t starting over. Notion never had to.

    Frequently Asked Questions

    What are Notion Custom Agents?

    Notion Custom Agents are AI teammates that handle repetitive tasks autonomously — answering FAQs, compiling status updates, automating workflows — triggered by schedules, database changes, or webhooks. They launched in February 2026 (Notion 3.3) and are available as an add-on for Business and Enterprise plans. Over 21,000 were built during the free trial period alone.

    What is Notion Workers?

    Notion Workers is a hosted cloud runtime for custom TypeScript code, introduced in alpha in March 2026 and fully launched with the Developer Platform on May 13, 2026. It powers database sync, agent tools, and webhook triggers — letting teams extend Notion to connect any service with an API, without running their own servers. Workers are free during the beta period through August 10, 2026.

    What AI model does Notion use?

    Notion runs on Anthropic’s Claude — specifically Claude Opus 4.7 as of the January 2026 update. This is different from Microsoft Copilot (which uses OpenAI’s GPT models) and Google Workspace (which uses the Gemini family). Notion’s choice of Claude reflects an emphasis on reliable, safe agentic behavior for workflows that have write access to business databases.

    What is the Notion External Agents API?

    The External Agents API, launched with Notion 3.5 on May 13, 2026, lets teams bring any AI agent — including ones built internally or from partners like Claude, Codex, and Decagon — directly into their Notion workspace. These external agents can read and write to Notion databases with full context about the team’s data.

    How is Notion different from Microsoft Copilot and Google Workspace AI?

    Notion is database-first. Every piece of information in Notion is structured, typed, and queryable data — not documents. This means Notion agents can run precise database queries against your actual organizational data rather than inferring structure from prose documents. For teams that need AI to reliably operate on business data (not just search and summarize), this architectural difference is significant.

    What are the real limitations of Notion Workers in the alpha?

    Notion Workers runs in a 30-second sandbox with 128MB of memory and ephemeral storage. Network access is limited to an approved domain allowlist. Workers are created via the Notion control panel (3–5 minutes each). Long-running jobs — content pipelines, multi-site operations, image generation — won’t fit. The recommended pattern for serious workloads is Notion Workers as the trigger layer firing a signed POST to an external execution environment (like Google Cloud Run), with results written back to Notion databases via the Public API.