Tag: Everything App

  • Elon Musk Isn’t Building the Everything App—He’s Building the Everything App’s Power Grid

    The Pivot in One Sentence
    xAI has merged into SpaceX and leased its Colossus 1 supercluster—220,000 NVIDIA GPUs, 300 megawatts of power—entirely to Anthropic, while simultaneously targeting 2 gigawatts of total capacity at Memphis. Elon Musk is no longer primarily trying to win the AI model race. He’s becoming the AI industry’s infrastructure landlord.

    Earlier in this series, we asked whether Grok and xAI were building the everything app through X—the social-financial superapp thesis. The answer we arrived at was: maybe, but with real limitations on the model quality and consumer trust needed to pull it off.

    Then something happened that reframed the entire question. In early May 2026, xAI merged into SpaceX. Days later, Anthropic—one of xAI’s most direct AI competitors—announced it was renting the entire compute capacity of Colossus 1. All 220,000 GPUs. All 300 megawatts. For Claude. For a reported $3 to $6 billion per year.

    Musk’s comment when asked about leasing infrastructure to a competitor: “No one set off my evil detector.”

    That’s the tell. When you’re building the everything app, you don’t rent your most powerful asset to your rivals. You use it. The fact that Musk is doing exactly that reveals a strategic logic that the Grok-as-everything-app frame completely misses.

    The pivot isn’t from everything app to compute landlord. It’s the recognition that owning the power grid is more valuable than owning any single app that runs on it.

    What Colossus Actually Is

    Colossus is not a single data center. It’s a multi-building supercomputing complex in Memphis, Tennessee—and it is currently the largest single-site AI training installation in the world.

    Colossus 1, the original facility, holds H100, H200, and GB200 accelerators across more than 220,000 GPU units. That is the cluster Anthropic is now renting entirely.

    Colossus 2, the expansion xAI is keeping for its own Grok development, has already grown to 555,000 NVIDIA GPUs, backed by approximately $18 billion in hardware investment, with a 2-gigawatt power target set in January 2026 alongside the purchase of a third Memphis building. Musk’s stated goal: one million GPUs at the Memphis complex, and more AI compute than every other company combined within five years.

    As a point of reference: most frontier AI labs operate training clusters in the tens of thousands of GPUs. Microsoft’s Azure AI infrastructure, the largest hyperscaler allocation for AI, operates in the hundreds of thousands across distributed global regions. Colossus at 555,000+ GPUs in a single complex is a different category of infrastructure entirely.

    And Musk has publicly noted that xAI is only using about 11% of its available compute for Grok. The rest is—in his framing—available. Available to sell. Available to rent. Available to become the compute backbone of the AI industry whether xAI wins the model race or not.

    The xAI-SpaceX Merger: What It Actually Means

    The May 2026 merger of xAI into SpaceX as an independent entity is more than an org-chart change. It’s a strategy reveal.

    SpaceX has three things xAI needs at scale: capital (SpaceX generates billions in launch revenue annually), real estate and construction expertise (SpaceX builds rockets and factories at speed), and most critically, rockets. Starship can put mass into orbit economically in a way no other launch vehicle can. SpaceX already operates a Starlink constellation of thousands of satellites. The infrastructure to extend that into orbital data centers is not theoretical.

    Anthropic’s announcement noted not just the Colossus 1 ground lease—it also expressed interest in working with SpaceX to develop multiple gigawatts of compute capacity in space. Orbital data centers. Satellite-delivered AI compute. The kind of infrastructure that can serve, at consistent low latency, any application that needs compute without a physical data center address.

    Musk has discussed launching a million data-center satellites as a longer-term infrastructure play. That number sounds unreasonable until you consider that SpaceX already operates over 7,000 Starlink satellites and is building Starship specifically for high-volume orbital delivery. The orbital compute thesis isn’t science fiction for SpaceX. It’s a product roadmap.

    What the xAI-SpaceX merger does is remove the pretense that these are separate businesses. They’re one integrated infrastructure play: ground-based GPU superclusters plus orbital compute capacity, connected by the world’s only commercially viable heavy-lift reusable rocket.

    The Anthropic Deal: A Strategic Reading

    Let’s be specific about what this deal represents for both sides.

    For Anthropic, the deal addresses an acute bottleneck. Anthropic’s annualized revenue grew from roughly $9 billion at the end of 2025 to approximately $30 billion by early April 2026, more than tripling in a single quarter. Claude Pro and Claude Max subscriber growth is outpacing Anthropic’s ability to provision compute. Renting Colossus 1 immediately unlocks 300 megawatts of capacity that would take 18 to 24 months to build from scratch. For Anthropic, this is an emergency compute solution with strategic upside.

    For xAI, the deal is more nuanced. Colossus 1 was already built and operational. xAI is keeping Colossus 2 for Grok development. Renting Colossus 1 generates—depending on which analyst estimate you use—between $3 billion and $6 billion annually in revenue while the asset runs at capacity rather than sitting idle. That revenue funds Colossus 2 expansion, Colossus 3, and whatever comes next. The compute landlord model is self-funding.

    The strategic implication: xAI doesn’t need Grok to win the model race for this business model to work. If Claude dominates, Anthropic needs more compute and pays xAI for it. If GPT dominates, OpenAI and its partners need more compute. If Gemini dominates, Google builds its own, but every smaller lab comes to whoever has available capacity. xAI wins in every scenario except the one where everyone else simultaneously builds their own supercomputing megacomplexes—which requires the capital and construction expertise that most AI labs don’t have.

    The Grok Situation: Honest Assessment

    The Anthropic deal does raise real questions about Grok’s trajectory. Grok app downloads have reportedly declined significantly in 2026 as ChatGPT and Claude have gained consumer mindshare. In April 2026, Elon Musk testified in the ongoing OpenAI litigation that xAI trained Grok on OpenAI model outputs—a revelation that raised questions about Grok’s training methodology and original capability claims.

    If xAI is using only 11% of its compute for Grok and is renting the rest to a competitor, the implicit message is that xAI is not currently running a max-effort campaign to win the frontier model race. It’s building infrastructure and waiting—or pivoting to a business model where the model race outcome matters less.

    This is not necessarily a failure. It may be a more durable strategy. The history of technology infrastructure is full of examples where the company that built the picks and shovels during a gold rush outlasted the miners. AWS didn’t win by building the best e-commerce site. It built the infrastructure that every e-commerce site ran on. The question is whether xAI’s compute infrastructure can fill that role for AI—and the Anthropic deal is the first real evidence that the answer might be yes.

    The “Everything App Ability” Thesis

    Here’s the reframe that this pivot suggests: maybe the right question isn’t which company will build the everything app. Maybe the right question is which company will own the infrastructure that makes the everything app possible for everyone else.

    Every company in this series—Microsoft, Google, Notion, OpenAI, Perplexity, Mistral, Zapier—needs compute. Massive, reliable, cost-effective GPU compute. The frontier model companies are burning through capital building their own clusters because the alternative is depending on hyperscalers (AWS, Azure, GCP) that charge premium rates and may eventually compete directly.

    xAI with Colossus is offering a third option: AI-native compute infrastructure, built by a company that doesn’t directly compete on most application layers, at a scale that’s difficult to replicate, at a location (Memphis) with power grid access that many coastal data center markets can’t match.

    If you’re building the everything app and you need the compute to run it—Colossus may become the place you go when AWS is too slow, Google is a competitor, and building from scratch takes two years you don’t have.

    That’s not the everything app. That’s the everything app’s power grid. And historically, the entity that owns the power grid captures durable, compounding value regardless of which specific applications win the consumer layer.

    Space: The Long Game

    The orbital compute angle deserves more than a footnote because it’s where this thesis could either collapse into fantasy or become genuinely transformative.

    The practical case for orbital data centers is latency equalization: compute in low Earth orbit can serve any point on the Earth’s surface within milliseconds, without the geographic concentration that makes terrestrial data centers vulnerable to regional power outages, natural disasters, or regulatory shutdown. For AI applications that need global deployment at consistent latency—real-time translation, autonomous vehicle coordination, financial systems—orbital compute offers something no ground-based data center geography can.
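    As a sanity check on the latency claim, the light-travel time to a low-Earth-orbit satellite is easy to compute. The altitude below is an assumption (a typical Starlink shell), not a figure from any announcement:

```python
# Back-of-envelope light-travel time to LEO compute.
# 550 km is an assumed altitude, not a quoted figure.
C_KM_PER_S = 299_792        # speed of light in vacuum, km/s
LEO_ALTITUDE_KM = 550       # typical Starlink shell altitude (assumption)

one_way_ms = LEO_ALTITUDE_KM / C_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms  # user directly below, ignoring processing
print(round(round_trip_ms, 2))  # → 3.67
```

    A few milliseconds round trip is the physical floor; real latency adds slant range, routing, and processing, but the “within milliseconds” claim is physically plausible.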

    SpaceX’s Starship dramatically changes the economics of getting mass to orbit. Current launch costs for payloads are measured in thousands of dollars per kilogram. Starship’s target is hundreds of dollars per kilogram—an order-of-magnitude reduction that makes orbital infrastructure financially viable in a way it never was before. The satellite internet analogy is instructive: Starlink was also considered impractical until SpaceX dramatically reduced launch costs, then deployed at a scale that changed the calculus entirely.
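    The order-of-magnitude claim is easy to make concrete. The per-kilogram prices and satellite mass below are illustrative assumptions consistent with the ranges above, not quoted figures:

```python
# Illustrative launch-cost comparison; every figure is an assumption.
PAYLOAD_KG = 10_000                  # one hypothetical compute satellite
LEGACY_USD_PER_KG = 3_000            # "thousands of dollars per kilogram"
STARSHIP_USD_PER_KG = 300            # "hundreds of dollars per kilogram"

legacy_cost = PAYLOAD_KG * LEGACY_USD_PER_KG        # $30,000,000
starship_cost = PAYLOAD_KG * STARSHIP_USD_PER_KG    # $3,000,000
print(legacy_cost // starship_cost)  # → 10
```

    At these assumed rates, the same ten-ton satellite drops from a $30 million launch to a $3 million one, which is the difference between a demo and a constellation.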

    Anthropic’s stated interest in orbital compute capacity with SpaceX isn’t a polite corporate gesture. It’s Anthropic hedging its long-term compute dependency on a technology only SpaceX can currently deliver. If even a fraction of that orbital compute vision materializes, xAI/SpaceX’s infrastructure moat becomes essentially unreplicable by any company that doesn’t own a heavy-lift reusable rocket program.

    What This Means for the Everything App Race

    The xAI infrastructure pivot doesn’t remove Grok and X from the everything app conversation entirely. X still has the distribution, the data firehose, the financial services ambitions, and the brand. Those don’t disappear because Colossus 1 is now running Claude.

    But it does add a second thesis that may ultimately matter more: xAI as the infrastructure layer beneath the entire AI economy. Not the everything app—the everything app’s foundation.

    In the history of platform technology, the company that owns the infrastructure layer almost always captures more durable value than the company that owns any individual application. TCP/IP outlasted every early internet application. AWS became more valuable than most of the businesses it hosts. The cloud didn’t belong to any one software company—it belonged to the infrastructure providers who made software deployment cheap and fast.

    If the AI era follows the same pattern, the question isn’t who builds the best everything app. It’s who builds the infrastructure that makes every everything app possible. And as of May 2026, the most credible answer to that question involves 555,000 GPUs in Memphis, a rocket program that can reach orbit, and a business model that profits whether Grok wins or loses.

    Key Takeaway

    Elon Musk pivoted xAI from model competitor to infrastructure landlord. By merging into SpaceX, leasing Colossus 1 to Anthropic, and targeting 2 gigawatts of Memphis compute capacity plus orbital data centers, xAI is positioning to capture value from the AI economy regardless of which application layer wins—the power grid, not the appliance.

    Related Reading

    This article grew out of our everything app series. If you’re tracking where AI consolidation is heading, the full series maps the competitive landscape from nine angles.

    Frequently Asked Questions About xAI, Colossus, and the Compute Landlord Pivot

    Why did xAI merge into SpaceX?

    xAI merged into SpaceX in May 2026 as an independent entity within the broader Musk enterprise. The merger combines xAI’s AI development capabilities with SpaceX’s capital generation, construction expertise, and—critically—rocket launch capabilities. This integration enables the orbital compute strategy: deploying data center satellites via Starship at dramatically lower cost than any competitor could achieve.

    What is the Anthropic-Colossus deal?

    In May 2026, Anthropic agreed to rent the entire compute capacity of Colossus 1—xAI’s first Memphis supercluster, comprising 220,000+ NVIDIA GPUs and 300 megawatts of power. The deal directly addresses Anthropic’s acute compute shortage during a period of explosive Claude usage growth. Anthropic’s annualized revenue grew from roughly $9 billion at end of 2025 to approximately $30 billion by April 2026. Analysts estimate the deal generates between $3 billion and $6 billion annually for xAI/SpaceX.

    How large is the Colossus supercomputer complex?

    As of early 2026, the Colossus complex in Memphis spans three buildings and targets 2 gigawatts of total power capacity. Colossus 2 (kept by xAI for Grok development) has reached 555,000 NVIDIA GPUs with approximately $18 billion in hardware investment. Long-term targets include one million GPUs at the Memphis site. It is currently the largest single-site AI training installation in the world.

    What are orbital data centers and why does xAI/SpaceX care about them?

    Orbital data centers are computing facilities deployed in low Earth orbit, delivered by rocket. They offer latency equalization (serving any point on Earth within milliseconds), elimination of geographic concentration risk, and compute capacity outside any single regulatory jurisdiction. SpaceX’s Starship reduces launch costs by an order of magnitude compared to existing vehicles, making orbital compute economically viable for the first time. Anthropic’s participation in the deal included expressed interest in developing multiple gigawatts of orbital compute capacity with SpaceX.

    Does the compute landlord strategy mean xAI is giving up on Grok?

    Not necessarily, but the signals are mixed. xAI is reportedly using approximately 11% of its available compute for Grok development—the rest is available to lease. Grok app downloads have declined in 2026, and April 2026 litigation revealed Grok was trained on OpenAI model outputs. The Colossus 1 lease to Anthropic is the clearest evidence that xAI is not running a maximum-effort campaign on frontier model development and is instead diversifying into infrastructure revenue.

    How does the xAI infrastructure play relate to the everything app thesis?

    The xAI pivot suggests a reframe of the everything app question. Rather than competing to be the app users interact with daily, xAI/SpaceX is positioning to own the compute infrastructure that powers any everything app—what we’re calling the “everything app’s power grid.” Historically, infrastructure layer companies (AWS, TCP/IP, electricity grids) capture more durable value than any individual application running on top of them. The Anthropic deal is the first concrete evidence that this model may work at AI scale.

  • Is Zapier Building the Everything App? The Connector That Became an Orchestrator

    What Is Zapier?
    Zapier is a no-code automation platform founded in 2011 that connects over 8,000 apps through a unified workflow engine. Originally built around simple “if this, then that” triggers, Zapier has transformed in 2025–2026 into an AI orchestration platform—adding autonomous agents, multi-model AI routing, natural language workflow building, and an MCP server that exposes its entire integration library to external AI models including Claude.

    Every company in this series has come at the everything app from a position of strength. Microsoft from enterprise software. Google from search. OpenAI from the frontier model. Mistral from sovereignty and open source. But none of them started where Zapier started: already inside your workflows, connected to every tool you use, trusted with the actual operations of your business.

    That’s the sleeper advantage in this race. While everyone else is building toward the everything app from the outside in, Zapier has been inside the everything app since the day you first connected your Gmail to your CRM.

    The question is whether a fifteen-year-old automation company can evolve fast enough to own the AI orchestration layer—or whether it becomes the platform that makes everyone else’s AI more powerful.

    📚 Everything App Series

    This is article 9 in our ongoing series examining which AI companies are building the everything app.

    The Transformation: From Connector to Orchestrator

    For most of its first decade, Zapier’s value proposition was simple: connect two apps without writing code. You set a trigger (“when I get a new email in Gmail”), define an action (“add a row to my Google Sheet”), and Zapier ran the automation in the background. Powerful, but fundamentally passive. Zapier did what you told it to do.
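    The classic Zap pattern reduces to a trigger function feeding an action function. A minimal sketch, with invented names (this is not Zapier’s actual API):

```python
# Minimal trigger -> action sketch of a classic Zap. All names invented.

def on_new_email(email):
    """Trigger: extract the fields we care about from a new message."""
    return {"from": email["from"], "subject": email["subject"]}

def add_sheet_row(sheet, row):
    """Action: append a row to a spreadsheet stand-in."""
    sheet.append(row)

# One "Zap" run: the trigger fires, the action executes. Zapier runs
# this in the background; the workflow itself never reasons.
sheet = []
incoming = {"from": "lead@example.com", "subject": "Demo request"}
add_sheet_row(sheet, on_new_email(incoming))
```

    The point of the sketch is what’s missing: no decision-making. The trigger always feeds the same action, which is exactly the passivity described above.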

    In 2025, that changed fundamentally. Zapier relaunched its positioning as an AI Orchestration Platform and shipped three products that move it from passive connector to active AI layer:

    Zapier Copilot lets you describe a workflow in plain language and watch Zapier build it. Instead of manually connecting triggers and actions, you say “whenever a new lead comes in from our website form, research them on LinkedIn, score them, and add the qualified ones to our CRM with a draft follow-up email.” Copilot builds the multi-step Zap. This collapses the skill barrier that kept many users on simpler workflows.

    Zapier Agents, launched in January 2025 and reaching general availability in December 2025, are autonomous AI teammates. Unlike Zaps (which follow a fixed sequence), Agents decide how to accomplish a goal. You give an Agent a role (“you are our inbound lead coordinator”), a set of tools from Zapier’s app library, and a goal. The Agent reasons through the task, calls the appropriate tools in whatever order makes sense, handles exceptions, and reports back. In August 2025, Zapier added agent-to-agent orchestration, letting Agents delegate subtasks to specialist Agents—the first multi-agent architecture available to non-developers at scale.

    Zapier Canvas is the visual command center that maps how all of this fits together: your Zaps, Tables, Interfaces, Chatbots, and Agents displayed as a connected system. Canvas makes the invisible visible—you can finally see the full automation architecture of your business and edit it from a single surface.
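    The Zap-versus-Agent distinction above can be sketched in a few lines. Here the “policy” is hard-coded where a real Agent would let an LLM choose the next tool, and every name is invented for illustration:

```python
# Sketch of an Agent: a goal, a toolbox, and a policy that decides
# which tools to call. All tool names and logic are invented.

TOOLS = {
    "research_lead": lambda lead: {**lead, "company_size": 120},
    "score_lead": lambda lead: {
        **lead,
        "score": 0.9 if lead.get("company_size", 0) > 50 else 0.2,
    },
    "add_to_crm": lambda lead: {**lead, "in_crm": True},
}

def run_agent(lead):
    # Stand-in for the LLM reasoning step: enrich, then score.
    for tool in ("research_lead", "score_lead"):
        lead = TOOLS[tool](lead)
    # Conditional branch: only qualified leads reach the CRM.
    if lead["score"] >= 0.5:
        lead = TOOLS["add_to_crm"](lead)
    return lead

result = run_agent({"email": "lead@example.com"})
```

    Unlike the fixed Zap, the Agent’s path through its tools depends on what it finds along the way; that conditional branch is the whole difference.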

    The 8,000-App Moat

    Here’s the number that matters more than any AI feature: 8,000 connected apps.

    Building an AI integration with a single app is straightforward. Building reliable, maintained, authenticated integrations with 8,000 apps—including niche tools that serve specific industries, legacy enterprise software, and the long tail of SaaS that most AI companies ignore—is a fifteen-year infrastructure investment that no new entrant can replicate quickly.

    Every AI model that wants to take actions in the real world faces the same problem: getting access to the apps where work actually happens. OpenAI is building these integrations one by one. Google has its own ecosystem but a limited integration library beyond Workspace. Microsoft covers the Office stack but leaves everything else to third parties.

    Zapier already has the connectors. That means Zapier Agents can operate across your full stack on day one—not the curated stack of apps a closed AI platform supports, but the actual combination of tools your business uses, however idiosyncratic.

    Zapier MCP: The Move That Changes the Competitive Map

    The most strategically significant product Zapier shipped in 2025 wasn’t Agents. It was Zapier MCP.

    Model Context Protocol (MCP) is the emerging standard that lets AI models call external tools. Zapier built an MCP server that exposes its entire integration library—all 8,000+ apps, tens of thousands of actions—to any AI model that speaks MCP. Claude can use it. GPT-4o can use it. Any MCP-compatible AI can use it.
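    MCP is JSON-RPC 2.0 under the hood: a client discovers tools with `tools/list` and invokes one with `tools/call`. The request below shows that shape; the tool name `gmail_send_email` and its arguments are hypothetical, not Zapier’s actual action names:

```python
import json

# Shape of an MCP tools/call request (JSON-RPC 2.0). The tool name
# "gmail_send_email" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "gmail_send_email",
        "arguments": {
            "to": "lead@example.com",
            "subject": "Following up",
            "body": "Thanks for reaching out.",
        },
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

    Any MCP-speaking model emits requests of this shape; the server behind them decides what “send an email” actually does, which is exactly where Zapier’s integration library sits.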

    This is Zapier making a platform bet rather than a product bet. Instead of trying to be the AI model that users talk to, Zapier is becoming the action layer that every AI model reaches into when it needs to do something in the real world. Developers and coding agents plug in through an SDK, AI assistants plug in through MCP, and IT administrators see everything through unified audit logs and governance controls.

    Zapier is an official Anthropic integration partner. When Claude users need their AI to actually send an email, update a CRM record, add a calendar event, or post to Slack—Zapier is the infrastructure doing that work. That’s not a small bet. That’s positioning as the execution layer for the entire AI industry.

    The Financial Position: Profitable, Independent, Patient

    One underappreciated aspect of Zapier’s strategic position is its financial independence. Unlike most AI companies burning through venture capital at extraordinary rates, Zapier has been profitable for years. It has raised minimal external funding—approximately $1.4 million in a 2012 seed round and nothing significant since—and generates its own growth from revenue.

    Revenue reached $310 million in 2024 and is projected to approach $400 million in 2025. The company serves over 100,000 business customers. Its valuation is estimated around $5 billion—modest relative to OpenAI, Anthropic, or Mistral’s recent rounds, but built on actual cash flow rather than projected futures.

    This matters for the everything app question because Zapier is not under pressure to show explosive AI growth to justify a valuation. It can evolve its platform deliberately, double down on enterprise reliability, and build the trust that enterprise automation requires—without the distraction of a fundraising cycle or the fear of running out of runway.

    Zapier’s Approach to Enterprise AI Governance

    One of the signal differences between Zapier’s AI platform and its competitors is the emphasis on controls alongside capability. The February 2026 product updates focused specifically on AI guardrails and governance: who can create agents, what apps agents can access, what actions require human approval, and full audit logs of everything that ran.

    This is the unsexy but critical work of making AI deployable in regulated environments. An autonomous agent that can send emails, update databases, and call external APIs is a significant liability risk without proper governance. Zapier’s enterprise controls—managed credentials, admin dashboards, approval workflows for high-risk actions, comprehensive audit trails—represent years of enterprise trust-building that AI-first startups are only beginning to think about.

    The AI guardrails feature allows administrators to set boundaries on what Agents can do autonomously versus what requires a human in the loop. This isn’t a limitation on Zapier’s AI ambitions—it’s the feature that gets Zapier past the enterprise security review that blocks most AI tools from production deployment.

    The Notion Everything Database Connection

    If you’re using Notion as an everything database—as we explored earlier in this series—Zapier is one of the most powerful connectors in your stack. Zapier’s Notion integration supports triggers on database property changes, creating and updating pages, querying databases, and more. Zapier Agents can use these Notion actions as tools, meaning an Agent can reason about your Notion data, make decisions, and update records—all without you touching a line of code.

    The practical architecture looks like this: your Notion everything database stores structured business context. A Zapier Agent monitors specific triggers (a new record appears, a property changes, a status updates). The Agent pulls relevant context from Notion, reasons over it using its AI model, takes actions across your other connected apps, and writes results back to Notion. The entire workflow runs in the background, governed by your Zapier admin controls, with full audit logs.
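    That loop can be sketched end to end. The function names here are hypothetical stand-ins, not real Zapier or Notion API calls:

```python
# Sketch of the Notion-as-database, Agent-as-worker loop. All names
# are invented stand-ins for the real integrations.

def poll_notion(db):
    """Trigger: records whose status just changed to 'New'."""
    return [r for r in db if r["status"] == "New"]

def process(record):
    # A real Agent would reason over the record and call external
    # tools here; we just mark it handled and write back.
    record["status"] = "Processed"

notion_db = [{"id": 1, "status": "New"}, {"id": 2, "status": "Done"}]
for rec in poll_notion(notion_db):
    process(rec)
```

    Notion stays the system of record; the agent layer only reads, acts, and writes results back, which is why the two products compose rather than compete.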

    For teams building on the Notion everything database model, Zapier isn’t competing with that architecture—it’s the automation and agent layer that makes it operational. You design the data model in Notion; Zapier handles the movement and the intelligence on top of it.

    Where Zapier Falls Short

    Zapier’s everything app candidacy has real limits, and they’re worth naming plainly.

    First, Zapier is a B2B tool that has never built meaningful consumer presence. Everything apps in the historical sense—WeChat, Line, Grab, Gojek—succeed by capturing daily personal habits: messaging, payments, food delivery. Zapier operates in the workflow automation category, which is powerful for businesses but invisible to consumers. There is no obvious path from Zapier’s current position to a consumer everything app.

    Second, Zapier depends on the apps in its library. If OpenAI, Google, or Microsoft decides to deprecate their public APIs or make integration prohibitively expensive, Zapier’s connectors break. The 8,000-app moat is only as strong as those 8,000 companies’ continued willingness to maintain open APIs. As AI platforms consolidate, that willingness may erode.

    Third, Zapier’s AI layer is not a frontier model. Zapier Agents rely on third-party models (primarily OpenAI’s GPT-4o and related models) for their reasoning. This means Zapier’s AI quality ceiling is set by someone else. When OpenAI ships a better model, Zapier Agents get smarter—but so does every OpenAI customer. Zapier cannot differentiate on model quality the way Mistral or OpenAI can.

    Finally, the no-code positioning that made Zapier accessible also limits its ceiling. Complex enterprise workflows—the kind that justify serious AI investment—often require the custom logic, error handling, and integration depth that Zapier’s visual interface makes difficult. Competitors like n8n (open-source), Make (formerly Integromat), and enterprise-focused platforms like MuleSoft are taking direct aim at the workflows Zapier can’t handle.

    The Verdict: The Action Layer, Not the Interface Layer

    Is Zapier building the everything app? Not in the way the term is usually understood. Zapier is not trying to be the app you open every morning, the one that knows your identity, your preferences, and your social graph. It has no interest in capturing your attention or your feed.

    Zapier is building something that might matter more for AI’s actual impact on work: the universal action layer. The layer that every AI model reaches into when it needs to do something that matters. The layer that connects AI reasoning to business reality across the entire software ecosystem—not the 50 apps in one company’s walled garden, but the 8,000 apps that businesses actually use.

    In a world where every AI platform is competing to be your interface, Zapier is quietly becoming the infrastructure that makes any interface actually work. That’s not the everything app thesis. It’s the everything execution thesis. And given that fifteen years of profitable growth and 100,000 enterprise customers are backing it, it may be the most durable bet in this entire series.

    Key Takeaway

    Zapier is not competing to be the everything app. It’s becoming the action layer that makes every everything app actually functional—the 8,000-integration infrastructure that AI models plug into when they need to do real work in real systems.

    What’s Next in This Series

    This article closes the core competitive series on everything app contenders. But the conversation isn’t finished. Two threads we’ve opened in this series deserve their own deep dives: the xAI infrastructure pivot story—whether Elon Musk is quietly turning Colossus and X into the “everything app ability” rather than the everything app itself—and a Track 2 series on how to actually connect each of these platforms to a Notion everything database as your operational backbone.

    If you’ve been following this series from the beginning, you’ve seen the landscape of AI consolidation from nine different angles. The conclusion that keeps emerging: the everything app isn’t a product. It’s a position. And the race to own that position is just getting started.

    Frequently Asked Questions About Zapier and the Everything App

    What is Zapier’s current AI platform called?

    Zapier relaunched in 2025 as an AI Orchestration Platform. The platform includes Zapier Agents (autonomous AI teammates), Zapier Copilot (natural language workflow builder), Zapier Canvas (visual system map), Zapier Tables, Zapier Interfaces, Zapier Chatbots, and Zapier MCP (an integration server for external AI models). The foundational Zaps automation engine remains the core, with these AI products layered on top.

    What is Zapier MCP and why does it matter?

    Zapier MCP is a Model Context Protocol server that exposes Zapier’s entire integration library to external AI models. Any MCP-compatible AI—including Claude, GPT-4o, and others—can use Zapier MCP to take actions across the 8,000+ apps Zapier connects. This makes Zapier the action execution layer for AI systems built by other companies, not just for Zapier’s own agents. Zapier is an official Anthropic integration partner through this mechanism.

    How many apps does Zapier connect?

    As of 2026, Zapier connects over 8,000 apps. This integration library has been built and maintained over fifteen years and represents Zapier’s primary competitive moat. No AI-first entrant has built a comparable breadth of authenticated, maintained app integrations.

    What are Zapier Agents?

    Zapier Agents are autonomous AI teammates that reason about goals rather than following fixed if-then sequences. Launched in January 2025 and reaching general availability in December 2025, Agents can browse the web, read data sources, update CRMs, draft communications, and delegate to other specialist agents through multi-agent orchestration. They’re configured with a role, a set of tool permissions, and a goal—then run autonomously within governance guardrails set by administrators.

    How does Zapier integrate with Notion?

    Zapier’s Notion integration supports database triggers, page creation and updates, and database queries. Zapier Agents can use these as tools in their reasoning loops, enabling autonomous workflows that read from and write to Notion databases. For teams using Notion as an everything database, Zapier provides the automation and agent execution layer that makes that data architecture operational across connected business apps.

    Is Zapier profitable?

    Yes. Zapier has been profitable for years and has raised minimal external funding since a $1.4 million seed round in 2012. Revenue reached $310 million in 2024 with projections near $400 million for 2025. This financial independence distinguishes Zapier from most AI platform companies and gives it patience to evolve its platform without fundraising pressure.

    What are Zapier’s AI governance features?

    Zapier offers enterprise AI governance through managed credentials, admin controls on which users and teams can create or deploy agents, approval workflows for high-risk actions, AI guardrails that bound what agents can do autonomously, and comprehensive audit logs of all agent activity. These controls were prominently featured in the February 2026 product update and represent Zapier’s push to make AI deployment safe for regulated enterprise environments.

    How does Zapier compare to Make (Integromat) and n8n?

    Make and n8n are Zapier’s primary competitors in workflow automation. Make offers more complex branching logic at competitive pricing. n8n is open-source and self-hostable, appealing to developers and privacy-conscious enterprises. Zapier differentiates on breadth of integrations, ease of use for non-technical users, and its newer AI layer (Agents, Copilot, MCP). For enterprises prioritizing AI orchestration with governance controls, Zapier’s platform depth currently leads. For developers wanting maximum flexibility or self-hosting, n8n is the primary alternative.

  • Is Mistral AI Building the Everything App? The Open-Source Path to AI Sovereignty

    What Is Mistral AI?
    Mistral AI is a Paris-based AI company founded in 2023 by former DeepMind and Meta researchers. It builds open-weight large language models—most notably Mistral Large 3, a 675-billion-parameter mixture-of-experts model—and an enterprise AI platform designed around data sovereignty, self-hosting, and zero vendor lock-in.

    Every company in this series has been racing toward the same destination: the everything app. Microsoft wants to embed AI into every workflow via Copilot. Google wants to connect every product through Gemini. OpenAI is building a unified memory layer. Perplexity is replacing the browser. Grok wants to own your social feed and financial life simultaneously.

    Mistral is doing something different. Instead of building an everything app on top of your data, Mistral is handing you the infrastructure to own your own.

    That distinction is not a minor technical footnote. It may be the most important strategic bet in AI right now.

    📚 Everything App Series

    This is article 8 in our ongoing series examining which AI companies are building the everything app.

    The Open-Source Bet: Why It Matters for Everything Apps

    When we talk about everything apps in this series, we’re really talking about platform capture. The company that becomes your everything app owns your data, your workflows, and your switching costs. That’s the game Microsoft, Google, and OpenAI are all playing.

    Mistral is making a different calculation. By releasing its most capable models under the Apache 2.0 open-source license—including Mistral Large 3, currently ranked second on open-source leaderboards—Mistral is saying: the value isn’t in locking you in. It’s in being the model you trust enough to run on your own infrastructure.

    Mistral Large 3, released in December 2025, runs as a mixture-of-experts (MoE) architecture with 675 billion total parameters and 41 billion active parameters at any one time. This design means it achieves frontier-level performance while activating only a fraction of its capacity per inference—making it far more economical to self-host than a dense model of comparable size. It sits behind only GPT-4o and Gemini Ultra on public benchmarks, and it’s the only model at that tier you can legally run yourself without paying per token.

    For enterprises with sensitive data, regulated industries, or simply strong opinions about where their intellectual property lives, this is not a minor feature. It’s the whole product.

    Mistral’s Platform Stack: More Than a Model Provider

    The narrative that Mistral is “just a model company” became outdated in 2025. The company has been quietly building an enterprise AI platform with four deployment modes, an orchestration layer, and proprietary compute infrastructure.

    Mistral AI Studio

    Launched in October 2025, Mistral AI Studio is the company’s full-stack development environment for building AI applications. Developers can fine-tune models, build workflows, deploy APIs, and manage production workloads from a single interface. It positions Mistral as a builder platform, not just a model host.

    Mistral Workflows

    The Workflows orchestration layer allows enterprises to connect Mistral’s models to external tools, APIs, and data sources—creating multi-step AI pipelines that can read from databases, call third-party services, and write outputs back into business systems. This is Mistral’s answer to the agentic layer that OpenAI is building with Operator and that Microsoft is building with Copilot Studio.

    Four Deployment Modes

    Mistral’s enterprise offering comes in four configurations: hosted API (fastest deployment), cloud-on-your-VPC (data stays in your cloud), self-deploy (your own servers, full control), and enterprise self-deploy (airgapped, no external connections). This ladder of data control is deliberate. It lets a startup begin on hosted and migrate to fully isolated infrastructure as compliance requirements grow—without changing the model or the code.

    Voxtral: Audio Enters the Stack

    Released on March 23, 2026, Voxtral extends Mistral’s capabilities into voice and audio. The TTS and transcription models bring Mistral into conversations, customer service, and voice-driven interfaces—adding a dimension that text-only models can’t reach. Combined with the existing vision capabilities in Mistral Small 4, Mistral is quietly assembling a multimodal stack without much fanfare.

    Mistral Compute: Building the Sovereign Cloud

    The biggest signal that Mistral is thinking beyond model provider status is Mistral Compute—the company’s investment in proprietary AI infrastructure.

    In March 2026, Mistral raised $830 million in debt financing specifically to build a Paris data center. The facility will house 18,000 NVIDIA Grace Blackwell chips, powered in part by nuclear energy (France’s grid is approximately 70% nuclear). Mistral has committed to reaching 200 megawatts of compute capacity across Europe by 2027, with additional facilities planned in Sweden.

    Why does this matter for the everything app question? Because infrastructure is leverage. A company that owns its compute can offer pricing, latency, and data residency guarantees that a company renting from AWS or Azure simply cannot match. For European enterprises subject to GDPR, for governments, for defense contractors—those guarantees are the entire product.

    Mistral’s valuation reached $14 billion in April 2026, making it Europe’s most valuable AI company. Revenue has crossed $400 million ARR, with a $1 billion ARR target before the end of 2026. These are not the numbers of a research lab. They are the numbers of a platform company.

    Sovereign AI: The Strategic Frame That Changes Everything

    To understand Mistral’s everything app thesis, you need to understand what “sovereign AI” actually means in practice.

    Every other company in this series is building toward a future where AI capability lives in their cloud, trained on data that flows through their systems. Mistral’s sovereign AI frame inverts this entirely: capability should live in your infrastructure, trained on your data, under your legal jurisdiction.

    This isn’t just marketing. Mistral has built concrete products around this thesis. Mistral Defense is a NATO-approved deployment of Mistral’s models designed specifically for military and intelligence applications that cannot touch commercial cloud infrastructure. Mistral GovCloud provides European governments with models that never leave EU jurisdiction. The Apache 2.0 license on core models means any organization can inspect, audit, and modify the weights—a requirement for many government and critical infrastructure deployments.

    For the everything app question, this creates an entirely different vision: instead of becoming a platform that centralizes your data and workflows, Mistral is offering to become the AI substrate that runs everywhere, including places the American hyperscalers can never reach.

    The Mistral Everything Database Integration

    Earlier in this series, we explored the concept of Notion as an “everything database”—an agnostic data layer that any AI interface can query, write to, and reason over. Mistral’s architecture is unusually well-suited to this model, for one specific reason: self-hosted models can make local API calls.

    When you run GPT-4o or Gemini, your data leaves your infrastructure to reach the model. When you run Mistral Large 3 on your own servers, the model and the data can coexist in the same environment. Your Notion workspace, your CRM, your internal documentation, your proprietary datasets—these can all be connected to a self-hosted Mistral instance without a single byte leaving your network perimeter.

    For teams building on top of a Notion everything database, this means you can configure Mistral Workflows to read from Notion’s API, process that data entirely on-premise, and write structured outputs back to Notion—no external AI provider ever seeing your business intelligence. That’s a capability no hosted-only model can offer, regardless of the provider’s privacy policy.

    The integration pattern looks something like this: Notion stores your structured business data. A Mistral Workflow agent queries the Notion API for relevant context. Mistral Large 3, running on your own infrastructure or in a VPC, processes the query. The output writes back to Notion or triggers downstream actions. The only data that ever touched an external server is the Notion API call itself—and even that can be eliminated if you run Notion on-premise or use a self-hosted Notion alternative.

    The Leanstral Angle: AI That Can Prove Itself

    One of the most underreported developments at Mistral is Leanstral—the company’s work on formal proof engineering with AI. Lean is a theorem proving language used in mathematics and high-assurance software development. Leanstral fine-tunes Mistral models to write and verify formal proofs, which means the model can, in principle, prove that its outputs are correct.

    This matters beyond academic mathematics. Formal verification is the gold standard for safety-critical software—avionics, medical devices, financial systems. If Mistral can extend formal verification capabilities to AI-generated code and reasoning chains, it creates an entirely new category of trustworthy AI deployment in regulated industries. That’s a moat that an open-source API provider simply cannot build, because it requires deep expertise in formal methods, not just scale.

    Where Mistral Falls Short of the Everything App Vision

    Mistral’s open-source, sovereign AI thesis is compelling—but it carries real limitations in the everything app race.

    First, self-hosting requires infrastructure teams. The average knowledge worker or SMB cannot spin up a 675-billion-parameter model on their own servers. Mistral’s vision scales beautifully for enterprises and governments, but it doesn’t have an obvious answer for the consumer market where everything apps like WhatsApp and WeChat have historically dominated.

    Second, the consumer interface layer is underdeveloped. Mistral’s Le Chat assistant is a polished product, but it has not achieved the cultural adoption that ChatGPT or Perplexity has. Building an everything app requires habitual daily use, and habit formation requires network effects that are hard to manufacture from an enterprise-first strategy.

    Third, everything apps historically win by owning a distribution channel: messaging (WeChat), search (Google), email (Gmail). Mistral doesn’t own a consumer distribution channel. It is building infrastructure that sits beneath distribution channels, which is a strong B2B play but a challenging consumer play.

    The irony is that Mistral’s greatest strength—you can run this anywhere, including off the internet—is also what limits its ability to create the sticky, connected, always-on experience that defines an everything app for consumers.

    The Verdict: Infrastructure Layer, Not Interface Layer

    Is Mistral building the everything app? Not in the way Microsoft, Google, or OpenAI are building it. Mistral is building something arguably more important: the AI infrastructure layer that could power any everything app.

    Think of it this way. The companies that built TCP/IP didn’t capture the value of the internet—the companies that built applications on top of TCP/IP did. Mistral’s bet is that open, sovereign AI infrastructure will become the TCP/IP of the AI era: foundational, everywhere, and not owned by any one application layer.

    If that bet lands, Mistral doesn’t need to be your everything app. It needs to be inside every everything app that matters in Europe, in government, in defense, and in any enterprise that takes data sovereignty seriously.

    With a $14 billion valuation, $830 million in new compute infrastructure, NATO-approved deployment, and the only frontier-class model you can legally self-host, Mistral is not playing the same game as its American competitors. It’s playing a longer one.

    The next article in this series looks at Zapier—the workflow automation company now building its own AI layer on top of 8,000+ app integrations. If Mistral is the sovereign infrastructure play, Zapier may be the most quietly dangerous connector play in this entire landscape.

    Key Takeaway

    Mistral is not competing to be your everything app. It’s competing to be the AI layer that runs inside every sovereign, regulated, or privacy-sensitive everything app—the one place American hyperscalers cannot follow.

    Frequently Asked Questions About Mistral AI and the Everything App

    What is Mistral AI’s current flagship model?

    As of mid-2026, Mistral’s flagship is Mistral Large 3, released in December 2025. It uses a mixture-of-experts architecture with 675 billion total parameters (41 billion active per inference) and is released under the Apache 2.0 open-source license. It ranks second on open-source model leaderboards, and on overall public benchmarks it trails only proprietary frontier models.

    How does Mistral differ from OpenAI or Google in its AI strategy?

    Mistral’s core differentiator is data sovereignty and open-source licensing. While OpenAI and Google operate closed, hosted models where your data passes through their infrastructure, Mistral offers self-hosted deployment options where the model runs entirely within your own network perimeter. The Apache 2.0 license means organizations can inspect, modify, and redistribute model weights without licensing restrictions.

    What is Mistral Compute and why is it significant?

    Mistral Compute is the company’s investment in proprietary AI infrastructure. The $830 million debt raise in March 2026 funds a Paris data center with 18,000 NVIDIA Grace Blackwell chips, targeting 200MW of European AI compute capacity by 2027. Owning compute allows Mistral to offer pricing guarantees, EU data residency compliance, and latency performance that cloud-renting competitors cannot match.

    Can Mistral models integrate with Notion?

    Yes. Self-hosted Mistral deployments can connect to Notion’s REST API and process data without routing it through any external AI provider. Mistral Workflows, the company’s orchestration layer, supports API integrations that can read from and write to Notion databases. This makes Mistral particularly well-suited for teams using Notion as an everything database who need on-premise AI processing.

    What is Mistral Defense?

    Mistral Defense is a NATO-approved deployment configuration of Mistral’s AI models designed for military, intelligence, and critical infrastructure use cases that cannot use commercial cloud infrastructure. It represents one of the first frontier AI models certified for sovereign defense applications, giving Mistral a market position that no American hyperscaler can easily replicate due to data residency and classification requirements.

    Is Mistral building a consumer everything app like ChatGPT?

    Mistral operates Le Chat, a consumer-facing AI assistant. However, Mistral’s primary strategic focus is enterprise and sovereign deployments rather than consumer market share. Unlike ChatGPT or Perplexity, Mistral has not pursued aggressive consumer distribution, instead prioritizing the enterprise, government, and defense segments where data sovereignty requirements give it a structural competitive advantage.

    What is Voxtral?

    Voxtral is Mistral’s text-to-speech and audio processing model released on March 23, 2026. It extends Mistral’s capabilities beyond text into voice interfaces, audio transcription, and conversational applications. Combined with vision capabilities in Mistral Small 4, Voxtral represents Mistral’s push toward a full multimodal stack.

    What is Leanstral?

    Leanstral is Mistral’s work on formal proof engineering—fine-tuning AI models to write and verify mathematical proofs using the Lean theorem proving language. Beyond academic mathematics, it positions Mistral for safety-critical software applications in avionics, medical devices, and financial systems where formal verification of AI outputs is a regulatory requirement.

  • Grok and xAI’s Everything App: The Most Vertically Integrated Bet in the Race

    Every other company in this series is building the everything app from a product. Elon Musk is building it from a thesis — and the thesis is that whoever controls the real-time pulse of human conversation, financial transactions, and AI reasoning simultaneously will own the operating system of public life. That’s an audacious bet. It’s also the most vertically integrated everything-app attempt in history.

    Where Grok/xAI Sits in This Series

    This is the seventh piece in our everything-app series. We’ve covered Microsoft, Google, Notion, the everything database frame, OpenAI, and Perplexity. Grok and xAI are the wildcard — the only player in this series where the everything app ambition is explicit, stated out loud, and backed by the most aggressive compute infrastructure build in history.

    The Structure First — Because It Changed Dramatically

    Before the product, the corporate structure — because it’s unlike anything else in tech and it matters for understanding the strategy.

    In March 2025, X (formerly Twitter) was merged into xAI. In early May 2026, SpaceX acquired the combined xAI/X entity, creating a private conglomerate valued at $1.25 trillion. xAI had raised over $42 billion in total funding before that acquisition, including a $20 billion Series E at a $230 billion standalone valuation in January 2026.

    What that means practically: Grok now sits inside a single private entity that controls a social network with hundreds of millions of users (X), a rocket and satellite company with global connectivity infrastructure (SpaceX/Starlink), the world’s largest AI supercomputer (Colossus), and a financial services platform in active launch (X Money). No other AI company in this series has anything close to that vertical integration. Microsoft comes closest, but their stack was assembled through decades of acquisitions. This one was assembled in under three years.

    The Model Reality: Grok 3 and Grok 4

    Get the models right before the strategy discussion.

    Grok 3 launched February 17, 2025, trained on Colossus with 10x the compute of its predecessor using 200,000 NVIDIA H100 GPUs. Key specs: 128,000-token context window, 12.8 trillion tokens of training data. Benchmark performance: 93.3% on AIME 2025 mathematics, 84.6% on GPQA graduate-level reasoning, 79.4% on LiveCodeBench. DeepSearch (real-time internet analysis) and Big Brain Mode (extended reasoning for complex tasks) are the headline features.

    Grok 4 and Grok 4 Heavy launched July 9, 2025. Grok 4 is the single-agent flagship. Grok 4 Heavy is the multi-agent version — multiple Grok instances running in parallel, coordinating on complex tasks. This is xAI’s answer to Perplexity Computer’s 19-model orchestration: instead of routing across different providers, Grok 4 Heavy runs multiple instances of the same model in parallel, each handling a specialized subtask.

    The compute infrastructure behind these models is its own story. Colossus — xAI’s Memphis supercluster — now houses 555,000 NVIDIA GPUs (H100, H200, and GB200) at a cost of approximately $18 billion, with a 2-gigawatt power target and plans to expand past 1 million GPUs. Phase 1 was built in a record 122 days. In May 2026, SpaceX leased Colossus 1’s full capacity (over 300 megawatts, 220,000 GPUs) to Anthropic, with xAI’s own training workloads having migrated to the newer Colossus 2. Even the compute infrastructure is being monetized.

    X as the Everything App: What’s Actually Live

    Elon Musk has been talking about X as an everything app since the Twitter acquisition in 2022. In 2026, pieces of that vision are actually shipping.

    X Money launched in April 2026 — Musk’s most direct move into consumer financial services. It turns X into a platform where users handle payments, savings, and transfers without leaving the app. Grok is embedded as a native financial assistant, not bolted on. You don’t open a separate AI tool to ask about your spending. The AI is inside the financial layer, contextually aware of your transactions in real time.

    XChat launched as a standalone messaging app on April 17, 2026. Messaging, social, payments, AI reasoning, and real-time information all converging into one surface. The WeChat parallel is intentional — Musk has cited WeChat explicitly as the model.

    Grok inside X gives every X Premium and Premium+ user direct access to Grok’s reasoning, DeepSearch, and Big Brain Mode within the social feed. The AI isn’t a tab you switch to — it’s woven into the content experience. Ask about a tweet, get Grok’s analysis. Ask about a trending topic, get a cited deep-research answer. The social graph and the AI layer are collapsing into one interface.

    Grok Business and Enterprise tiers offer organizational use cases — higher limits, collaboration features, and a commitment that customer data won’t be used to train Grok’s models. Combined with a $200 million DoD contract ceiling and a GSA OneGov arrangement, xAI is also quietly building a federal business that none of the other companies in this series has pursued as aggressively.

    The Data Moat Nobody Else Has: Real-Time Human Behavior

    Here’s xAI’s structural advantage that’s genuinely different from every other player in this series.

    Microsoft has professional data — emails, calendars, documents, LinkedIn profiles. Google has search intent and Gmail. Notion has structured operational data. OpenAI has conversation history. Perplexity has research queries.

    X has something none of them have: real-time human opinion, reaction, and behavioral signal at scale. Every trending topic, every breaking news reaction, every public sentiment shift, every viral idea — it flows through X before it reaches anywhere else. Grok is trained on that data stream and has live access to it via DeepSearch.

    For an everything app, that’s a uniquely valuable data layer. Your financial assistant knowing what the market is reacting to in real time. Your research tool pulling from the live conversation, not a crawled index. Your AI having a pulse on what’s actually happening right now, not what happened 48 hours ago when a web crawler last visited a news site.

    No other AI company owns a real-time public information network. That’s not replicable through an API partnership or an acquisition. It’s structural.

    The Honest Problems: Trust, Brand, and Concentration Risk

    The xAI/Grok everything-app story has real structural strengths. It also has problems that are harder to dismiss than the weaknesses of other companies in this series.

    Brand trust is fractured. X’s post-acquisition turbulence — advertiser departures, content moderation controversies, perception issues — created a brand association problem for Grok that Perplexity, OpenAI, and Google don’t carry. Enterprise buyers who are cautious about the X association are a real constraint on Grok’s enterprise adoption curve, regardless of model quality.

    Concentration risk is extreme. The $1.25 trillion SpaceX/xAI/X entity is, by design, concentrated around one person’s decision-making. For businesses evaluating whether to build on Grok or integrate X Money into their operations, that concentration is a genuine risk factor. Perplexity’s decision to drop ads to protect user trust required a deliberate company-level choice; the equivalent decisions at xAI can turn on one person’s preference on any given day.

    The everything app for whom? X’s user demographics skew toward specific audiences — news, politics, finance, tech, sports. The WeChat model works because WeChat serves everyone in China from grandparents to businesses to governments. X serves a specific slice of global attention. Turning that into a universal everything app requires either dramatically expanding the user base or accepting that xAI’s everything app is vertical — powerful for certain use cases, irrelevant for others.

    The Colossus Wildcard: Compute as Strategy

    One angle on xAI that doesn’t fit cleanly into the everything-app frame but matters enormously: Colossus isn’t just infrastructure for Grok. It’s becoming a compute business in its own right.

    Leasing Colossus 1 to Anthropic in May 2026 generated revenue from a facility that’s already been built and paid for. If Colossus 2 and the planned 1 million GPU expansion continue on schedule, xAI has the potential to become the compute infrastructure provider for competitors it’s racing against — the same way AWS became the infrastructure for companies competing with Amazon’s retail business.

    That’s not an everything-app play. That’s a platform play at the infrastructure layer, and it’s one that compounds the valuation story regardless of whether Grok wins the consumer AI race.

    How Grok Connects to Your Notion Everything Database

    xAI’s public API gives developers access to Grok’s models — including Grok 4 — with tool use, code execution, and agent capabilities. The practical integration pattern for the everything-database architecture: use Grok via the xAI API for tasks where real-time X data matters. Competitive intelligence, social sentiment analysis, trending topic research, financial market reaction — these are the queries where Grok’s live X data access gives genuinely different answers than any other model.

    A Notion Worker fires a query to the xAI API, Grok runs DeepSearch against the live X data stream, and the structured result writes back to your Notion intelligence database. You’re not choosing between Grok and your Notion database — you’re using Grok for the specific queries where its real-time social data layer is the differentiator, and letting Notion hold the structured memory of what you learned.
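    A minimal sketch of those two hops, building the request payloads only. The xAI endpoint is its documented OpenAI-compatible chat-completions API and the Notion page-creation endpoint is real, but the model name, the placeholder database ID, and the `Name`/`Summary` property names are assumptions about one particular setup.

    ```python
    import json

    XAI_ENDPOINT = "https://api.x.ai/v1/chat/completions"  # OpenAI-compatible
    NOTION_PAGES = "https://api.notion.com/v1/pages"       # Notion page creation

    def build_grok_request(query: str) -> str:
        """Chat-completions body for a real-time signal query. The model
        name is an assumption -- check docs.x.ai for current identifiers."""
        return json.dumps({
            "model": "grok-4",
            "messages": [{"role": "user", "content": query}],
        })

    def build_notion_row(database_id: str, title: str, summary: str) -> str:
        """Body for POST /v1/pages that writes the result back as a new row.
        Assumes a database with Name (title) and Summary (rich_text) columns."""
        return json.dumps({
            "parent": {"database_id": database_id},
            "properties": {
                "Name": {"title": [{"text": {"content": title}}]},
                "Summary": {"rich_text": [{"text": {"content": summary}}]},
            },
        })

    grok_body = build_grok_request(
        "What is social sentiment on semiconductor stocks right now?")
    row_body = build_notion_row("db-placeholder", "Semis sentiment",
                                "Summary text from Grok goes here")
    ```

    POST `grok_body` to `XAI_ENDPOINT` with your xAI key, extract the completion text, then POST `row_body` to `NOTION_PAGES` with your Notion token to land the structured result in the database.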

    The everything database doesn’t care which model feeds it. It just cares that the data is structured, accurate, and current. For real-time social and financial signal, Grok is currently the best source available. That’s a specific, defensible use case in a broader multi-model architecture — which is exactly how you should think about every platform in this series.

    Frequently Asked Questions

    What is Grok 4 and how does it differ from Grok 3?

    Grok 4 launched July 9, 2025 in two versions: a single-agent flagship and Grok 4 Heavy, a multi-agent version that runs multiple Grok instances in parallel for complex workflows. Grok 3 (February 2025) was the reasoning breakthrough model trained on Colossus with 200,000 H100 GPUs. Grok 4 builds on that foundation with expanded agentic capabilities and the Heavy multi-agent architecture.

    What is Colossus and why does it matter?

    Colossus is xAI’s AI supercluster in Memphis, Tennessee — currently housing 555,000 NVIDIA GPUs (H100, H200, GB200) at approximately $18 billion in hardware cost, with a 2-gigawatt power target. Phase 1 was built in 122 days. In May 2026, SpaceX leased Colossus 1’s capacity to Anthropic, with xAI migrating to Colossus 2. It’s both the training infrastructure for Grok and an emerging compute business.

    What is X Money?

    X Money launched in April 2026 as X’s consumer financial services platform — payments, savings, and transfers inside the X app, with Grok embedded as a native financial AI assistant. It’s the clearest expression of Elon Musk’s stated vision to turn X into a WeChat-style everything app for Western markets.

    What makes Grok’s data advantage different from other AI models?

    Grok has live access to the X data stream — real-time human opinion, breaking news reactions, trending topics, and public sentiment at scale — via DeepSearch. No other AI model in this series owns a real-time public information network. This makes Grok uniquely valuable for queries where current social and financial signal matters more than historical data.

    How do you access Grok via API?

    xAI’s public API provides developer access to Grok models including Grok 4, with tool use, code execution, and advanced agent capabilities. Enterprise tiers (Grok Business and Grok Enterprise) offer higher limits and data privacy commitments. The API is available at docs.x.ai and supports standard REST integration patterns compatible with Notion Workers and Cloud Run trigger architectures.

  • Perplexity AI’s Everything App Bet: Trust Is the Moat Nobody Else Is Building

    Nobody expected the answer engine to build a browser. Nobody expected the search startup to drop advertising entirely to protect user trust. Nobody expected a company founded in 2022 to reach a $21 billion valuation in 30 months. Perplexity AI is the everything-app candidate nobody saw coming — and their path is unlike any other company in this series.

    Where Perplexity Sits in This Series

    This is the sixth piece in our everything-app series. We’ve covered Microsoft, Google, Notion, the everything database frame, and OpenAI. Perplexity is the dark horse — smaller than all of them, faster-moving than most, and making bets that the incumbents aren’t willing to make.

    The Numbers Nobody Expected

    Start with the trajectory because it reframes everything else. Perplexity was valued at $121 million in April 2023. By early 2026 that number is $21.2 billion — a roughly 175x increase in 30 months. Total funding raised exceeds $1.5 billion, from Nvidia, Jeff Bezos, SoftBank, IVP, Accel, and Databricks. Monthly active users crossed 45 million. The company is processing 170 million global visitors per month. ARR climbed from $35 million in mid-2024 to over $450 million annualized by March 2026.

    Those aren’t hype numbers. ARR of $450M annualized on 45M users, with 800% year-over-year growth, signals genuine product-market fit. People are paying for this. Repeatedly. That matters for the everything-app thesis in a way that a free-tier user count doesn’t.

    The Trust Bet That Changes the Game

    In February 2026, Perplexity made a decision that every other company in this series should take note of: they dropped advertising entirely and moved to a subscription-first model. The stated reason was simple — leadership said the move was intended to preserve user trust in the answer engine, prioritizing objective results over ad revenue.

    Think about what that means as a strategic signal. Google’s entire business model is advertising. Microsoft’s Bing is ad-supported. Every other search surface is optimized, at least partially, for ad revenue. Perplexity looked at that landscape and decided that trust — verifiable, uncompromised trust in the answer — was worth more than ad dollars.

    For an everything app, that’s a profound differentiator. The everything app, by definition, will know more about you than any individual tool currently does. It will see your projects, your research, your questions, your habits. The company that earns the right to that level of access is the one that can credibly say: we are not monetizing your data or your attention. We are working for you.

    Perplexity made that bet explicitly. Nobody else has.

    What Perplexity Has Actually Built

    The product expansion from “AI search” to “everything app candidate” happened fast enough that most people are still thinking of Perplexity as a search box. Here’s what it actually is in mid-2026.

    Perplexity Computer — launched in early 2026 and available on the Max plan ($200/month) — is an autonomous agent that executes complex workflows on your behalf. It uses 19 different AI models, picks the best model for each step of a task, and creates subagents to handle parallel parts of a workflow simultaneously. That’s not a search enhancement. That’s an operating system for work — one that orchestrates multiple frontier models the way a conductor runs an orchestra, without asking you which instrument should play which note.

    Comet — Perplexity’s AI-native browser built on Chromium — launched on Windows and macOS in July 2025, came to iOS in March 2026, and is free on all platforms. It looks like Chrome. But it has an AI assistant built into every page — in-page research, page summarization, autonomous multi-step tasks. It books flights, manages email, fills forms, and translates pages automatically. Comet is the browser as an agent, not a browser with a chatbot bolted on the side.

    Deep Research and Model Council — available now — let you run three frontier models simultaneously, compare outputs, and synthesize a higher-confidence answer. Deep Research is powered in part by Claude Opus 4.6 — Anthropic’s previous flagship model, accessed through Perplexity’s $750M Microsoft Azure commitment, which gives them access to OpenAI, Anthropic, and xAI systems. (Note: Anthropic’s current flagship as of April 2026 is Claude Opus 4.7, with Claude Mythos Preview beyond that — Perplexity’s model routing will update as newer versions become available through the Azure pipeline.) Model Council is the first mainstream consumer feature that makes multi-model reasoning accessible without requiring you to run models yourself.

    Perplexity Connectors let users search across linked file systems — Google Drive natively — for answers that pull from both cloud files and the live web. This is the beginning of the enterprise data layer: Perplexity as a unified search surface across your internal knowledge and the public internet simultaneously.

    Commerce integration with PayPal in conversational search means Perplexity has a purchase flow built into the answer layer. You don’t search for a product, click through to a store, and buy it there. You ask, get an answer with citations, and complete the purchase in the same conversational thread. Amazon took 20 years to get search and commerce this close together. Perplexity did it in three.

    The 19-Model Architecture: Why This Is Different

    Perplexity Computer’s 19-model architecture deserves its own section because it represents a genuinely different philosophy from every other everything-app candidate.

    Microsoft runs Copilot on OpenAI’s models. Google runs Workspace on Gemini. OpenAI runs ChatGPT on GPT-5.5. Notion runs on Claude. Each company has picked a model family and is building their everything app around it. There’s logic to this — it simplifies the architecture, creates pricing leverage, and ensures consistency.

    Perplexity’s bet is the opposite: model neutrality. They use the best model for each task, from whichever provider produces it. Need deep reasoning? Pick o3. Need fast synthesis? Pick Claude Flash. Need computer use? Pick GPT-5.5 Operator. The user doesn’t choose and doesn’t need to know. The system routes to the best tool automatically.

    This is the “everything database” principle applied to models instead of data. Instead of betting on one model family, Perplexity is building the orchestration layer above all of them. If a new model from Mistral or xAI or any other provider becomes best-in-class for a specific task, Perplexity can route to it without rebuilding their product. The platform compounds regardless of which model wins any individual benchmark.
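The routing idea is simple to state in code. Below is a hypothetical sketch of the dispatch logic; the task categories and model identifiers are illustrative assumptions drawn from the examples above, not Perplexity’s actual internals.

```python
# Hypothetical task-type -> model routing table. The task categories and
# model ids are illustrative assumptions, not Perplexity's real internals.
ROUTING_TABLE = {
    "deep_reasoning": "o3",             # long-horizon reasoning
    "fast_synthesis": "claude-flash",   # quick summarization
    "computer_use": "gpt-5.5-operator", # browser/desktop actions
}

def route(task_type: str, default: str = "sonar") -> str:
    """Pick the best available model for a task; fall back to a default."""
    return ROUTING_TABLE.get(task_type, default)
```

The point of the pattern is the maintenance cost: when a new model becomes best-in-class for some task, swapping it in is a one-line table change, not a product rebuild.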

    The Honest Weakness: No Data Moat, No OS, No Inbox

    Perplexity doesn’t own an operating system. They don’t own an email platform. They don’t have a professional network. Their Connectors are real but limited compared to the native data access Microsoft and Google have by default. And their 45 million users, while impressive for a three-year-old company, are dwarfed by ChatGPT’s 500 million and Google’s three billion Workspace users.

    The $750M Azure commitment — while providing access to frontier models — also creates a dependency that model-owning competitors don’t have. If Microsoft changes Azure pricing, or restricts access to specific models, Perplexity’s multi-model architecture becomes more expensive and more fragile at once.

    The Max plan at $200/month for Perplexity Computer is expensive relative to alternatives. Enterprise adoption sits at 11% of organizations using generative AI: real, but still a minority position. And the path from answer engine to everything app requires trust-building and behavioral habit formation at a scale Perplexity hasn’t yet demonstrated for enterprise workloads.

    Why Perplexity Might Win Anyway

    Here’s the contrarian case, and it’s more credible than it sounds.

    The everything app that wins will be the one people trust with their most important questions. Not their files — their questions. The difference between a search engine and an everything app is that an everything app is the place you go when you genuinely don’t know what to do next. When you’re trying to figure out a business problem. When you need to research something critical. When you’re making a decision that matters.

    Perplexity is building specifically for that moment. Cited answers, not generated hallucinations. Subscription trust, not ad-influenced results. Multi-model consensus through Model Council, not single-model confidence. Deep Research for the questions that take hours, not seconds. They are optimizing for the highest-stakes use cases in knowledge work, not the highest-volume use cases.

    If your everything app is defined by “where I go when I need to know something important” — Perplexity has a credible claim on that moment that no other company in this series is directly competing for. Microsoft is competing for enterprise workflow. Google is competing for the native stack. OpenAI is competing for behavioral habit. Perplexity is competing for epistemic trust. That’s a different race.

    How Perplexity Connects to Your Notion Everything Database

    Perplexity’s Connectors currently support Google Drive natively, with more file system connections expanding through their enterprise roadmap. Via the Sonar API — Perplexity’s developer API for embedding answer-engine capabilities in external applications — you can build a bridge between Perplexity’s research layer and your Notion database structure.

    The practical architecture: Perplexity handles the live-web research and synthesis layer (the questions where you need current, cited, real-world information). Your Notion everything database stores the structured outputs — the decisions made, the research conclusions, the action items triggered. A Notion Worker fires the Perplexity query via the Sonar API, receives the response, and writes the structured result back to the relevant database row. Perplexity becomes your research engine. Notion becomes the memory that persists what you learned.

    That’s the hybrid that makes each tool better than it would be alone — and it’s the kind of architecture that only becomes possible when you stop asking which platform wins and start asking which platforms work best together.
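    That loop can be sketched as two payload builders, one per direction. This is a minimal sketch, assuming the Sonar API’s chat-completions request shape and a Notion database row with a rich-text property named “Research Result” (a hypothetical name for your own schema); auth headers, the HTTP calls themselves, and error handling are omitted.

```python
# Minimal sketch of the Perplexity -> Notion bridge. Endpoint shapes follow
# the public docs; "Research Result" is a hypothetical property name.
SONAR_URL = "https://api.perplexity.ai/chat/completions"
NOTION_PAGE_URL = "https://api.notion.com/v1/pages/{page_id}"

def build_sonar_request(question: str, model: str = "sonar") -> dict:
    """Request body for a cited, live-web research query via the Sonar API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

def build_notion_writeback(answer: str) -> dict:
    """PATCH body that stores the synthesized answer back on the database row."""
    return {
        "properties": {
            "Research Result": {
                # Notion rich_text values are capped; truncate defensively.
                "rich_text": [{"text": {"content": answer[:2000]}}]
            }
        }
    }
```

    The Worker fires the first payload at the Sonar endpoint, waits for the response, and fires the second at the Notion page endpoint. Everything stateful lives in the database row.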

    Frequently Asked Questions

    What is Perplexity Computer?

    Perplexity Computer is an autonomous AI agent launched in early 2026, available on the Max plan ($200/month). It uses 19 different AI models, routing each step of a task to the best available model and creating parallel subagents for complex workflows. It represents Perplexity’s most direct move toward an AI operating system for knowledge work.

    What is the Comet browser?

    Comet is Perplexity’s AI-native browser built on Chromium, launched on Windows and macOS in July 2025 and iOS in March 2026. It’s free on all platforms. It builds an AI assistant into every page — summarizing content, conducting in-page research, and executing multi-step tasks like booking flights, managing email, and filling forms autonomously.

    Why did Perplexity drop advertising?

    In February 2026, Perplexity discontinued its AI-integrated advertising strategy and moved to a subscription-first model. Leadership stated the decision was made to preserve user trust in the answer engine — prioritizing objective, uninfluenced results over ad revenue. This positions Perplexity as the only major AI search platform explicitly working for the user rather than for advertisers.

    What is Perplexity’s Model Council?

    Model Council lets users run three frontier AI models simultaneously, compare their outputs, and synthesize a higher-confidence answer. Combined with Deep Research (powered in part by Claude Opus 4.6 via Perplexity’s Azure access), it makes multi-model reasoning accessible without requiring users to choose or manage individual models.

    What is the Perplexity Sonar API?

    The Sonar API is Perplexity’s developer API for embedding answer-engine capabilities — cited, real-time web research — into external applications. It’s the integration layer for connecting Perplexity’s research capabilities to systems like Notion databases, CRMs, or custom workflows via Notion Workers or other trigger architectures.

  • OpenAI’s Everything App: Why Behavior Is a Better Moat Than Infrastructure

    Microsoft has LinkedIn and enterprise distribution. Google has the native stack. Notion has the database architecture. OpenAI has something none of them have: 500 million people who already open ChatGPT when they want to get something done. That’s not a product advantage. That’s a behavior advantage. And behavior is the hardest moat to breach.

    Where OpenAI Sits in This Series

    This is the fifth piece examining who builds the everything app. We’ve covered Microsoft, Google, Notion, and the everything database frame. OpenAI’s path is the most unusual: they’re not building from infrastructure up. They’re building from user behavior down.

    The Model Reality First — Get This Right

    Before the strategy discussion, the model facts — because the landscape shifted significantly in early 2026 and the marketing doesn’t always match what’s actually deployed.

    As of mid-2026, OpenAI’s current flagship is GPT-5.5, which powers ChatGPT Enterprise (unlimited messages) and is the reasoning backbone of the unified super-assistant experience. The o-series — o3 and o4-mini — are the thinking models, trained to reason longer before responding. o3 is the deep-reasoning flagship; o4-mini is the high-throughput option that outperforms o3-mini on non-STEM tasks and data science, with higher usage limits.

    Notably, GPT-4o, GPT-4.1, and GPT-4.1 mini were retired from ChatGPT as of February 13, 2026. Enterprise customers retained GPT-4o access until April 3, 2026. If you’re referencing these models in your stack — in tutorials, in documentation, in integrations — those references are now stale. The current tier is GPT-5.5 Instant / Thinking and the o3/o4-mini reasoning models.

    One more significant infrastructure move: the Assistants API is being deprecated, with sunset on August 26, 2026. OpenAI is replacing it with the Responses API — a new primitive that combines Chat Completions simplicity with Assistants-style tool use, supporting web search, file search, and computer use natively. If you built on the Assistants API, migration planning should already be underway.

    OpenAI’s Everything App Bet: Behavior Over Infrastructure

    Microsoft’s everything app bet is infrastructure — they own the OS, the enterprise software stack, and a professional network. Google’s bet is native stack — they own search, email, calendar, and mobile. Both are building from the platform up.

    OpenAI is doing the opposite. They’re starting from where people already go to get things done, and expanding outward from that behavioral beachhead. ChatGPT’s 500 million monthly users don’t use it because it owns their email. They use it because it’s the fastest path from question to answer, from idea to draft, from problem to solution.

    The everything app doesn’t have to own your data. It just has to be the place you go first. OpenAI is betting that if they can make ChatGPT good enough at enough things — and fast enough at integrating with the tools you already use — the behavioral habit becomes the moat. You stop going to Google first. You stop opening a new app. You open ChatGPT.

    The Pieces OpenAI Has Assembled

    The consolidation has been quieter than Microsoft’s marketing machine or Google’s Cloud Next announcements, but the pieces are substantial.

    Operator — the computer-using agent — launched as a research preview in early 2025 and integrated fully into ChatGPT by mid-year. It browses, clicks, fills forms, and manages logins autonomously. GPT-5.5’s score on OSWorld-Verified — the standard benchmark for computer-use agents — is 78.7%. The human baseline on the same benchmark is 72.4%. That’s not a lab result. That’s production-grade desktop and browser automation beating human performance on standardized tasks.

    Projects and Memory — launched through 2025 — give ChatGPT persistent context across sessions. Projects (November 2025) let you organize work by context. Project Memory (August 2025) lets ChatGPT learn your preferences, communication style, and working patterns over time. This is the foundational layer for the everything app: an AI that knows you, not just your current prompt.

    Workspace Agents for Enterprise — launched April 22, 2026 — let enterprise teams create, share, and manage AI agents for workflow automation. Powered by Codex, these agents handle reporting, coding, and messaging tasks autonomously. This is OpenAI’s direct enterprise play, competing with Microsoft’s Agent 365 and Google’s Workspace Studio on their home turf.

    Sora 2 — released September 2025 — moved AI video from novelty to production-grade. It’s available both as a standalone app and deeply integrated within ChatGPT. Video generation, image creation, voice, code execution, deep research, file analysis — all inside one interface. The surface area of what ChatGPT can do has expanded faster than most people have tracked.

    The Apps SDK and MCP support — announced in 2025 — let developers build UIs alongside MCP servers, defining both logic and interactive interface of applications that run inside ChatGPT. OpenAI is building a developer ecosystem where third-party tools surface inside ChatGPT natively, not as links out to other apps.

    The Honest Strategic Weakness: OpenAI Doesn’t Own the Data Layer

    Here’s the structural problem with OpenAI’s everything-app path that doesn’t get enough attention.

    Microsoft owns the calendar data, the email data, the document data, the professional network data. Google owns the same stack natively. Notion owns the database architecture where your operational data lives. OpenAI owns a conversation history and whatever files you’ve uploaded to Projects.

    That’s a meaningful gap. When you ask Microsoft Copilot “what happened in last week’s client meeting?” it can actually answer — because it has the calendar event, the Teams recording transcript, and the follow-up email thread. When you ask ChatGPT the same question, the answer is only as good as what you’ve explicitly provided.

    OpenAI’s answer to this is Operator and the connector ecosystem — let ChatGPT reach into your existing tools and pull the data it needs. That works, but it creates a dependency chain that Microsoft and Google don’t have. Every integration is a point of failure. Every API change is a breakage risk. Every permission prompt is friction that erodes the behavioral habit.

    The Responses API — replacing the Assistants API in August 2026 — is designed to close some of this gap with native web search, file search, and computer use built in. But native search is not the same as owning the inbox. And computer use, for all its benchmark performance, is still slower and less reliable than a dedicated integration.

    Where OpenAI Wins: The Consumer and Creator Layer

    The enterprise everything-app race may go to Microsoft or Google by default — too much infrastructure, too many IT relationships, too much compliance architecture for a newcomer to overcome in 18 months.

    But the consumer and creator layer is wide open. And that’s where OpenAI’s behavioral moat matters most.

    For freelancers, solopreneurs, content creators, small agencies, and knowledge workers who aren’t tied to an enterprise IT environment, ChatGPT is already the everything app. It drafts your emails, edits your copy, analyzes your data, generates your images, browses for research, and runs your automations. The question isn’t whether they’ll adopt it — they already have. The question is whether OpenAI deepens that relationship fast enough to make switching costly before Microsoft and Google catch up on the consumer side.

    Memory is the weapon here. The longer a user runs their work through ChatGPT Projects with memory enabled, the more context OpenAI accumulates about how that person thinks, works, and communicates. That context is genuinely hard to transfer to a competing platform. It’s not data in a database — it’s learned behavioral preference. The switching cost compounds with every session.

    The Operator Economy: OpenAI’s Wildcard

    The most underrated piece of OpenAI’s everything-app strategy isn’t ChatGPT itself — it’s the operator ecosystem.

    An “operator” in OpenAI’s framework is any business that deploys ChatGPT capabilities inside their own product. Every company building on the OpenAI API — embedding ChatGPT into their CRM, their help desk, their e-commerce platform, their internal tools — is an operator. Every one of those deployments is a surface where OpenAI’s models become the intelligence layer of someone else’s everything app.

    Microsoft has Copilot. Google has Gemini. But neither of them has the sheer number of third-party applications already running on their models that OpenAI has accumulated. The operator ecosystem means OpenAI doesn’t have to build every surface themselves. They just have to remain the model that operators trust most — and as long as GPT-5.5 and the o-series stay at the frontier of capability, that trust is relatively durable.

    The Workspace Agents launch, combined with the Apps SDK and MCP support, is OpenAI formalizing this operator model for enterprise. They’re saying: we won’t replace your enterprise software stack. We’ll become the reasoning layer that sits across all of it.

    What This Means for Your Stack Right Now

    If you’re building on OpenAI’s API or running workflows through ChatGPT, three immediate action items:

    • Audit your Assistants API usage now. August 26, 2026 sunset is closer than it looks. The Responses API migration path is documented — start the evaluation before you’re forced into a rushed migration.
    • Enable Projects and Memory for your team’s ChatGPT accounts. The compounding advantage of memory only builds if you start using it. Teams that have six months of Project memory by Q4 2026 will have a materially different AI experience than teams starting fresh.
    • Think about where ChatGPT sits relative to your Notion database. OpenAI’s operator model and MCP support mean ChatGPT can connect to your Notion everything database via the Notion Public API. The everything database frame doesn’t require you to choose between Notion and ChatGPT — it lets you use both, with Notion as the structured data layer and ChatGPT as the reasoning and action surface on top of it.
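    For the first item, the shape of the change is worth seeing up front. Below is a hedged sketch of a Responses API request body, based on OpenAI’s published migration guidance; the model id is a placeholder, and the built-in tool type strings may differ by account and API version.

```python
# Sketch of the Assistants -> Responses migration. The Assistants API
# required a thread + run lifecycle; the Responses API is a single request
# whose built-in tools (web search, file search) need no function schema.
# "gpt-5.5" and the tool type strings are placeholders, not verified ids.
def build_responses_request(prompt: str) -> dict:
    """Request body in the Responses API shape."""
    return {
        "model": "gpt-5.5",                  # placeholder model id
        "input": prompt,                      # 'input', not a managed thread
        "tools": [
            {"type": "web_search"},           # built-in tool; name may vary
            {"type": "file_search"},          # built-in tool; name may vary
        ],
    }
```

    The migration work is mostly deleting code: thread creation, run polling, and message retrieval collapse into one call-and-response.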

    The everything app race isn’t over. OpenAI has the behavior moat, the operator ecosystem, and the fastest-moving model roadmap of any company in this field. What they don’t have is the data infrastructure that Microsoft and Google own by default. How they close that gap — through connectors, through Operator’s computer-use capabilities, through the Responses API — will determine whether ChatGPT becomes the everything app or the everything layer sitting on top of someone else’s everything app.

    Both outcomes are valuable. Only one of them wins the race.

    Frequently Asked Questions

    What is OpenAI’s current flagship model in 2026?

    As of mid-2026, GPT-5.5 is OpenAI’s primary model powering ChatGPT Enterprise. The o3 and o4-mini models handle deep reasoning tasks. GPT-4o, GPT-4.1, and GPT-4.1 mini were retired from ChatGPT on February 13, 2026. The Assistants API sunsets August 26, 2026, being replaced by the Responses API.

    What is the OpenAI Responses API?

    The Responses API is OpenAI’s replacement for the Assistants API (sunset August 26, 2026). It combines Chat Completions simplicity with Assistants-style tool use, supporting built-in web search, file search, and computer use. It’s the new primitive for building agents on OpenAI’s platform.

    What are OpenAI Workspace Agents?

    Launched April 22, 2026, Workspace Agents let enterprise teams create, share, and manage AI agents for workflow automation inside ChatGPT. Powered by Codex, they handle reporting, coding, and messaging tasks autonomously — OpenAI’s direct enterprise play against Microsoft Agent 365 and Google Workspace Studio.

    How does ChatGPT Operator work?

    Operator is OpenAI’s computer-using agent — it browses, clicks, fills forms, and manages logins autonomously. GPT-5.5 scores 78.7% on the OSWorld-Verified benchmark for computer-use tasks, above the 72.4% human baseline. It’s integrated directly into the ChatGPT interface for eligible plans.

    Can ChatGPT connect to a Notion database?

    Yes. Via the Notion Public API and OpenAI’s MCP support and connector ecosystem, ChatGPT can read from and interact with Notion databases. This makes the “everything database” architecture viable with OpenAI as the reasoning surface — Notion holds the structured data, ChatGPT reasons and acts on it.

  • Notion Isn’t the Everything App. It’s the Everything Database — and That’s a Better Bet.

    Everyone is building the everything app. Microsoft wants to be yours. Google wants to be yours. Notion wants to be yours. But there’s a fourth path nobody is talking about — and it might be the smartest play for brands, agencies, and multi-system operators: don’t pick one everything app. Build one everything database, and let it feed all of them.

    The Core Idea

    Notion isn’t competing to be your everything app. It’s competing to be your everything database — the structured, queryable, agent-ready source of truth that sits underneath whatever surface you use. The everything app becomes interchangeable. The database is the moat.

    The Series So Far — and Why This Frame Changes Everything

    This is the fourth piece in a series examining who wins the everything-app race. We looked at Microsoft stitching together an everything app through acquisitions, Google trying to unify a native stack it keeps fragmenting, and Notion building from the database up. Each piece treated the everything app as the destination.

    But there’s a reframe worth making. What if the everything app isn’t the destination? What if the destination is the data layer underneath it — and the everything app is just whichever surface happens to be most useful at a given moment?

    That’s the angle that emerged from actually building inside Notion Workers alpha. And it changes the strategic calculus significantly for anyone running a brand, an agency, or a multi-system operation.

    Your Brand Doesn’t Need One Everything App. It Needs One Everything Database.

    Think about what an everything app actually requires to work. It needs to know your tasks. Your projects. Your contacts. Your content calendar. Your pipeline. Your team’s status. Your historical decisions. Your brand voice. Your client relationships. Your automation outputs.

    That’s not an app problem. That’s a data structure problem. And the company that solves the data structure problem — that gives you a clean, typed, queryable, agent-ready home for all of that — wins, regardless of which surface you use to view it.

    Notion’s database architecture is purpose-built for exactly this. Every property is typed. Every row is queryable. Every database can be filtered, sorted, related, and rolled up. When you build your brand’s operational data inside Notion — tasks with statuses, projects with owners, content with metadata, contacts with relationship history — you’re not just organizing. You’re building a structured intelligence layer that agents can read, write, and reason over reliably.

    That database doesn’t care which “everything app” sits on top of it. Microsoft Copilot can query it. Google Workspace agents can sync from it. Your own custom dashboard can read it via the Public API. Claude can operate on it directly. The surface is interchangeable. The database is the thing that compounds in value over time.

    The 30-Second Trigger: Where the Architecture Gets Interesting

    Here’s the piece that came out of our own Workers alpha experience — and it reframes the “30-second sandbox limitation” from a constraint into a feature.

    Notion Workers runs in a 30-second execution window. We hit that wall hard when we tried to move heavy automations — multi-site WordPress optimization passes, content pipelines, image generation — into Workers. Those are multi-minute jobs. They don’t fit.

    But 30 seconds is more than enough to do one specific thing: fire a signed HTTP POST to an external endpoint and return.

    That’s the architectural insight. You don’t use Notion Workers to execute heavy work. You use Notion Workers to trigger it. The Worker wakes up — on a schedule, on a database change, on a webhook — reads the relevant Notion database row, constructs a signed payload, fires a POST to a Google Cloud Run job, and exits. The whole thing takes under five seconds. Well within the 30-second window.

    Cloud Run picks up the job, runs for as long as it needs — minutes, not seconds — and when it’s done, it writes the results back to the Notion database via the Public API. The Notion database is now the job queue, the status tracker, the results store, and the orchestration log. All in one place. All queryable by agents.

    The pattern in practice:

    Notion Worker (cron / DB change / webhook)
      → reads Notion database row for job config
      → signs POST to Cloud Run endpoint
      → returns immediately (3–8 seconds, well under 30s)

    Cloud Run (no time limit)
      → runs heavy job (WP optimization, pipeline, image gen)
      → writes status + results back to Notion DB via Public API

    Notion Database
      → job queue / status tracker / results store / audit log
      → queryable by agents, visible to team, triggerable again

    This is the hybrid architecture we’re running. Our Tuesday 18-site WordPress SEO optimization pass runs on Cloud Run — not because Notion can’t orchestrate it, but because Notion does orchestrate it, as the database layer, while Cloud Run handles the execution. The Worker is the trigger. Cloud Run is the muscle. Notion is the brain that remembers everything.
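    The trigger itself is small. Notion Workers execute JavaScript; the signing logic is sketched here in Python for brevity, with a hypothetical Cloud Run URL and shared secret. The Worker side builds an HMAC-signed POST; the Cloud Run side verifies the signature before running the job.

```python
import hashlib, hmac, json, time

CLOUD_RUN_ENDPOINT = "https://seo-runner.example.run.app/jobs"  # hypothetical
SHARED_SECRET = b"rotate-this-secret"                           # hypothetical

def build_signed_trigger(job_row: dict) -> tuple[dict, bytes]:
    """Worker side: headers + body for the POST to Cloud Run.
    The actual HTTP call (and the Worker's JS runtime) are omitted."""
    body = json.dumps({"job": job_row, "ts": int(time.time())},
                      sort_keys=True).encode()
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"Content-Type": "application/json", "X-Signature": sig}, body

def verify_trigger(body: bytes, signature: str) -> bool:
    """Cloud Run side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

    Signing matters because the Cloud Run endpoint is publicly reachable: the shared secret is the only thing distinguishing a Worker-originated trigger from an arbitrary POST.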

    What “Brand Everything Database” Actually Means in Practice

    If you’re an agency, a media operation, or a multi-brand operator, here’s the concrete version of this architecture:

    • One Notion workspace as the brand OS. Every client, project, task, content piece, automation job, and decision lives as structured database rows. Not documents. Not folders. Typed, relational data.
    • Agents inside Notion prep the data. Custom agents compile status updates, flag stale work, surface blockers, build briefings — all operating on the Notion database directly. The “everything” data is always clean and current because agents are maintaining it continuously.
    • Workers trigger external execution. When a job needs more than 30 seconds — content pipelines, SEO runs, bulk operations — a Worker fires the trigger. Cloud Run executes. Results come back into Notion. The database stays the source of truth.
    • Any surface can consume it. A Copilot user can query the project database through Microsoft Graph connectors. A Google Workspace user can sync from Notion via the connector ecosystem. A custom dashboard can read the Notion API. The front end doesn’t matter. The database is always current.
    • External agents get full context. Through the External Agents API, Claude, Codex, or any agent you build can operate against your Notion databases with complete organizational context — not a generic AI, but one that knows your specific data, your specific projects, your specific brand.
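    As a concrete instance of “any surface can consume it”: a filter query against one of those databases over the Notion Public API. A minimal sketch of the query payload; the endpoint route follows Notion’s documented /v1/databases/{id}/query, and the property name “Status” is a placeholder for your own schema.

```python
# Sketch of a Notion Public API database query payload. "Status" is a
# hypothetical select property in your schema.
NOTION_QUERY_URL = "https://api.notion.com/v1/databases/{database_id}/query"

def build_status_query(status: str) -> dict:
    """Rows whose Status select equals the given value, newest first."""
    return {
        "filter": {"property": "Status", "select": {"equals": status}},
        "sorts": [{"timestamp": "last_edited_time",
                   "direction": "descending"}],
    }
```

    Any front end — a Copilot connector, a custom dashboard, an external agent — can POST this same payload and read the same rows, which is the whole point of centralizing in the database.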

    Why This Beats Betting on One Everything App

    The everything-app race has a winner-take-all framing that may be wrong. Here’s what we’ve observed from operating across Microsoft, Google, and Notion simultaneously:

    Different team members live in different surfaces. Your developer lives in GitHub and a terminal. Your account manager lives in Gmail. Your ops lead lives in a spreadsheet. Your creative lead lives in Figma. Forcing everyone onto one everything app means fighting human behavior, not working with it.

    But if everyone’s work — regardless of where they do it — writes back into a shared Notion database? The everything app problem disappears. You don’t need everyone in the same surface. You need everyone’s data in the same structure.

    That’s what Notion’s connector ecosystem is actually building toward. GitHub syncs into Notion. Jira syncs into Notion. Salesforce syncs into Notion. Slack syncs into Notion. The surface stays wherever it is. The intelligence layer centralizes.

    The Compounding Advantage

    Here’s the strategic reason this matters beyond the technical architecture: databases compound. Documents don’t.

    A Google Doc from two years ago is mostly dead weight — hard to search, hard to query, impossible for an agent to reason over reliably. A Notion database from two years ago is a living asset. Every row is still queryable. Every relationship still works. The history of every project, every decision, every outcome is structured data that an agent can analyze, pattern-match against, and use to inform current work.

    The longer you run your brand’s operations through a Notion database, the smarter your agents get — because they have more structured history to work with. That’s not true of any document-first system. And it’s not something you can easily replicate once a competitor has two years of structured operational data and you’re starting from scratch.

    The everything app you pick in 2026 matters less than the data structure you commit to in 2026. Pick the wrong everything app and you switch in 18 months. Pick the wrong data structure and you’re rebuilding from zero.

    The Practical Starting Point

    If this architecture makes sense for your operation, here’s how to think about the starting point:

    • Audit what data your business actually runs on. Tasks, projects, clients, content, pipelines, automations — map out what you’re currently tracking and where. How much of it is in documents? How much is in structured databases?
    • Pick the three databases that matter most and build them right in Notion. Don’t try to migrate everything at once. Start with your project tracker, your content calendar, and your client/contact database. Get those typed, relational, and agent-ready.
    • Connect one external source via Workers or the connector ecosystem. Slack, GitHub, Jira — pick the one that generates the most signal for your operation and get it syncing into Notion.
    • Build one Custom Agent that works on those databases. A status compiler, a blocker detector, a briefing builder — something that demonstrates the database-first advantage concretely to your team.
    • Then consider the trigger pattern. What jobs in your operation take longer than 30 seconds but could be triggered from a database change? Those are your first Cloud Run candidates, with Notion as the orchestration layer.
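
    That trigger pattern can be sketched in a few lines of TypeScript. Treat this as a hedged illustration, not Notion's official Worker API: the `CLOUD_RUN_URL`, the `JOB_SIGNING_SECRET` environment variable, and the `X-Signature` HMAC header are all conventions you would define yourself.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical endpoint and shared secret -- substitute your own deployment.
const CLOUD_RUN_URL = "https://example-job-runner.a.run.app/jobs";
const SIGNING_SECRET = process.env.JOB_SIGNING_SECRET ?? "dev-secret";

// HMAC-sign the payload so the Cloud Run service can verify the request
// came from your Worker (a shared-secret convention, not a Notion scheme).
export function signPayload(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// The Worker's whole job: take the changed row's id, fire the POST, return.
// It finishes well under the 30-second sandbox limit because the heavy
// work happens on Cloud Run, not here.
export async function triggerJob(pageId: string, job: string): Promise<void> {
  const body = JSON.stringify({ pageId, job, ts: Date.now() });
  await fetch(CLOUD_RUN_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Signature": signPayload(body, SIGNING_SECRET),
    },
    body,
  });
}
```

    Cloud Run then does the multi-minute work and writes results back to the database through the Public API, so Notion stays the orchestration and visibility layer.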

    The everything app race is real. But the more durable competitive advantage is the data structure underneath it. Build the database right, and the everything app becomes a detail.

    Frequently Asked Questions

    What is a “brand everything database” in Notion?

    A brand everything database is a Notion workspace architected as the structured, queryable source of truth for all of a brand’s operational data — tasks, projects, content, clients, automations, decisions. Unlike document-based systems, every piece of information is typed, relational, and agent-readable. External tools sync into it; agents operate on it; any surface can consume it via the Public API.

    How do Notion Workers act as triggers for Google Cloud Run?

    Notion Workers run in a 30-second sandbox — enough time to read a Notion database row, construct a signed payload, and fire an HTTP POST to a Cloud Run endpoint. The Worker returns immediately; Cloud Run handles the long-running execution (minutes, not seconds) and writes results back to the Notion database via the Public API. This makes Notion the orchestration and visibility layer without hitting the sandbox time limit.

    Why is a database-first architecture better than document-first for AI agents?

    Documents require AI to infer structure from prose — an error-prone process that degrades at scale. Database rows are typed, structured, and directly queryable. An agent asking “which projects are blocked this week?” gets an exact filter result from a Notion database in milliseconds; the same question against a folder of Google Docs produces a best-effort summary. Reliability and precision are the key differences.

    Can Notion databases feed Microsoft Copilot or Google Workspace agents?

    Yes, via connectors and the Notion Public API. Microsoft Graph connectors and Google Workspace connectors can sync from Notion databases. Custom agents built on the External Agents API can also read and write Notion data from any external platform. The Notion database becomes the shared source of truth regardless of which AI surface your team prefers.

    What’s the best first step to building a brand everything database in Notion?

    Start with three core databases: a project tracker, a content calendar, and a client/contact database. Get them typed with proper properties, linked relationally, and cleaned up. Then build one Custom Agent that operates on those databases — a status compiler or briefing builder. Once you’ve seen the database-first advantage in action, the architecture for connecting external tools and Cloud Run triggers becomes obvious.

  • Notion’s Database-First Bet: Why the Everything App Might Be Built on a Spreadsheet, Not a Document

    Microsoft is stitching together an everything app from acquisitions. Google is trying to unify a native stack it keeps fragmenting. Notion is doing something different — and arguably more interesting. It’s building the everything app from the database up, and it just made its most important move yet.

    Definition: The Database-First Everything App An AI-powered workspace where every piece of information — tasks, projects, docs, contacts, data — lives in a structured, queryable database, and agents can read, write, reason over, and act on that data autonomously. The database isn’t the backend. It’s the interface.

    Yesterday Changed Everything for Notion

    On May 13, 2026 — yesterday — Notion shipped version 3.5 and announced their full Developer Platform in a livestreamed product event. The tech press covered it as an AI agent story. They weren’t wrong, but they missed the bigger frame.

    Notion didn’t just add agents. They introduced a new primitive called Workers — a hosted runtime for custom code that lets teams extend Notion without running their own servers. Database sync, agent tools, and webhook triggers all run through Workers. They launched the External Agents API, allowing any agent — ones you built, or ones from Claude, Codex, Decagon, and other partners — to work natively inside your Notion workspace. And they opened a developer platform that lets teams connect AI agents, external data sources, and custom code directly into their workspace.

    Taken individually, these are nice product updates. Taken together, they’re an orchestration play. Notion is positioning itself not as a note-taker with AI features bolted on, but as the hub where people, agents, and data collaborate across every tool a team uses.

    The Database Advantage Nobody Else Has

    Here’s the thing that separates Notion from every other everything-app candidate — including Microsoft and Google.

    Both Microsoft 365 and Google Workspace are document-first platforms. Their fundamental unit of work is a file: a Word document, a Google Doc, a PowerPoint, a Sheet. Files are great for humans to read. They’re terrible for AI to reason over at scale. You can’t ask an AI agent to “find every project where the status is blocked and the deadline is this week” across a folder of Word documents and get a reliable answer.

    Notion’s fundamental unit is a database. Every page can be a database row. Every property is structured, queryable, filterable data. When Notion AI looks at your workspace, it doesn’t see a pile of documents — it sees a relational knowledge graph. Tasks have statuses. Projects have owners and deadlines. Contacts have properties. Everything is connected, typed, and queryable.

    That’s not a feature difference. That’s an architectural difference. And it’s why Notion’s agents can do things that Copilot and Gemini agents fundamentally struggle with: operate reliably on your actual organizational data, not summaries of your documents.

    The Agent Timeline: Faster Than Anyone Expected

    Notion’s agent rollout has moved at a pace that’s easy to underestimate if you haven’t been watching closely. Here’s the actual timeline:

    • September 18, 2025 — Notion 3.0: Agents. First AI agents launch. Autonomous data analysis and task automation. The starting gun.
    • January 20, 2026 — Notion 3.2. Mobile AI, new model support, people directory. Agents go everywhere, not just desktop.
    • February 24, 2026 — Notion 3.3: Custom Agents. Users can build their own agents from scratch. Over 21,000 custom agents built in the first free trial period alone. Notion reported 2,800 agents running 24/7 internally at Notion itself.
    • March 2026. Workers introduced in alpha — a TypeScript-based framework for agents to talk to any service with an API. The coding layer for power users.
    • April 14, 2026 — Notion 3.4. Calendar and inbox connectors. Notion AI can now schedule meetings and draft emails from inside your workspace.
    • May 5, 2026. Custom Agent admin controls for enterprise — workspace-level credit limits, governance tools, compliance features.
    • May 13, 2026 — Notion 3.5: Developer Platform. External Agents API, Workers out of alpha, database sync with no servers, full developer ecosystem launched.

    That’s eight months from first agent launch to full developer platform. For context, Microsoft spent years building out its Azure OpenAI integration before Copilot could do anything comparable to what Notion shipped in under a year.

    What the Notion Everything App Actually Looks Like Today

    This isn’t theoretical. Here’s what a team running on Notion can configure right now:

    • Your project data, always current. Data synced from Slack, Google Drive, GitHub, Jira, Microsoft Teams, Salesforce, and Box — all flowing into Notion databases in real time, powered by Workers. No manual updates. No stale spreadsheets.
    • Agents watching your work. Custom agents triggered by database changes, schedules, or webhooks — compiling status updates, flagging blocked tasks, escalating overdue items, answering team FAQs.
    • Your inbox and calendar inside your workspace. Connect Gmail or Outlook and your calendar; Notion AI can schedule meetings and draft emails without leaving the tool your work already lives in.
    • External agents working in your context. Claude, Codex, Decagon, and agents you’ve built yourself via the External Agents API — all operating against your Notion databases with full context. Not generic AI. AI that knows your specific data.
    • Plan Mode for complex operations. Before an agent makes large changes to your databases or pages, it stops, asks clarifying questions, and builds a plan for your approval. This is the governance layer that makes AI trustworthy in a business context.
    • Your institutional knowledge, always accessible. Every decision, every project history, every team document — structured and queryable by agents that can synthesize across your entire knowledge base on demand.

    The Model Behind It: Claude Opus 4.7

    Unlike Microsoft (Copilot runs on GPT-4o and Azure OpenAI) and Google (Gemini family), Notion is built on Anthropic’s Claude. As of the January 2026 update, Notion runs Claude Opus 4.7 — Anthropic’s most capable model at the time of release — for its AI features and agent reasoning.

    This is a strategic choice worth examining. Claude is specifically designed with a focus on reliability, honesty, and safe behavior in agentic contexts — qualities that matter enormously when an AI agent has write access to your company’s databases. Anthropic’s Constitutional AI training approach was built for exactly the kind of autonomous, long-running agent work that Notion is deploying.

    The Notion + Claude combination isn’t just a vendor relationship. It’s an architectural alignment: a database-first workspace built on a model specifically designed for trustworthy agentic behavior. That’s a more coherent stack than either Microsoft or Google has assembled, where the AI model and the productivity platform were developed independently and integrated later.

    Why “Database First” Beats “Document First” for the Everything App

    Let’s make this concrete with a comparison most teams will recognize.

    Ask Microsoft Copilot: “Which of our client projects are behind schedule this quarter?” Copilot will search your emails, scan your SharePoint documents, and produce a reasonable summary — but it’s reading prose, inferring structure, and hoping the documents are up to date. The answer is a best-effort synthesis, not a query result.

    Ask a Notion agent the same question: it runs a database filter. Status = Behind. Quarter = Q2 2026. It returns an exact list in under a second, with links to every project, the responsible person, and the last update — because that data is structured. The agent didn’t infer anything. It read typed data.
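
    That query maps almost one-to-one onto the Notion Public API's `databases/{id}/query` endpoint. A minimal sketch: the endpoint and filter shape follow the public API, but the `Status` and `Quarter` property names and their `select` types are assumptions about your particular schema.

```typescript
// Assumed schema: "Status" and "Quarter" are select properties on the
// projects database. Adjust names and types to match your workspace.
export function behindScheduleFilter(quarter: string) {
  return {
    and: [
      { property: "Status", select: { equals: "Behind" } },
      { property: "Quarter", select: { equals: quarter } },
    ],
  };
}

// Run the filter as an exact database query via the Public API.
export async function behindProjects(databaseId: string, token: string) {
  const res = await fetch(
    `https://api.notion.com/v1/databases/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ filter: behindScheduleFilter("Q2 2026") }),
    }
  );
  const { results } = await res.json();
  return results; // exact matching rows, not an inferred summary
}
```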

    That’s the difference between AI that helps you find things and AI that actually knows things. Notion’s database architecture is what makes the second kind possible at scale, without hallucination, without retrieval errors, without the AI making up a project that doesn’t exist.

    The Honest Weakness: The 30-Second Wall

    Here’s what you only learn by actually building inside the alpha — and we did.

    Notion Workers runs in a 30-second sandbox with 128MB of memory. Each Worker is created through the Notion control panel, taking 3–5 minutes to spin up. Network access is limited to an approved domain allowlist. Storage is ephemeral — nothing persists between runs. These aren’t theoretical constraints. They’re the real walls you hit when you try to move serious automation workloads into Notion.

    We were in the Workers alpha. We built Workers. We set up custom agents. And we stress-tested the sandbox deliberately — forcing failures to find the exact break points, then running production workloads at 60% of the known ceiling as a stability rule. That’s the only honest way to operate inside a system this constrained: know where it breaks before you depend on it.

    What we found changed our architecture. Heavy automations — multi-site WordPress SEO optimization passes across 18 sites, content pipelines, image generation, WP-CLI batch operations — couldn’t live inside Notion Workers. They’re multi-minute jobs, not 30-second jobs. Moving them to Notion would have meant engineering workarounds that added complexity without adding reliability.

    So instead of moving Cowork automations into Notion as we originally planned, we moved them to Google Cloud Run. The notion-deep-extractor (crawls the workspace, extracts structured knowledge, logs to the Second Brain database — runs 3x daily) and the notion-maintenance bundle (archive sweeper, stale work detector, content guardian — runs daily at 6am UTC) all live on Cloud Run now, with Cowork scheduled tasks paused. The 18-site WordPress optimizer running Tuesday? Cloud Run. Not Notion.

    This isn’t a knock on Notion. It’s an architectural reality that every builder needs to understand before they commit workloads. The right pattern — the one we’re now using and that Notion’s own documentation points toward — is Notion Workers as the trigger layer, Cloud Run as the execution layer. A Worker fires a signed POST to a Cloud Run endpoint, returns immediately (well under 30 seconds), Cloud Run runs the heavy job, then writes results back to a Notion database via the Public API. You get Notion as the orchestration and visibility layer without hitting the sandbox wall.
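
    The Cloud Run half of that pattern, sketched under the same assumptions: the HMAC check mirrors whatever signing convention your Worker uses (it is not a Notion-defined protocol), and `Result` is a hypothetical rich_text property on the triggering database.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify the Worker's signature before doing any work. Constant-time
// comparison guards against timing attacks on the shared secret.
export function verifySignature(body: string, sig: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(body).digest("hex");
  if (sig.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}

// After the multi-minute job finishes, write the outcome back to the
// triggering page via the Public API so it shows up in the database.
export async function writeResult(pageId: string, token: string, summary: string) {
  await fetch(`https://api.notion.com/v1/pages/${pageId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      properties: {
        Result: { rich_text: [{ text: { content: summary } }] },
      },
    }),
  });
}
```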

    That hybrid is genuinely powerful. But it requires infrastructure that most small teams don’t have. If you don’t have a Cloud Run setup, a service account, and the deployment knowledge to wire this together, the 30-second limit will stop you cold on anything more complex than a lightweight API call or a database update.

    Notion doesn’t own email. It connects to Gmail and Outlook. It doesn’t own a calendar — it integrates with yours. It doesn’t have a mobile OS or browser. Those gaps matter less than the sandbox constraint does for real production workloads. The everything app story is real — but the execution layer has hard limits that require a hybrid architecture to work around, at least until Workers matures beyond its current beta constraints.

    Who Should Be Paying Attention Right Now

    If you’re an agency, a service business, a content operation, or any knowledge-work team that already uses Notion — or has been considering it — the May 13 Developer Platform announcement changes your calculus significantly.

    Custom Agents are available as an add-on for Business and Enterprise plans. Workers are free during the current beta period (billing starts August 11, 2026). The External Agents API is open now. This is the window to build before your competitors do.

    The teams that spend the next 90 days wiring up their Notion databases, building their first custom agents, and connecting their external data sources will have a compounding advantage that’s very hard to replicate in 2027. The institutional knowledge that feeds these agents — the project histories, the SOPs, the client databases — takes time to build. Starting now is the only strategy that works.

    The Bigger Picture: A Series on Who Wins the Everything App

    This is the third article in what’s becoming a series on a single question: who actually builds the everything app, and what does their path look like?

    Microsoft is building it through acquisitions and Copilot, stitching together LinkedIn, Azure, and the M365 suite. Google already owns the native stack — Gmail, Drive, Search, Android — and is trying to unify it through Gemini Enterprise and Workspace Studio after years of product fragmentation. Notion is building it from the database up, betting that structured data plus open agents beats document-first platforms with AI bolted on.

    None of them has won yet. All three bets are live. The winner won’t be the company with the most features — it’ll be the one that earns enough trust to become the single place where your work actually lives.

    Notion’s database-first architecture is the most interesting bet of the three. It’s also the most fragile — dependent on integrations, constrained by not owning the OS or the inbox, limited by whatever Anthropic does with Claude pricing and capabilities. But if it works, it works in a way the others can’t easily copy. You can’t retrofit a database architecture onto a document platform. You have to start over.

    Microsoft and Google aren’t starting over. Notion never had to.

    Frequently Asked Questions

    What are Notion Custom Agents?

    Notion Custom Agents are AI teammates that handle repetitive tasks autonomously — answering FAQs, compiling status updates, automating workflows — triggered by schedules, database changes, or webhooks. They launched in February 2026 (Notion 3.3) and are available as an add-on for Business and Enterprise plans. Over 21,000 were built during the free trial period alone.

    What is Notion Workers?

    Notion Workers is a hosted cloud runtime for custom TypeScript code, introduced in alpha in March 2026 and fully launched with the Developer Platform on May 13, 2026. It powers database sync, agent tools, and webhook triggers — letting teams extend Notion to connect any service with an API, without running their own servers. Workers are free during the beta period through August 10, 2026.

    What AI model does Notion use?

    Notion runs on Anthropic’s Claude — specifically Claude Opus 4.7 as of the January 2026 update. This is different from Microsoft Copilot (which uses OpenAI’s GPT models) and Google Workspace (which uses the Gemini family). Notion’s choice of Claude reflects an emphasis on reliable, safe agentic behavior for workflows that have write access to business databases.

    What is the Notion External Agents API?

    The External Agents API, launched with Notion 3.5 on May 13, 2026, lets teams bring any AI agent — including ones built internally or from partners like Claude, Codex, and Decagon — directly into their Notion workspace. These external agents can read and write to Notion databases with full context about the team’s data.

    How is Notion different from Microsoft Copilot and Google Workspace AI?

    Notion is database-first. Every piece of information in Notion is structured, typed, and queryable data — not documents. This means Notion agents can run precise database queries against your actual organizational data rather than inferring structure from prose documents. For teams that need AI to reliably operate on business data (not just search and summarize), this architectural difference is significant.

    What are the real limitations of Notion Workers in the alpha?

    Notion Workers runs in a 30-second sandbox with 128MB of memory and ephemeral storage. Network access is limited to an approved domain allowlist. Workers are created via the Notion control panel (3–5 minutes each). Long-running jobs — content pipelines, multi-site operations, image generation — won’t fit. The recommended pattern for serious workloads is Notion Workers as the trigger layer firing a signed POST to an external execution environment (like Google Cloud Run), with results written back to Notion databases via the Public API.

  • Google Already Has the Everything App. The Question Is Whether They’ll Actually Build It.

    Microsoft gets credit for the “everything app” conversation because of Copilot’s marketing reach. But Google has quietly assembled something more complete, more native, and arguably more dangerous to every other productivity platform on earth — and most people haven’t connected the dots yet.

    Definition: Google’s “Everything Stack” The convergence of Google Workspace, Agentspace, Workspace Studio, NotebookLM, Google Search, Gmail, Calendar, Drive, Maps, Android, and the Gemini model family into a single AI-unified operating environment — where agents connect your data, automate your work, and surface what matters, without switching apps.

    Google Didn’t Need to Acquire Its Way Here

    Microsoft’s path to the everything app runs through acquisitions: LinkedIn ($26.2B), GitHub ($7.5B), Activision ($68.7B), and years of stitching Azure, Teams, and Bing into a coherent story. It’s impressive. It’s also fundamentally a construction project — building a unified platform out of parts that weren’t designed to work together.

    Google already owns the pieces natively. Gmail. Google Calendar. Google Drive. Google Docs, Sheets, and Slides. Google Search. Google Maps. Android. Chrome. YouTube. These aren’t acquisitions bolted onto a platform — they’re the platform. Over three billion people use Google Workspace tools. That install base isn’t a future bet; it’s the present reality.

    The question was never whether Google had the ingredients. The question was whether they’d ever actually bake the cake. In 2026, they finally are.

    What Google Just Shipped: The Pieces Coming Together

    At Google Cloud Next 2026, Google made moves that deserve more attention than they got.

    Workspace Studio launched to all Google Workspace domains on March 19, 2026. It’s the place to create, manage, and share AI agents that automate work across Workspace — no coding required. An end user can describe what they want in plain language (“every Friday, ping me to update my tracker”) and Gemini builds the agent. That’s not a developer feature. That’s a feature for your office manager, your sales coordinator, your operations lead.

    Workspace Intelligence is the connective tissue underneath. It’s a secure, dynamic system that understands the semantic relationships between your Docs, Slides, Gmail threads, active projects, collaborators, and your organization’s institutional knowledge — all in real time. Not indexed. Not cached. Live.

    Google Agentspace (now absorbed into the unified Gemini Enterprise Agent Platform as of Cloud Next 2026) brings together Gemini’s reasoning, Google-quality search, and enterprise data regardless of where it lives. Agents can connect to Google Drive, NotebookLM, and Google Group Chats and become experts on specific topics — delivering daily briefings, status updates, and research synthesis without anyone digging through months of documents.

    NotebookLM — Google’s AI research and synthesis tool — is now available as an out-of-the-box agent in Agentspace for enterprise users, with podcast-style audio summaries, enhanced privacy controls, and direct integration into the agent ecosystem. It’s the knowledge layer sitting on top of everything else.

    The AI Control Center, announced in May 2026 in the Admin console, gives IT and enterprise organizations visibility and governance over every agent and AI interaction touching Workspace data. For regulated industries, this is the feature that unlocks the whole stack.

    The Model Reality: Get This Right Before You Strategize

    Any honest conversation about Google’s AI strategy has to be anchored in what the models actually are — because the capabilities are moving fast and the marketing often lags the reality.

    As of mid-2026, Google’s current model family looks like this:

    • Gemini 3.1 Pro — Released February 19, 2026. The most capable model in the family. Scores 77.1% on ARC-AGI-2. Optimized for complex multi-step agentic workflows. This is the model powering the high-stakes enterprise use cases.
    • Gemini 2.5 Pro — The previous flagship, announced at Google I/O 2025. Still widely deployed in Vertex AI for enterprise. Excellent reasoning, very long context window.
    • Gemini 2.5 Flash — The speed/cost-efficiency model. Default model in the Gemini app. Generally available in Google AI Studio and Vertex AI. This is what most Workspace automation runs on day-to-day.
    • Gemini 2.5 Flash-Lite — The lightest, cheapest tier. For high-volume, low-complexity tasks like classification, routing, and summarization at scale.

    The architecture matters for strategy: Gemini 3.1 Pro handles reasoning-heavy agent tasks (complex research, multi-step decisions, agentic workflows), while Flash handles the volume work (daily digests, routine automation, quick lookups). The tiered model family is what makes an everything-app architecture economically viable — you don’t run your email summarizer on your most expensive model.
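
    In code, that tiering is just a routing decision. A hedged sketch: the model IDs are placeholders for whatever your platform exposes, and the rule itself is illustrative rather than anything Google prescribes.

```typescript
type Task =
  | { kind: "summarize" | "classify" | "route" } // high-volume work
  | { kind: "agentic"; steps: number };          // reasoning-heavy work

// Illustrative routing rule: volume tasks go to the cheap tier,
// multi-step agentic work goes to the flagship. IDs are placeholders.
export function pickModel(task: Task): string {
  if (task.kind === "agentic" && task.steps > 1) return "gemini-3.1-pro";
  return "gemini-2.5-flash";
}
```

    The email summarizer routes to Flash on every run; only the multi-step planning agent ever pays flagship prices.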

    What Google’s Everything Page Actually Looks Like Today

    Here’s what’s possible right now — not as a concept, but as actual configured Workspace behavior:

    • Your Gmail digest — Gemini in Gmail surfaces key threads, drafts replies, and flags action items before you open your inbox
    • Your Calendar intelligence — Meeting briefs pulled from your Drive documents, recent email threads with attendees, and relevant Docs — surfaced automatically before each event
    • Your Drive knowledge — NotebookLM agents synthesizing your team’s documents, project histories, and institutional knowledge into on-demand briefings
    • Your automation outputs — Workspace Studio agents running on schedule, pinging updates, moving data between Sheets and Docs, reporting on triggers
    • Your search layer — Google Search and Workspace Intelligence working together to answer business questions against both your internal data and the public web
    • Your news and signals — Gemini Enterprise surfacing industry news, competitor moves, and relevant content as part of a unified daily briefing

    The difference between this and the Microsoft vision is subtle but important: Google’s version requires almost no new infrastructure for most organizations. If you’re already on Google Workspace — and three billion people are — the agent layer sits on top of what you already use. The friction is configuration, not adoption.

    The Tension: Google’s Biggest Competitor Is Google’s Own Fragmentation

    Here’s where the opinion part comes in, because the facts alone don’t tell the whole story.

    Google has a well-documented history of building extraordinary tools and then failing to unify them. Google+. Google Wave. Google Inbox. Allo. Hangouts. The graveyard of Google products that almost became the everything app is long and sobering. The pattern is consistent: build something brilliant, run it in parallel with five other things, confuse the market, and eventually kill it.

    The 2026 rebranding — consolidating Vertex AI and Agentspace into the Gemini Enterprise Agent Platform — is either the sign that Google has finally learned its lesson about fragmentation, or it’s another reorganization that will look different again in 18 months. The cynical read is that Google Cloud Next announcements have promised unification before.

    The optimistic read — and I lean toward this one — is that the Gemini model family gives Google something it never had before: a single coherent AI backbone that every product can be rebuilt around. When your search, your email, your documents, your agents, and your developer platform all run on the same model family with the same context and the same API surface, unification becomes an engineering problem rather than a product vision problem. Engineering problems get solved.

    The A2A Protocol: The Move Nobody Talked About Enough

    One of the quieter announcements at Cloud Next 2026 was the Agent-to-Agent (A2A) protocol — Google’s open standard for allowing AI agents to communicate with each other across platforms and vendors. This is strategically significant in a way that’s easy to miss.

    If A2A gains adoption, the everything page doesn’t have to be Google’s proprietary walled garden. Your Workspace agents could communicate with agents from other platforms — your CRM, your project management tool, your industry-specific software. Google becomes the orchestration layer rather than the only layer. That’s a smarter long-term play than trying to own everything, and it sidesteps the antitrust concern that the Microsoft everything-app vision runs into head-on.

    What This Means for SMBs and Content Creators Right Now

    If you’re a small business running on Google Workspace — and most are — the everything-app infrastructure is closer than you think, and cheaper than you assume.

    Workspace Studio is included in Business Standard and above. Gemini in Gmail and Docs is rolling out across plans. NotebookLM Business is available as an add-on. The agent layer is not a future enterprise-only feature — it’s arriving in the same tools you’re already paying for.

    The businesses that will win the next three years are the ones that start treating their Google Workspace as an agent platform right now — connecting their data, building their automations, and training their teams to work alongside AI rather than around it.

    The everything page isn’t a product launch you wait for. It’s a configuration decision you make today.

    Google vs. Microsoft: Who Wins the Everything App Race?

    Honest answer: it’s not a race with one winner. The enterprise world will bifurcate along existing tool allegiances. Microsoft 365 shops will get their everything page through Copilot and Agent 365. Google Workspace shops will get theirs through Gemini Enterprise and Workspace Studio. The cold-start problem — who do you trust with all your connected data — will be solved by whoever already has your accounts.

    What’s different about Google’s position is the consumer crossover. Microsoft dominates enterprise desktops but has marginal consumer presence. Google lives on both sides — the same Gemini that runs your enterprise agent also runs in your personal Gmail, your Android phone, your Google search bar. The everything page, for Google users, won’t feel like a new product. It’ll feel like the thing you already use, finally doing what you always wished it would.

    That’s a powerful distribution advantage. And it’s one Microsoft, for all its enterprise strength, can’t easily replicate.

    Frequently Asked Questions

    What is Google Workspace Studio?

    Google Workspace Studio is Google’s no-code AI agent builder, launched to all Workspace domains on March 19, 2026. It lets any user create, manage, and share AI agents that automate work across Gmail, Docs, Sheets, Drive, and other Workspace apps — without writing code. Users describe what they want in plain language and Gemini builds the agent.

    What is Google Agentspace?

    Google Agentspace (now unified into the Gemini Enterprise Agent Platform as of Cloud Next 2026) is Google’s enterprise AI agent environment. It combines Gemini’s reasoning, Google-quality search, and enterprise data across Drive, NotebookLM, and Group Chats to give employees AI agents that understand their organization’s specific knowledge.

    What is the latest Google Gemini model in 2026?

    As of mid-2026, Gemini 3.1 Pro (released February 19, 2026) is Google’s most capable model, scoring 77.1% on ARC-AGI-2 and optimized for complex agentic workflows. Gemini 2.5 Flash is the default model for most consumer and business Workspace use cases, balancing speed and cost efficiency.

    What is Google’s A2A protocol?

    Agent-to-Agent (A2A) is Google’s open standard for AI agents to communicate across platforms and vendors, announced at Cloud Next 2026. It allows Workspace agents to interoperate with agents from other tools and platforms, positioning Google as an orchestration layer rather than a closed ecosystem.

    Do small businesses have access to Google’s AI agent features?

    Yes. Workspace Studio and Gemini features are included in Business Standard and higher tiers. NotebookLM Business is available as an add-on. Most of the agent infrastructure is arriving in existing Workspace plans, not as separate enterprise-only products.

  • Microsoft’s Everything App: Is Copilot Building the Unified AI Dashboard Nobody Asked For (But Everyone Needs)?

    What if every email, calendar event, LinkedIn notification, health metric, automation log, and business dashboard you care about lived on one page — organized by AI, updated in real time, and actually useful? That’s not a fever dream. It may already be Microsoft’s plan. And if it isn’t, someone needs to build it fast.

    Definition: The “Everything App” is a unified AI-powered platform that aggregates professional data, communications, scheduling, automation outputs, and personal metrics into a single intelligent interface, personalized per user and powered by connected APIs.

    The Observation That Started This

    A few days ago I noticed something odd: LinkedIn posts I was publishing were being reformatted into blocks of plain text instead of keeping their intended structure. My own agents couldn’t scrape LinkedIn the way I wanted them to. Anti-AI friction was everywhere on the platform.

    Then it hit me: Microsoft owns LinkedIn. Microsoft owns Bing. Microsoft is betting billions on Copilot. What if the formatting weirdness, the scraping blocks, the structured data changes — what if those aren’t bugs? What if they’re features of a beta program for AI information ingestion?

    Think about it differently. Imagine a Bing page — or a Copilot interface — that pulls in curated LinkedIn posts, your email threads, your calendar, your business process updates, your health watch data, your cloud automations, and your news feed. All of it, organized the way you think about your day. That’s not a stretch. That might be exactly where this is heading.

    Microsoft Is Already Building the Pieces

    Let’s be clear about what Microsoft has actually shipped and announced, because the pieces of this puzzle are already on the table.

    Microsoft 365 Copilot Wave 3 launched in early 2026 alongside Microsoft 365 E7: The Frontier Suite (generally available May 1, 2026). It combines productivity, identity, Copilot AI, and Agent 365 — a control plane for governing and scaling AI agents across an organization. The Agent 365 dashboard shows connections between agents, people, and data in real time. That’s not a search box. That’s an operational view of your entire professional world.

    Microsoft Graph is the connective tissue. It links LinkedIn professional data — profiles, company updates, job changes, content signals — directly into Copilot’s intelligence layer. When enterprise users ask Copilot about industry experts or companies, LinkedIn data feeds the answer. The integration is deeper than most people realize, and it’s been quietly expanding since Microsoft acquired LinkedIn for $26.2 billion in 2016.
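    To make the "connective tissue" claim concrete, here is a minimal sketch of what querying Microsoft Graph for meeting-prep context could look like. The `/me/events` and `/me/people` endpoints and the OData query parameters (`$select`, `$orderby`, `$top`, `$search`) are real Graph v1.0 API surface; the `meeting_brief_requests` wrapper, the field choices, and the idea that Copilot's brief-builder works this way are illustrative assumptions, not Microsoft's actual pipeline.

```python
# Sketch: the Graph REST calls a meeting-brief builder might issue.
# Endpoints and OData parameters are real Graph v1.0 API surface;
# the wrapper and field selections are illustrative assumptions.
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0"

def graph_url(resource: str, **params) -> str:
    """Build a Graph REST URL with OData query parameters ($select, $top, ...)."""
    query = urlencode({f"${k}": v for k, v in params.items()})
    return f"{GRAPH}{resource}?{query}" if query else f"{GRAPH}{resource}"

def meeting_brief_requests(attendee_email: str) -> list[str]:
    """Return the Graph calls a brief-builder would make for one attendee."""
    return [
        # Upcoming calendar events, soonest first
        graph_url("/me/events", select="subject,start,attendees",
                  orderby="start/dateTime", top=5),
        # People ranked by relevance to the signed-in user
        graph_url("/me/people", search=f'"{attendee_email}"', top=3),
    ]
```

    In a real integration these URLs would be fetched with an OAuth bearer token; the point is that the professional-context data the article describes is already one authenticated GET away.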

    Bing web cards in Copilot Chat now deliver rich, expandable information cards for weather, stocks, sports, news, and more. It’s a small feature on paper. But it signals the visual direction: Copilot as a personalized front page, not a search box.

    The new Agenda view in Windows — announced at Ignite 2025 — shows a chronological list of upcoming events unified with Calendar, surfaced directly in the Notification Center. Microsoft is literally building a unified daily view into the operating system itself.

    Why the Western Super App Never Happened — Until Now

    WeChat has over 1.3 billion monthly active users and handles messaging, payments, e-commerce, government services, and mini-programs all in one place. Western companies have been trying and failing to replicate that for a decade.

    The reasons for failure were real: U.S. data privacy law, antitrust scrutiny, platform fragmentation, and deeply entrenched single-purpose apps (Slack for chat, Stripe for payments, Google Calendar for scheduling) all made the super app strategy a dead end in the West.

    But AI changes the calculus. The old super app required you to rebuild every vertical inside one app. The new super app just needs one AI brain that can use everything outside it. You don’t need to own payments — you need Copilot to understand your Stripe data. You don’t need to own scheduling — you need Copilot to read your Google Calendar and act on it.

    As one analysis of the U.S. super app window put it: “The old super app was ‘one app with everything inside.’ The next super app might be ‘one AI brain that can use everything outside.’” Between 2025 and 2027, the U.S. enters what some analysts call its Super App window — a convergence of AI interfaces, behavioral compression, and digital sovereignty that’s distinctly Western in character.

    Microsoft is the only Western company with the asset stack to pull this off: an OS (Windows), a browser (Edge), a search engine (Bing), a professional network (LinkedIn), a productivity suite (Microsoft 365), a developer platform (GitHub + Azure), and now a unified AI layer (Copilot) stitching it all together.

    What the “Everything Page” Actually Looks Like

    Here’s the vision, stated plainly:

    • Your news — curated by AI based on your industry, interests, and saved searches
    • Your LinkedIn feed — surfaced selectively, not chronologically, based on what actually matters to your business goals
    • Your email digest — key threads, action items, follow-ups, flagged by AI before you even open your inbox
    • Your calendar — not just events, but prep briefs for each meeting pulled from your email, CRM, and LinkedIn history
    • Your automation outputs — Cloud Run jobs, Zapier logs, agent reports, anything your background systems are doing
    • Your health signals — fitness watch data, sleep scores, recovery metrics — not in a separate app, but contextualizing your day
    • Your business metrics — revenue, leads, content performance, wherever your data lives

    All of it on one page. All of it updated in real time. All of it organized by an AI that knows what you consider signal versus noise.

    That’s not sci-fi. The APIs for all of that exist today. The AI to synthesize it exists today. The missing piece is the will to build the page — and a platform with enough trust and install base to make it stick.
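    The core of such a page is less exotic than it sounds: pull items from many source APIs, score them for relevance, and rank. Here is a minimal sketch; the `FeedItem` fields, source names, and the idea that an upstream AI layer assigns the relevance score are assumptions for illustration, and real connectors would call the Gmail, Graph, LinkedIn, and fitness APIs.

```python
# Minimal "everything page" aggregator sketch. FeedItem fields, source
# names, and the relevance score (assumed to come from an upstream AI
# ranking layer) are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedItem:
    source: str          # "email", "calendar", "linkedin", "automation", ...
    title: str
    timestamp: datetime
    relevance: float     # 0..1, assigned upstream by an AI ranking layer

def build_page(items: list[FeedItem], limit: int = 10) -> list[FeedItem]:
    """Rank mixed-source items: relevance first, recency as the tiebreak."""
    ranked = sorted(items, key=lambda i: (-i.relevance, -i.timestamp.timestamp()))
    return ranked[:limit]
```

    Everything hard about the everything page lives in the connectors and the relevance model, not in this loop; that is exactly why distribution and trust, not technology, decide who ships it.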

    The LinkedIn Angle Nobody Is Talking About

    Here’s where my original observation gets more interesting. Microsoft has spent years sitting on one of the richest professional datasets on earth and doing relatively little with it compared to what’s possible. LinkedIn has 1 billion+ members, decades of career graph data, company relationship maps, content engagement signals — and it feeds directly into Microsoft Graph.

    Now that Copilot is deeply embedded in enterprise environments, LinkedIn data isn’t just a social feature — it’s a professional intelligence layer. When your Copilot brief for a sales call surfaces that your prospect just changed jobs, posted about a pain point, or follows a competitor — that’s LinkedIn data flowing through Microsoft Graph into your daily workflow.

    The scraping friction I noticed? It makes more sense when you consider that Microsoft may be actively working to make LinkedIn data more valuable inside its own ecosystem rather than letting third-party agents extract it freely. They’re not blocking AI — they’re channeling it through Copilot.

    The Risk: Nobody Wants One Company Holding All of This

    It would be dishonest not to acknowledge the obvious counterargument: this is a massive concentration of data and influence in one company’s hands.

    The reason WeChat works in China is partly cultural and partly because the regulatory environment permits it. U.S. antitrust law, GDPR-aligned state privacy rules, and growing public skepticism about big tech data practices all push against a single unified everything app.

    Microsoft’s bet is that enterprise trust — built through compliance features, security architecture, and the corporate IT relationship — gives them the permission that consumer platforms like Meta or X never earned. It’s a reasonable bet. It’s also one that regulators will watch closely.

    If Microsoft Doesn’t Build It, Someone Will

    The technology is not the bottleneck. Any serious developer with access to the right APIs could build a personal everything page today. Connect your Gmail, your LinkedIn (to the extent the API allows), your calendar, your fitness data, your cloud automation logs, and your analytics tools. Build a UI that surfaces what matters. Add an AI layer to summarize and prioritize.

    The bottleneck is distribution, trust, and the cold-start problem — nobody wants to connect all their accounts to something they’ve never heard of. That’s why Microsoft wins this race if they choose to run it. They already have the accounts. They already have the trust relationships. Copilot is already installed in hundreds of millions of enterprise seats.

    But if they don’t move fast enough, or if they build it only for enterprise and ignore the small business and creator class — that’s an opening. A focused, privacy-first, SMB-oriented everything page, built on open APIs, with no data lock-in? That’s a product worth building.

    What This Means for Your Content and AI Strategy Right Now

    Whether or not Microsoft delivers the everything app in the next 18 months, the direction of travel is clear. Professional information is consolidating around AI interfaces. LinkedIn content is increasingly flowing into Copilot’s intelligence layer. Bing-based AI answers are pulling from structured, authoritative content.

    For businesses and content creators, that means:

    • Your LinkedIn presence is now AI training data. What you post, how you structure it, and what entities you’re associated with affects how Copilot describes you to enterprise users asking about your industry.
    • Your website content needs to be AI-readable. Structured data, clear entity signals, authoritative citations — these are no longer optional for AI search visibility.
    • Your automation stack is a competitive advantage. The businesses that have already connected their tools via APIs will be first in line when the everything page actually ships.
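    The "AI-readable content" point above has a concrete, well-established mechanism: schema.org structured data embedded as JSON-LD. The `@context`, `@type`, `headline`, `author`, and `datePublished` names below are real schema.org vocabulary; the helper function and the placeholder values are illustrative.

```python
# Sketch: emitting schema.org Article markup as JSON-LD, the standard way
# to give AI crawlers explicit entity signals. The vocabulary ("@context",
# "@type", "author", "datePublished") is real schema.org; the helper and
# values are illustrative.
import json

def article_jsonld(headline: str, author: str, published: str) -> str:
    """Return a <script> tag carrying JSON-LD Article markup for a web page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,   # ISO 8601 date
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'
```

    Dropping a tag like this into a page's `<head>` is one of the cheapest ways to make content legible to AI search layers today.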

    The everything app isn’t coming. It’s arriving in pieces, quietly, through products you already use. The question is whether you’re positioned when the pieces snap together.

    Frequently Asked Questions

    Is Microsoft building an “everything app” like WeChat?

    Microsoft hasn’t announced a single “everything app” product, but the pieces — Copilot, Microsoft Graph, LinkedIn data integration, Agent 365, and Bing web cards — suggest a unified AI-powered dashboard is the strategic direction. Whether it arrives as one product or an ecosystem of connected tools remains to be seen.

    Why did Western super apps fail where WeChat succeeded?

    U.S. data privacy regulations, antitrust scrutiny, platform fragmentation, and deeply entrenched single-purpose apps all prevented a WeChat-style super app from emerging in the West. AI changes the equation by enabling one system to connect and synthesize data across many separate apps without needing to own them.

    How does LinkedIn data connect to Microsoft Copilot?

    Microsoft Graph links LinkedIn’s professional data — profiles, company updates, career changes, content signals — directly into Copilot’s intelligence layer. Enterprise Copilot users receive LinkedIn-informed context in sales briefings, meeting prep, and professional research queries.

    What is Microsoft 365 E7 and what does it include?

    Microsoft 365 E7 (The Frontier Suite, GA May 1, 2026) combines Microsoft 365 E5 for secure productivity, Entra Suite for identity and access, Microsoft 365 Copilot for AI-in-workflow, and Agent 365 as the control plane to govern and scale AI agents across an organization.

    What can small businesses do today to prepare for AI-unified platforms?

    Connect your tools via APIs now, optimize your LinkedIn presence for AI entity recognition, publish structured authoritative content for AI search visibility, and build automation stacks that produce clean data outputs — these investments compound in value as AI platforms consolidate professional information.