Tag: AI Productivity

  • Perplexity AI’s Everything App Bet: Trust Is the Moat Nobody Else Is Building

    Nobody expected the answer engine to build a browser. Nobody expected the search startup to drop advertising entirely to protect user trust. Nobody expected a company founded in 2022 to reach a $21 billion valuation within four years. Perplexity AI is the everything-app candidate nobody saw coming — and its path is unlike any other company's in this series.

    Where Perplexity Sits in This Series

    This is the sixth piece in our everything-app series. We’ve covered Microsoft, Google, Notion, the everything database frame, and OpenAI. Perplexity is the dark horse — smaller than all of them, faster-moving than most, and making bets that the incumbents aren’t willing to make.

    The Numbers Nobody Expected

    Start with the trajectory because it reframes everything else. Perplexity was valued at $121 million in April 2023. By early 2026 that number is $21.2 billion — a roughly 175x increase in under three years. Total funding raised exceeds $1.5 billion, from Nvidia, Jeff Bezos, SoftBank, IVP, Accel, and Databricks. Monthly active users crossed 45 million. The company is processing 170 million global visitors per month. ARR climbed from $35 million in mid-2024 to over $450 million annualized by March 2026.

    Those aren’t hype numbers. ARR of $450M annualized on 45M users, with 800% year-over-year growth, signals genuine product-market fit. People are paying for this. Repeatedly. That matters for the everything-app thesis in a way that a free-tier user count doesn’t.

    The Trust Bet That Changes the Game

    In February 2026, Perplexity made a decision that every other company in this series should take note of: they dropped advertising entirely and moved to a subscription-first model. The stated reason was simple — leadership said the move was intended to preserve user trust in the answer engine, prioritizing objective results over ad revenue.

    Think about what that means as a strategic signal. Google’s entire business model is advertising. Microsoft’s Bing is ad-supported. Every other search surface is optimized, at least partially, for ad revenue. Perplexity looked at that landscape and decided that trust — verifiable, uncompromised trust in the answer — was worth more than ad dollars.

    For an everything app, that’s a profound differentiator. The everything app, by definition, will know more about you than any individual tool currently does. It will see your projects, your research, your questions, your habits. The company that earns the right to that level of access is the one that can credibly say: we are not monetizing your data or your attention. We are working for you.

    Perplexity made that bet explicitly. Nobody else has.

    What Perplexity Has Actually Built

    The product expansion from “AI search” to “everything app candidate” happened fast enough that most people still think of Perplexity as a search box. Here’s what it actually is in mid-2026.

    Perplexity Computer — launched in early 2026 and available on the Max plan ($200/month) — is an autonomous agent that executes complex workflows on your behalf. It uses 19 different AI models, picks the best model for each step of a task, and creates subagents to handle parallel parts of a workflow simultaneously. That’s not a search enhancement. That’s an operating system for work — one that orchestrates multiple frontier models the way a conductor runs an orchestra, without asking you which instrument should play which note.
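
    Perplexity hasn’t published Computer’s internals, so the mechanics are best understood as a pattern. A minimal sketch of the subagent fan-out described above: split a workflow into independent steps, run each as a parallel subagent, and merge the results. Every name here is invented for illustration.

```typescript
// Hypothetical sketch of the subagent fan-out pattern. None of these
// names are Perplexity APIs; they only illustrate the shape.

type Step = { id: string; run: () => Promise<string> };

// Run independent steps concurrently ("subagents"), then merge the
// results keyed by step id so the final synthesis step sees them all.
async function fanOut(steps: Step[]): Promise<Record<string, string>> {
  const results = await Promise.all(
    steps.map(async (s) => [s.id, await s.run()] as const),
  );
  return Object.fromEntries(results);
}
```

    The parallelism is the point: a flight search and a hotel search don’t depend on each other, so an orchestrator can run both at once and hand the merged result to whichever model does the final write-up.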

    Comet — Perplexity’s AI-native browser built on Chromium — launched on Windows and macOS in July 2025, came to iOS in March 2026, and is free on all platforms. It looks like Chrome. But it has an AI assistant built into every page — in-page research, page summarization, autonomous multi-step tasks. It books flights, manages email, fills forms, and translates pages automatically. Comet is the browser as an agent, not a browser with a chatbot bolted on the side.

    Deep Research and Model Council — available now — let you run three frontier models simultaneously, compare outputs, and synthesize a higher-confidence answer. Deep Research is powered in part by Claude Opus 4.6 — Anthropic’s previous flagship model, accessed through Perplexity’s $750M Microsoft Azure commitment, which gives them access to OpenAI, Anthropic, and xAI systems. (Note: Anthropic’s current flagship as of April 2026 is Claude Opus 4.7, with Claude Mythos Preview beyond that — Perplexity’s model routing will update as newer versions become available through the Azure pipeline.) Model Council is the first mainstream consumer feature that makes multi-model reasoning accessible without requiring you to run models yourself.

    Perplexity Connectors let users search across linked file systems — Google Drive natively — for answers that pull from both cloud files and the live web. This is the beginning of the enterprise data layer: Perplexity as a unified search surface across your internal knowledge and the public internet simultaneously.

    Commerce integration with PayPal in conversational search means Perplexity has a purchase flow built into the answer layer. You don’t search for a product, click through to a store, and buy it there. You ask, get an answer with citations, and complete the purchase in the same conversational thread. Amazon took 20 years to get search and commerce this close together. Perplexity did it in three.

    The 19-Model Architecture: Why This Is Different

    The Perplexity Computer’s 19-model architecture deserves its own section because it represents a genuinely different philosophy from every other everything-app candidate.

    Microsoft runs Copilot on OpenAI’s models. Google runs Workspace on Gemini. OpenAI runs ChatGPT on GPT-5.5. Notion runs on Claude. Each company has picked a model family and is building their everything app around it. There’s logic to this — it simplifies the architecture, creates pricing leverage, and ensures consistency.

    Perplexity’s bet is the opposite: model neutrality. They use the best model for each task, from whichever provider produces it. Need deep reasoning? Pick o3. Need fast synthesis? Pick Claude Flash. Need computer use? Pick GPT-5.5 Operator. The user doesn’t choose and doesn’t need to know. The system routes to the best tool automatically.

    This is the “everything database” principle applied to models instead of data. Instead of betting on one model family, Perplexity is building the orchestration layer above all of them. If a new model from Mistral or xAI or any other provider becomes best-in-class for a specific task, Perplexity can route to it without rebuilding their product. The platform compounds regardless of which model wins any individual benchmark.
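
    Routing is the whole product claim here, so it is worth seeing how small the core of such a layer can be. Perplexity hasn’t published its router; the sketch below is a guess at the shape, with the task kinds and model identifiers standing in for whatever the real routing table contains.

```typescript
// Hypothetical routing table. The model names are illustrative
// stand-ins, not Perplexity's actual (unpublished) routing logic.

type TaskKind = "deep-reasoning" | "fast-synthesis" | "computer-use";

const ROUTES: Record<TaskKind, string> = {
  "deep-reasoning": "o3",
  "fast-synthesis": "claude-flash",
  "computer-use": "gpt-5.5-operator",
};

// The orchestration layer's core move: the user states a task, the
// router picks the provider. Re-pointing a task kind at a new
// best-in-class model is a one-line change above the product.
function routeModel(task: TaskKind): string {
  return ROUTES[task];
}
```

    The strategic claim lives in that table: swapping `"o3"` for a newer model changes nothing upstream, which is exactly why the platform compounds regardless of which model wins a given benchmark.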

    The Honest Weakness: No Data Moat, No OS, No Inbox

    Perplexity doesn’t own an operating system. They don’t own an email platform. They don’t have a professional network. Their Connectors are real but limited compared to the native data access Microsoft and Google have by default. Their user base of 45 million, while impressive for a three-year-old company, is dwarfed by ChatGPT’s 500 million and Google’s three billion Workspace users.

    The $750M Azure commitment — while providing access to frontier models — also creates a dependency that model-owning competitors don’t have. If Microsoft decides Azure pricing changes, or if access to specific models is restricted, Perplexity’s multi-model architecture gets more expensive and more fragile simultaneously.

    The Max plan at $200/month for Perplexity Computer is expensive relative to the alternatives. Enterprise adoption sits at 11% of organizations using generative AI: real, but still a minority position. The path from answer engine to everything app requires trust-building and behavioral habit formation at a scale Perplexity hasn’t yet demonstrated for enterprise workloads.

    Why Perplexity Might Win Anyway

    Here’s the contrarian case, and it’s more credible than it sounds.

    The everything app that wins will be the one people trust with their most important questions. Not their files — their questions. The difference between a search engine and an everything app is that an everything app is the place you go when you genuinely don’t know what to do next. When you’re trying to figure out a business problem. When you need to research something critical. When you’re making a decision that matters.

    Perplexity is building specifically for that moment. Cited answers, not generated hallucinations. Subscription trust, not ad-influenced results. Multi-model consensus through Model Council, not single-model confidence. Deep Research for the questions that take hours, not seconds. They are optimizing for the highest-stakes use cases in knowledge work, not the highest-volume use cases.

    If your everything app is defined by “where I go when I need to know something important” — Perplexity has a credible claim on that moment that no other company in this series is directly competing for. Microsoft is competing for enterprise workflow. Google is competing for the native stack. OpenAI is competing for behavioral habit. Perplexity is competing for epistemic trust. That’s a different race.

    How Perplexity Connects to Your Notion Everything Database

    Perplexity’s Connectors currently support Google Drive natively, with more file-system connections on the enterprise roadmap. Via the Sonar API — Perplexity’s developer API for embedding answer-engine capabilities in external applications — you can build a bridge between Perplexity’s research layer and your Notion database structure.

    The practical architecture: Perplexity handles the live-web research and synthesis layer (the questions where you need current, cited, real-world information). Your Notion everything database stores the structured outputs — the decisions made, the research conclusions, the action items triggered. A Notion Worker fires the Perplexity query via the Sonar API, receives the response, and writes the structured result back to the relevant database row. Perplexity becomes your research engine. Notion becomes the memory that persists what you learned.
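
    As a concrete sketch of that bridge, here are the two payloads involved, reduced to pure functions. The endpoint shape and the “sonar” model name follow Perplexity’s public API conventions, but verify against the current docs; the Notion property names (“Research Conclusion”, “Sources”) are placeholders for whatever your database schema actually uses.

```typescript
// Sketch of the Sonar-to-Notion bridge, reduced to the two payloads.
// Property names below are assumptions about your own Notion schema.

function buildSonarRequest(question: string) {
  return {
    model: "sonar",
    messages: [{ role: "user", content: question }],
  };
}

// Write the cited answer back to the database row that triggered it.
function buildNotionUpdate(pageId: string, answer: string, citations: string[]) {
  return {
    url: `https://api.notion.com/v1/pages/${pageId}`,
    body: {
      properties: {
        "Research Conclusion": {
          rich_text: [{ text: { content: answer } }],
        },
        "Sources": {
          rich_text: [{ text: { content: citations.join("\n") } }],
        },
      },
    },
  };
}
```

    A Worker would wrap these in two `fetch` calls: POST the first payload to Perplexity, then PATCH the second back to Notion with the answer and its citations.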

    That’s the hybrid that makes each tool better than it would be alone — and it’s the kind of architecture that only becomes possible when you stop asking which platform wins and start asking which platforms work best together.

    Frequently Asked Questions

    What is Perplexity Computer?

    Perplexity Computer is an autonomous AI agent launched in early 2026, available on the Max plan ($200/month). It uses 19 different AI models, routing each step of a task to the best available model and creating parallel subagents for complex workflows. It represents Perplexity’s most direct move toward an AI operating system for knowledge work.

    What is the Comet browser?

    Comet is Perplexity’s AI-native browser built on Chromium, launched on Windows and macOS in July 2025 and iOS in March 2026. It’s free on all platforms. It builds an AI assistant into every page — summarizing content, conducting in-page research, and executing multi-step tasks like booking flights, managing email, and filling forms autonomously.

    Why did Perplexity drop advertising?

    In February 2026, Perplexity discontinued its AI-integrated advertising strategy and moved to a subscription-first model. Leadership stated the decision was made to preserve user trust in the answer engine — prioritizing objective, uninfluenced results over ad revenue. This positions Perplexity as the only major AI search platform explicitly working for the user rather than for advertisers.

    What is Perplexity’s Model Council?

    Model Council lets users run three frontier AI models simultaneously, compare their outputs, and synthesize a higher-confidence answer. Combined with Deep Research (powered in part by Claude Opus 4.5/4.6 via Perplexity’s Azure access), it makes multi-model reasoning accessible without requiring users to choose or manage individual models.

    What is the Perplexity Sonar API?

    The Sonar API is Perplexity’s developer API for embedding answer-engine capabilities — cited, real-time web research — into external applications. It’s the integration layer for connecting Perplexity’s research capabilities to systems like Notion databases, CRMs, or custom workflows via Notion Workers or other trigger architectures.

  • Notion’s Database-First Bet: Why the Everything App Might Be Built on a Spreadsheet, Not a Document

    Microsoft is stitching together an everything app from acquisitions. Google is trying to unify a native stack it keeps fragmenting. Notion is doing something different — and arguably more interesting. It’s building the everything app from the database up, and it just made its most important move yet.

    Definition: The Database-First Everything App

    An AI-powered workspace where every piece of information — tasks, projects, docs, contacts, data — lives in a structured, queryable database, and agents can read, write, reason over, and act on that data autonomously. The database isn’t the backend. It’s the interface.

    Yesterday Changed Everything for Notion

    On May 13, 2026 — yesterday — Notion shipped version 3.5 and announced their full Developer Platform in a livestreamed product event. The tech press covered it as an AI agent story. They weren’t wrong, but they missed the bigger frame.

    Notion didn’t just add agents. They introduced a new primitive called Workers — a hosted runtime for custom code that lets teams extend Notion without running their own servers. Database sync, agent tools, and webhook triggers all run through Workers. They launched the External Agents API, allowing any agent — ones you built, or ones from Claude, Codex, Decagon, and other partners — to work natively inside your Notion workspace. And they opened a developer platform that lets teams connect AI agents, external data sources, and custom code directly into their workspace.

    Taken individually, these are nice product updates. Taken together, they’re an orchestration play. Notion is positioning itself not as a note-taker with AI features bolted on, but as the hub where people, agents, and data collaborate across every tool a team uses.

    The Database Advantage Nobody Else Has

    Here’s the thing that separates Notion from every other everything-app candidate — including Microsoft and Google.

    Both Microsoft 365 and Google Workspace are document-first platforms. Their fundamental unit of work is a file: a Word document, a Google Doc, a PowerPoint, a Sheet. Files are great for humans to read. They’re terrible for AI to reason over at scale. You can’t ask an AI agent to “find every project where the status is blocked and the deadline is this week” across a folder of Word documents and get a reliable answer.

    Notion’s fundamental unit is a database. Every page can be a database row. Every property is structured, queryable, filterable data. When Notion AI looks at your workspace, it doesn’t see a pile of documents — it sees a relational knowledge graph. Tasks have statuses. Projects have owners and deadlines. Contacts have properties. Everything is connected, typed, and queryable.

    That’s not a feature difference. That’s an architectural difference. And it’s why Notion’s agents can do things that Copilot and Gemini agents fundamentally struggle with: operate reliably on your actual organizational data, not summaries of your documents.

    The Agent Timeline: Faster Than Anyone Expected

    Notion’s agent rollout has moved at a pace that’s easy to underestimate if you haven’t been watching closely. Here’s the actual timeline:

    • September 18, 2025 — Notion 3.0: Agents. First AI agents launch. Autonomous data analysis and task automation. The starting gun.
    • January 20, 2026 — Notion 3.2. Mobile AI, new model support, people directory. Agents go everywhere, not just desktop.
    • February 24, 2026 — Notion 3.3: Custom Agents. Users can build their own agents from scratch. Over 21,000 custom agents built in the first free trial period alone. Notion reported 2,800 agents running 24/7 internally at Notion itself.
    • March 2026. Workers introduced in alpha — a TypeScript-based framework for agents to talk to any service with an API. The coding layer for power users.
    • April 14, 2026 — Notion 3.4. Calendar and inbox connectors. Notion AI can now schedule meetings and draft emails from inside your workspace.
    • May 5, 2026. Custom Agent admin controls for enterprise — workspace-level credit limits, governance tools, compliance features.
    • May 13, 2026 — Notion 3.5: Developer Platform. External Agents API, Workers out of alpha, database sync with no servers, full developer ecosystem launched.

    That’s eight months from first agent launch to full developer platform. For context, Microsoft spent years building Azure OpenAI integration before Copilot reached feature parity with what Notion shipped in less than a year.

    What the Notion Everything App Actually Looks Like Today

    This isn’t theoretical. Here’s what a team running on Notion can configure right now:

    • Your project data, always current. Databases synced from Slack, Google Drive, GitHub, Jira, Microsoft Teams, Salesforce, and Box — all flowing into Notion databases in real time, powered by Workers. No manual updates. No stale spreadsheets.
    • Agents watching your work. Custom agents triggered by database changes, schedules, or webhooks — compiling status updates, flagging blocked tasks, escalating overdue items, answering team FAQs.
    • Your inbox and calendar inside your workspace. Connect Gmail or Outlook and your calendar; Notion AI can schedule meetings and draft emails without leaving the tool your work already lives in.
    • External agents working in your context. Claude, Codex, Decagon — agents you’ve built yourself via the External Agents API — all operating against your Notion databases with full context. Not generic AI. AI that knows your specific data.
    • Plan Mode for complex operations. Before an agent makes large changes to your databases or pages, it stops, asks clarifying questions, and builds a plan for your approval. This is the governance layer that makes AI trustworthy in a business context.
    • Your institutional knowledge, always accessible. Every decision, every project history, every team document — structured and queryable by agents that can synthesize across your entire knowledge base on demand.
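
    To make the “agents watching your work” idea concrete, here is the core of an overdue-task sweep as a pure function: the kind of check a custom agent could run on every database change or on a schedule. The Task shape and property values are assumptions about a typical schema, not Notion’s API types.

```typescript
// A Worker-style sweep, reduced to pure logic so the trigger
// mechanics (webhooks, schedules) stay out of the way. The property
// names and status values are assumptions about your own schema.

type Task = { name: string; due: string; status: string }; // due: ISO date

// Flag tasks past their deadline that nobody has closed out.
// ISO date strings compare correctly as plain strings.
function findOverdue(tasks: Task[], today: string): string[] {
  return tasks
    .filter((t) => t.status !== "Done" && t.due < today)
    .map((t) => t.name);
}
```

    An agent wired to this check would query the tasks database, run the sweep, and escalate anything in the returned list — no prose inference involved at any step.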

    The Model Behind It: Claude Opus 4.7

    Unlike Microsoft (Copilot runs on GPT-4o and Azure OpenAI) and Google (Gemini family), Notion is built on Anthropic’s Claude. As of the January 2026 update, Notion runs Claude Opus 4.7 — Anthropic’s most capable model at the time of release — for its AI features and agent reasoning.

    This is a strategic choice worth examining. Claude is specifically designed with a focus on reliability, honesty, and safe behavior in agentic contexts — qualities that matter enormously when an AI agent has write access to your company’s databases. Anthropic’s Constitutional AI training approach was built for exactly the kind of autonomous, long-running agent work that Notion is deploying.

    The Notion + Claude combination isn’t just a vendor relationship. It’s an architectural alignment: a database-first workspace built on a model specifically designed for trustworthy agentic behavior. That’s a more coherent stack than either Microsoft or Google has assembled, where the AI model and the productivity platform were developed independently and integrated later.

    Why “Database First” Beats “Document First” for the Everything App

    Let’s make this concrete with a comparison most teams will recognize.

    Ask Microsoft Copilot: “Which of our client projects are behind schedule this quarter?” Copilot will search your emails, scan your SharePoint documents, and produce a reasonable summary — but it’s reading prose, inferring structure, and hoping the documents are up to date. The answer is a best-effort synthesis, not a query result.

    Ask a Notion agent the same question: it runs a database filter. Status = Behind. Quarter = Q2 2026. It returns an exact list in under a second, with links to every project, the responsible person, and the last update — because that data is structured. The agent didn’t infer anything. It read typed data.

    That’s the difference between AI that helps you find things and AI that actually knows things. Notion’s database architecture is what makes the second kind possible at scale, without hallucination, without retrieval errors, without the AI making up a project that doesn’t exist.
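
    The query-versus-inference point becomes obvious when you write the query down. This is the “behind schedule this quarter” question expressed as a request body in the shape of Notion’s public database-query API; the property names (“Status”, “Quarter”) and their select values are assumptions about your particular schema.

```typescript
// The "behind schedule this quarter" question as a query body in the
// shape of Notion's database-query API. Property names are placeholders
// for your own schema.

function behindScheduleFilter(quarter: string) {
  return {
    filter: {
      and: [
        { property: "Status", select: { equals: "Behind" } },
        { property: "Quarter", select: { equals: quarter } },
      ],
    },
  };
}
```

    POSTing that body to a database-query endpoint returns exact rows, which is the whole contrast with a document-first platform: there is no prose to infer structure from, because the structure is the data.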

    The Honest Weakness: The 30-Second Wall

    Here’s what you only learn by actually building inside the alpha — and we did.

    Notion Workers runs in a 30-second sandbox with 128MB of memory. Each Worker is created through the Notion control panel, taking 3–5 minutes to spin up. The network is limited to an approved domain allowlist. Storage is ephemeral — nothing persists between runs. These aren’t theoretical constraints. They’re the real walls you hit when you try to move serious automation workloads into Notion.

    We were in the Workers alpha. We built Workers. We set up custom agents. And we stress-tested the sandbox deliberately — forcing failures to find the exact break points, then running production workloads at 60% of the known ceiling as a stability rule. That’s the only honest way to operate inside a system this constrained: know where it breaks before you depend on it.

    What we found changed our architecture. Heavy automations — multi-site WordPress SEO optimization passes across 18 sites, content pipelines, image generation, WP-CLI batch operations — couldn’t live inside Notion Workers. They’re multi-minute jobs, not 30-second jobs. Moving them to Notion would have meant engineering workarounds that added complexity without adding reliability.

    So instead of moving Cowork automations into Notion as we originally planned, we moved them to Google Cloud Run. The notion-deep-extractor (crawls the workspace, extracts structured knowledge, logs to the Second Brain database — runs 3x daily) and the notion-maintenance bundle (archive sweeper, stale work detector, content guardian — runs daily at 6am UTC) all live on Cloud Run now, with Cowork scheduled tasks paused. The 18-site WordPress optimizer running Tuesday? Cloud Run. Not Notion.

    This isn’t a knock on Notion. It’s an architectural reality that every builder needs to understand before they commit workloads. The right pattern — the one we’re now using and that Notion’s own documentation points toward — is Notion Workers as the trigger layer, Cloud Run as the execution layer. A Worker fires a signed POST to a Cloud Run endpoint, returns immediately (well under 30 seconds), Cloud Run runs the heavy job, then writes results back to a Notion database via the Public API. You get Notion as the orchestration and visibility layer without hitting the sandbox wall.
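
    The handoff itself is small. Here is a sketch of the signed POST a Worker might fire at the Cloud Run endpoint, using an HMAC over the job payload so the receiver can verify the sender. The header name and the HMAC scheme are our choices for illustration, not anything Notion or Google mandates.

```typescript
import { createHmac } from "node:crypto";

// The trigger-layer handoff: sign the job payload so the Cloud Run
// endpoint can verify it came from your Worker. Header name and
// signing scheme are illustrative conventions, not platform APIs.

function signJob(secret: string, payload: object) {
  const body = JSON.stringify(payload);
  const signature = createHmac("sha256", secret).update(body).digest("hex");
  return {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-job-signature": signature,
    },
    body,
  };
}
```

    The Worker calls `fetch(cloudRunUrl, signJob(secret, job))` and returns well inside the 30-second window; Cloud Run recomputes the HMAC over the received body, rejects mismatches, runs the long job, and writes results back to Notion via the Public API.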

    That hybrid is genuinely powerful. But it requires infrastructure that most small teams don’t have. If you don’t have a Cloud Run setup, a service account, and the deployment knowledge to wire this together, the 30-second limit will stop you cold on anything more complex than a lightweight API call or a database update.

    Notion doesn’t own email. It connects to Gmail and Outlook. It doesn’t own a calendar — it integrates with yours. It doesn’t have a mobile OS or browser. Those gaps matter less than the sandbox constraint does for real production workloads. The everything app story is real — but the execution layer has hard limits that require a hybrid architecture to work around, at least until Workers matures beyond its current beta constraints.

    Who Should Be Paying Attention Right Now

    If you’re an agency, a service business, a content operation, or any knowledge-work team that already uses Notion — or has been considering it — the May 13 Developer Platform announcement changes your calculus significantly.

    Custom Agents are available as an add-on for Business and Enterprise plans. Workers are free during the current beta period (billing starts August 11, 2026). The External Agents API is open now. This is the window to build before your competitors do.

    The teams that spend the next 90 days wiring up their Notion databases, building their first custom agents, and connecting their external data sources will have a compounding advantage that’s very hard to replicate in 2027. The institutional knowledge that feeds these agents — the project histories, the SOPs, the client databases — takes time to build. Starting now is the only strategy that works.

    The Bigger Picture: A Series on Who Wins the Everything App

    This is the third article in an emerging pattern I’ve been thinking through: who actually builds the everything app, and what does their path look like?

    Microsoft is building it through acquisitions and Copilot, stitching together LinkedIn, Azure, and the M365 suite. Google already owns the native stack — Gmail, Drive, Search, Android — and is trying to unify it through Gemini Enterprise and Workspace Studio after years of product fragmentation. Notion is building it from the database up, betting that structured data plus open agents beats document-first platforms with AI bolted on.

    None of them has won yet. All three bets are live. The winner won’t be the company with the most features — it’ll be the one that earns enough trust to become the single place where your work actually lives.

    Notion’s database-first architecture is the most interesting bet of the three. It’s also the most fragile — dependent on integrations, constrained by not owning the OS or the inbox, limited by whatever Anthropic does with Claude pricing and capabilities. But if it works, it works in a way the others can’t easily copy. You can’t retrofit a database architecture onto a document platform. You have to start over.

    Microsoft and Google aren’t starting over. Notion never had to.

    Frequently Asked Questions

    What are Notion Custom Agents?

    Notion Custom Agents are AI teammates that handle repetitive tasks autonomously — answering FAQs, compiling status updates, automating workflows — triggered by schedules, database changes, or webhooks. They launched in February 2026 (Notion 3.3) and are available as an add-on for Business and Enterprise plans. Over 21,000 were built during the free trial period alone.

    What is Notion Workers?

    Notion Workers is a hosted cloud runtime for custom TypeScript code, introduced in alpha in March 2026 and fully launched with the Developer Platform on May 13, 2026. It powers database sync, agent tools, and webhook triggers — letting teams extend Notion to connect any service with an API, without running their own servers. Workers are free during the beta period through August 10, 2026.

    What AI model does Notion use?

    Notion runs on Anthropic’s Claude — specifically Claude Opus 4.7 as of the January 2026 update. This is different from Microsoft Copilot (which uses OpenAI’s GPT models) and Google Workspace (which uses the Gemini family). Notion’s choice of Claude reflects an emphasis on reliable, safe agentic behavior for workflows that have write access to business databases.

    What is the Notion External Agents API?

    The External Agents API, launched with Notion 3.5 on May 13, 2026, lets teams bring any AI agent — including ones built internally or from partners like Claude, Codex, and Decagon — directly into their Notion workspace. These external agents can read and write to Notion databases with full context about the team’s data.

    How is Notion different from Microsoft Copilot and Google Workspace AI?

    Notion is database-first. Every piece of information in Notion is structured, typed, and queryable data — not documents. This means Notion agents can run precise database queries against your actual organizational data rather than inferring structure from prose documents. For teams that need AI to reliably operate on business data (not just search and summarize), this architectural difference is significant.

    What are the real limitations of Notion Workers in the alpha?

    Notion Workers runs in a 30-second sandbox with 128MB of memory and ephemeral storage. Network access is limited to an approved domain allowlist. Workers are created via the Notion control panel (3–5 minutes each). Long-running jobs — content pipelines, multi-site operations, image generation — won’t fit. The recommended pattern for serious workloads is Notion Workers as the trigger layer firing a signed POST to an external execution environment (like Google Cloud Run), with results written back to Notion databases via the Public API.

  • Google Already Has the Everything App. The Question Is Whether They’ll Actually Build It.

    Microsoft gets credit for the “everything app” conversation because of Copilot’s marketing reach. But Google has quietly assembled something more complete, more native, and arguably more dangerous to every other productivity platform on earth — and most people haven’t connected the dots yet.

    Definition: Google’s “Everything Stack”

    The convergence of Google Workspace, Agentspace, Workspace Studio, NotebookLM, Google Search, Gmail, Calendar, Drive, Maps, Android, and the Gemini model family into a single AI-unified operating environment — where agents connect your data, automate your work, and surface what matters, without switching apps.

    Google Didn’t Need to Acquire Its Way Here

    Microsoft’s path to the everything app runs through acquisitions: LinkedIn ($26.2B), GitHub ($7.5B), Activision ($68.7B), and years of stitching Azure, Teams, and Bing into a coherent story. It’s impressive. It’s also fundamentally a construction project — building a unified platform out of parts that weren’t designed to work together.

    Google already owns the pieces natively. Gmail. Google Calendar. Google Drive. Google Docs, Sheets, and Slides. Google Search. Google Maps. Android. Chrome. YouTube. These aren’t acquisitions bolted onto a platform — they’re the platform. Over three billion people use Google Workspace tools. That install base isn’t a future bet; it’s the present reality.

    The question was never whether Google had the ingredients. The question was whether they’d ever actually bake the cake. In 2026, they finally are.

    What Google Just Shipped: The Pieces Coming Together

    At Google Cloud Next 2026, Google made moves that deserve more attention than they got.

    Workspace Studio launched to all Google Workspace domains on March 19, 2026. It’s the place to create, manage, and share AI agents that automate work across Workspace — no coding required. An end user can describe what they want in plain language (“every Friday, ping me to update my tracker”) and Gemini builds the agent. That’s not a developer feature. That’s a feature for your office manager, your sales coordinator, your operations lead.

    Workspace Intelligence is the connective tissue underneath. It’s a secure, dynamic system that understands the semantic relationships between your Docs, Slides, Gmail threads, active projects, collaborators, and your organization’s institutional knowledge — all in real time. Not indexed. Not cached. Live.

    Google Agentspace (now absorbed into the unified Gemini Enterprise Agent Platform as of Cloud Next 2026) brings together Gemini’s reasoning, Google-quality search, and enterprise data regardless of where it lives. Agents can connect to Google Drive, NotebookLM, and Google Group Chats and become experts on specific topics — delivering daily briefings, status updates, and research synthesis without anyone digging through months of documents.

    NotebookLM — Google’s AI research and synthesis tool — is now available as an out-of-the-box agent in Agentspace for enterprise users, with podcast-style audio summaries, enhanced privacy controls, and direct integration into the agent ecosystem. It’s the knowledge layer sitting on top of everything else.

    The AI Control Center, announced in May 2026 in the Admin console, gives IT teams and enterprise administrators visibility and governance over every agent and AI interaction touching Workspace data. For regulated industries, this is the feature that unlocks the whole stack.

    The Model Reality: Get This Right Before You Strategize

    Any honest conversation about Google’s AI strategy has to be anchored in what the models actually are — because the capabilities are moving fast and the marketing often lags the reality.

    As of mid-2026, Google’s current model family looks like this:

    • Gemini 3.1 Pro — Released February 19, 2026. The most capable model in the family. Scores 77.1% on ARC-AGI-2. Optimized for complex multi-step agentic workflows. This is the model powering the high-stakes enterprise use cases.
    • Gemini 2.5 Pro — The previous flagship, announced at Google I/O 2025. Still widely deployed in Vertex AI for enterprise. Excellent reasoning, very long context window.
    • Gemini 2.5 Flash — The speed/cost-efficiency model. Default model in the Gemini app. Generally available in Google AI Studio and Vertex AI. This is what most Workspace automation runs on day-to-day.
    • Gemini 2.5 Flash-Lite — The lightest, cheapest tier. For high-volume, low-complexity tasks like classification, routing, and summarization at scale.

    The architecture matters for strategy: Gemini 3.1 Pro handles reasoning-heavy agent tasks (complex research, multi-step decisions, agentic workflows), while Flash handles the volume work (daily digests, routine automation, quick lookups). The tiered model family is what makes an everything-app architecture economically viable — you don’t run your email summarizer on your most expensive model.
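The routing idea can be sketched in a few lines. The tier names mirror the family described above; the complexity heuristic and its thresholds are invented for illustration, not Google's actual routing logic:

```python
# Route tasks to model tiers by rough complexity so high-volume
# work never lands on the most expensive model. The heuristic
# below (step count, tool use) is an assumption for illustration.
TIERS = {
    "lite":  "gemini-2.5-flash-lite",  # classification, routing, bulk summaries
    "flash": "gemini-2.5-flash",       # daily digests, routine automation
    "pro":   "gemini-3.1-pro",         # multi-step agentic workflows
}

def route(task: str, steps: int, needs_tools: bool) -> str:
    """Pick a model tier from rough task characteristics."""
    if needs_tools or steps > 3:
        return TIERS["pro"]      # reasoning-heavy agent work
    if steps > 1:
        return TIERS["flash"]    # moderate, recurring work
    return TIERS["lite"]         # single-shot, high-volume work
```

An email summarizer is a one-step, no-tool task — `route("summarize inbox", steps=1, needs_tools=False)` lands on the Lite tier — while a multi-step research agent with tool calls routes to Pro. That asymmetry is the whole economics of the everything app.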

    What Google’s Everything Page Actually Looks Like Today

    Here’s what’s possible right now — not as a concept, but as actual configured Workspace behavior:

    • Your Gmail digest — Gemini in Gmail surfaces key threads, drafts replies, and flags action items before you open your inbox
    • Your Calendar intelligence — Meeting briefs pulled from your Drive documents, recent email threads with attendees, and relevant Docs — surfaced automatically before each event
    • Your Drive knowledge — NotebookLM agents synthesizing your team’s documents, project histories, and institutional knowledge into on-demand briefings
    • Your automation outputs — Workspace Studio agents running on schedule, pinging updates, moving data between Sheets and Docs, reporting on triggers
    • Your search layer — Google Search and Workspace Intelligence working together to answer business questions against both your internal data and the public web
    • Your news and signals — Gemini Enterprise surfacing industry news, competitor moves, and relevant content as part of a unified daily briefing

    The difference between this and the Microsoft vision is subtle but important: Google’s version requires almost no new infrastructure for most organizations. If you’re already on Google Workspace — and three billion people are — the agent layer sits on top of what you already use. The friction is configuration, not adoption.

    The Tension: Google’s Biggest Competitor Is Google’s Own Fragmentation

    Here’s where the opinion part comes in, because the facts alone don’t tell the whole story.

    Google has a well-documented history of building extraordinary tools and then failing to unify them. Google+. Google Wave. Google Inbox. Allo. Hangouts. The graveyard of Google products that almost became the everything app is long and sobering. The pattern is consistent: build something brilliant, run it in parallel with five other things, confuse the market, and eventually kill it.

    The 2026 rebranding — consolidating Vertex AI and Agentspace into the Gemini Enterprise Agent Platform — is either the sign that Google has finally learned its lesson about fragmentation, or it’s another reorganization that will look different again in 18 months. The cynical read is that Google Cloud Next announcements have promised unification before.

    The optimistic read — and I lean toward this one — is that the Gemini model family gives Google something it never had before: a single coherent AI backbone that every product can be rebuilt around. When your search, your email, your documents, your agents, and your developer platform all run on the same model family with the same context and the same API surface, unification becomes an engineering problem rather than a product vision problem. Engineering problems get solved.

    The A2A Protocol: The Move Nobody Talked About Enough

    One of the quieter announcements at Cloud Next 2026 was the Agent-to-Agent (A2A) protocol — Google’s open standard for allowing AI agents to communicate with each other across platforms and vendors. This is strategically significant in a way that’s easy to miss.

    If A2A gains adoption, the everything page doesn’t have to be Google’s proprietary walled garden. Your Workspace agents could communicate with agents from other platforms — your CRM, your project management tool, your industry-specific software. Google becomes the orchestration layer rather than the only layer. That’s a smarter long-term play than trying to own everything, and it sidesteps the antitrust concern that the Microsoft everything-app vision runs into head-on.
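To make the orchestration-layer idea concrete: A2A-style exchanges ride on a JSON-RPC 2.0 envelope carrying a task for another agent. The sketch below shows the general shape; the method name and message fields are illustrative, not quoted from the published spec:

```python
import json
import uuid

def make_agent_request(method: str, text: str) -> str:
    """Build a JSON-RPC 2.0 envelope of the kind an A2A-style
    agent-to-agent exchange rides on. The method and message
    field names here are illustrative, not spec-exact."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": method,
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    }
    return json.dumps(payload)

# A Workspace agent asking a hypothetical CRM agent for a briefing:
request = make_agent_request("tasks/send", "Summarize open deals for Acme Corp")
```

Because the envelope is vendor-neutral JSON over standard transport, the CRM agent on the other end doesn't need to be Google's — which is exactly the walled-garden escape hatch described above.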

    What This Means for SMBs and Content Creators Right Now

    If you’re a small business running on Google Workspace — and most are — the everything-app infrastructure is closer than you think, and cheaper than you assume.

    Workspace Studio is included in Business Standard and above. Gemini in Gmail and Docs is rolling out across plans. NotebookLM Business is available as an add-on. The agent layer is not a future enterprise-only feature — it’s arriving in the same tools you’re already paying for.

    The businesses that will win the next three years are the ones that start treating their Google Workspace as an agent platform right now — connecting their data, building their automations, and training their teams to work alongside AI rather than around it.

    The everything page isn’t a product launch you wait for. It’s a configuration decision you make today.

    Google vs. Microsoft: Who Wins the Everything App Race?

    Honest answer: it’s not a race with one winner. The enterprise world will bifurcate along existing tool allegiances. Microsoft 365 shops will get their everything page through Copilot and Agent 365. Google Workspace shops will get theirs through Gemini Enterprise and Workspace Studio. The cold-start problem — who do you trust with all your connected data — will be solved by whoever already has your accounts.

    What’s different about Google’s position is the consumer crossover. Microsoft dominates enterprise desktops but has marginal consumer presence. Google lives on both sides — the same Gemini that runs your enterprise agent also runs in your personal Gmail, your Android phone, your Google search bar. The everything page, for Google users, won’t feel like a new product. It’ll feel like the thing you already use, finally doing what you always wished it would.

    That’s a powerful distribution advantage. And it’s one Microsoft, for all its enterprise strength, can’t easily replicate.

    Frequently Asked Questions

    What is Google Workspace Studio?

    Google Workspace Studio is Google’s no-code AI agent builder, launched to all Workspace domains on March 19, 2026. It lets any user create, manage, and share AI agents that automate work across Gmail, Docs, Sheets, Drive, and other Workspace apps — without writing code. Users describe what they want in plain language and Gemini builds the agent.

    What is Google Agentspace?

    Google Agentspace (now unified into the Gemini Enterprise Agent Platform as of Cloud Next 2026) is Google’s enterprise AI agent environment. It combines Gemini’s reasoning, Google-quality search, and enterprise data across Drive, NotebookLM, and Group Chats to give employees AI agents that understand their organization’s specific knowledge.

    What is the latest Google Gemini model in 2026?

    As of mid-2026, Gemini 3.1 Pro (released February 19, 2026) is Google’s most capable model, scoring 77.1% on ARC-AGI-2 and optimized for complex agentic workflows. Gemini 2.5 Flash is the default model for most consumer and business Workspace use cases, balancing speed and cost efficiency.

    What is Google’s A2A protocol?

    Agent-to-Agent (A2A) is Google’s open standard for AI agents to communicate across platforms and vendors, announced at Cloud Next 2026. It allows Workspace agents to interoperate with agents from other tools and platforms, positioning Google as an orchestration layer rather than a closed ecosystem.

    Do small businesses have access to Google’s AI agent features?

    Yes. Workspace Studio and Gemini features are included in Business Standard and higher tiers. NotebookLM Business is available as an add-on. Most of the agent infrastructure is arriving in existing Workspace plans, not as separate enterprise-only products.

  • Microsoft’s Everything App: Is Copilot Building the Unified AI Dashboard Nobody Asked For (But Everyone Needs)?

    What if every email, calendar event, LinkedIn notification, health metric, automation log, and business dashboard you care about lived on one page — organized by AI, updated in real time, and actually useful? That’s not a fever dream. It may already be Microsoft’s plan. And if it isn’t, someone needs to build it fast.

    Definition: The “Everything App” A unified AI-powered platform that aggregates professional data, communications, scheduling, automation outputs, and personal metrics into a single intelligent interface — personalized per user and powered by connected APIs.

    The Observation That Started This

    A few days ago I noticed something odd: LinkedIn posts I was publishing were reformatting into blocks of plain text instead of keeping their intended structure. My own agents couldn’t scrape LinkedIn the way I wanted them to. Anti-AI friction was everywhere on the platform.

    Then it hit me: Microsoft owns LinkedIn. Microsoft owns Bing. Microsoft is betting billions on Copilot. What if the formatting weirdness, the scraping blocks, the structured data changes — what if those aren’t bugs? What if they’re features in a Beta program for AI information ingestion?

    Think about it differently. Imagine a Bing page — or a Copilot interface — that pulls in curated LinkedIn posts, your email threads, your calendar, your business process updates, your health watch data, your cloud automations, and your news feed. All of it, organized the way you think about your day. That’s not a stretch. That might be exactly where this is heading.

    Microsoft Is Already Building the Pieces

    Let’s be clear about what Microsoft has actually shipped and announced, because the pieces of this puzzle are already on the table.

    Microsoft 365 Copilot Wave 3 launched in early 2026 alongside Microsoft 365 E7: The Frontier Suite (generally available May 1, 2026). It combines productivity, identity, Copilot AI, and Agent 365 — a control plane for governing and scaling AI agents across an organization. The Agent 365 dashboard shows connections between agents, people, and data in real time. That’s not a search box. That’s an operational view of your entire professional world.

    Microsoft Graph is the connective tissue. It links LinkedIn professional data — profiles, company updates, job changes, content signals — directly into Copilot’s intelligence layer. When enterprise users ask Copilot about industry experts or companies, LinkedIn data feeds the answer. The integration is deeper than most people realize, and it’s been quietly expanding since Microsoft acquired LinkedIn for $26.2 billion in 2016.
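What "connective tissue" means in practice: a daily-brief agent is mostly a handful of Graph queries. `/me/messages` and `/me/calendarView` are real Microsoft Graph v1.0 endpoints; the brief-assembly around them is hypothetical, and an actual call would need an OAuth 2.0 bearer token:

```python
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0"

def brief_queries(day_start: str, day_end: str) -> dict:
    """Return the request URLs a morning-brief agent would hit.
    Endpoints are real Graph v1.0; the agent itself is a sketch."""
    return {
        "inbox": f"{GRAPH}/me/messages?" + urlencode({
            "$top": 10,
            "$select": "subject,from,receivedDateTime",
        }),
        "calendar": f"{GRAPH}/me/calendarView?" + urlencode({
            "startDateTime": day_start,
            "endDateTime": day_end,
        }),
    }
```

Everything the everything page needs — mail, calendar, files, people — is already addressable behind this one API surface, which is why Graph, not any single app, is the strategic asset.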

    Bing web cards in Copilot Chat now deliver rich, expandable information cards for weather, stocks, sports, news, and more. It’s a small feature on paper. But it signals the visual direction: Copilot as a personalized front page, not a search box.

    The new Agenda view in Windows — announced at Ignite 2025 — shows a chronological list of upcoming events unified with Calendar, surfaced directly in the Notification Center. Microsoft is literally building a unified daily view into the operating system itself.

    Why the Western Super App Never Happened — Until Now

    WeChat has over 1.3 billion monthly active users and handles messaging, payments, e-commerce, government services, and mini-programs all in one place. Western companies have been trying and failing to replicate that for a decade.

    The reasons for failure are real: U.S. data privacy law, antitrust scrutiny, platform fragmentation, and deeply entrenched single-purpose apps (Slack for chat, Stripe for payments, Google Calendar for scheduling) all made the super app strategy a dead end in the West.

    But AI changes the calculus. The old super app required you to rebuild every vertical inside one app. The new super app just needs one AI brain that can use everything outside it. You don’t need to own payments — you need Copilot to understand your Stripe data. You don’t need to own scheduling — you need Copilot to read your Google Calendar and act on it.

    As one analysis of the U.S. super app window put it: “The old super app was ‘one app with everything inside.’ The next super app might be ‘one AI brain that can use everything outside.’” Between 2025 and 2027, the U.S. enters what some analysts call its Super App window — a convergence of AI interfaces, behavioral compression, and digital sovereignty that’s distinctly Western in character.

    Microsoft is the only Western company with the asset stack to pull this off: an OS (Windows), a browser (Edge), a search engine (Bing), a professional network (LinkedIn), a productivity suite (Microsoft 365), a developer platform (GitHub + Azure), and now a unified AI layer (Copilot) stitching it all together.

    What the “Everything Page” Actually Looks Like

    Here’s the vision, stated plainly:

    • Your news — curated by AI based on your industry, interests, and saved searches
    • Your LinkedIn feed — surfaced selectively, not chronologically, based on what actually matters to your business goals
    • Your email digest — key threads, action items, follow-ups, flagged by AI before you even open your inbox
    • Your calendar — not just events, but prep briefs for each meeting pulled from your email, CRM, and LinkedIn history
    • Your automation outputs — Cloud Run jobs, Zapier logs, agent reports, anything your background systems are doing
    • Your health signals — fitness watch data, sleep scores, recovery metrics — not in a separate app, but contextualizing your day
    • Your business metrics — revenue, leads, content performance, wherever your data lives

    All of it on one page. All of it updated in real time. All of it organized by an AI that knows what you consider signal versus noise.

    That’s not sci-fi. The APIs for all of that exist today. The AI to synthesize it exists today. The missing piece is the will to build the page — and a platform with enough trust and install base to make it stick.

    The LinkedIn Angle Nobody Is Talking About

    Here’s where my original observation gets more interesting. Microsoft has spent years sitting on one of the richest professional datasets on earth and doing relatively little with it compared to what’s possible. LinkedIn has 1 billion+ members, decades of career graph data, company relationship maps, content engagement signals — and it feeds directly into Microsoft Graph.

    Now that Copilot is deeply embedded in enterprise environments, LinkedIn data isn’t just a social feature — it’s a professional intelligence layer. When your Copilot brief for a sales call surfaces that your prospect just changed jobs, posted about a pain point, or follows a competitor — that’s LinkedIn data flowing through Microsoft Graph into your daily workflow.

    The scraping friction I noticed? It makes more sense when you consider that Microsoft may be actively working to make LinkedIn data more valuable inside its own ecosystem rather than letting third-party agents extract it freely. They’re not blocking AI — they’re channeling it through Copilot.

    The Risk: Nobody Wants One Company Holding All of This

    It would be dishonest not to acknowledge the obvious counterargument: this is a massive concentration of data and influence in one company’s hands.

    The reason WeChat works in China is partly cultural and partly because the regulatory environment permits it. U.S. antitrust law, GDPR-aligned state privacy rules, and growing public skepticism about big tech data practices all push against a single unified everything app.

    Microsoft’s bet is that enterprise trust — built through compliance features, security architecture, and the corporate IT relationship — gives them the permission that consumer platforms like Meta or X never earned. It’s a reasonable bet. It’s also one that regulators will watch closely.

    If Microsoft Doesn’t Build It, Someone Will

    The technology is not the bottleneck. Any serious developer with access to the right APIs could build a personal everything page today. Connect your Gmail, your LinkedIn (to the extent the API allows), your calendar, your fitness data, your cloud automation logs, and your analytics tools. Build a UI that surfaces what matters. Add an AI layer to summarize and prioritize.
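The steps above can be sketched as a skeleton: each source is a connector that yields items, and an AI layer — stubbed here with a keyword heuristic — ranks them. All connector names, the sample data, and the scoring rule are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Item:
    source: str   # "gmail", "calendar", "fitness", ...
    text: str
    score: float = 0.0

Connector = Callable[[], Iterable[Item]]

def fake_gmail() -> list[Item]:
    # Hypothetical connector; a real one would call the Gmail API.
    return [Item("gmail", "Invoice overdue: reply today")]

def fake_calendar() -> list[Item]:
    return [Item("calendar", "Standup at 9:30")]

def prioritize(item: Item) -> Item:
    # Stand-in for the AI layer: urgent-sounding items float up.
    item.score = 1.0 if "today" in item.text.lower() else 0.5
    return item

def everything_page(connectors: list[Connector]) -> list[Item]:
    """Pull from every connector, score, and rank — the whole page."""
    items = [prioritize(i) for c in connectors for i in c()]
    return sorted(items, key=lambda i: i.score, reverse=True)
```

Swap the fakes for real API clients and the heuristic for a language model, and this is the product. Which is precisely the point: the skeleton is a weekend project.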

    The bottleneck is distribution, trust, and the cold-start problem — nobody wants to connect all their accounts to something they’ve never heard of. That’s why Microsoft wins this race if they choose to run it. They already have the accounts. They already have the trust relationships. Copilot is already installed in hundreds of millions of enterprise seats.

    But if they don’t move fast enough, or if they build it only for enterprise and ignore the small business and creator class — that’s an opening. A focused, privacy-first, SMB-oriented everything page, built on open APIs, with no data lock-in? That’s a product worth building.

    What This Means for Your Content and AI Strategy Right Now

    Whether or not Microsoft delivers the everything app in the next 18 months, the direction of travel is clear. Professional information is consolidating around AI interfaces. LinkedIn content is increasingly flowing into Copilot’s intelligence layer. Bing-based AI answers are pulling from structured, authoritative content.

    For businesses and content creators, that means:

    • Your LinkedIn presence is now AI training data. What you post, how you structure it, and what entities you’re associated with all affect how Copilot describes you to enterprise users asking about your industry.
    • Your website content needs to be AI-readable. Structured data, clear entity signals, authoritative citations — these are no longer optional for AI search visibility.
    • Your automation stack is a competitive advantage. The businesses that have already connected their tools via APIs will be first in line when the everything page actually ships.
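“Structured data” in the second point has a concrete form: schema.org JSON-LD embedded in the page. A minimal example for an article, generated here in Python — all field values are placeholders:

```python
import json

# Minimal schema.org Article markup, the kind AI crawlers and
# search engines parse for entity signals. Values are placeholders.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2026-05-01",
    "about": ["AI agents", "productivity software"],
}

# Embedded in the page head as a JSON-LD script tag:
snippet = ('<script type="application/ld+json">'
           + json.dumps(article_jsonld)
           + "</script>")
```

The `about` and `author` entities are what let an AI system associate your content with the topics and people it already knows — the “clear entity signals” the bullet above refers to.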

    The everything app isn’t coming. It’s arriving in pieces, quietly, through products you already use. The question is whether you’re positioned when the pieces snap together.

    Frequently Asked Questions

    Is Microsoft building an “everything app” like WeChat?

    Microsoft hasn’t announced a single “everything app” product, but the pieces — Copilot, Microsoft Graph, LinkedIn data integration, Agent 365, and Bing web cards — suggest a unified AI-powered dashboard is the strategic direction. Whether it arrives as one product or an ecosystem of connected tools remains to be seen.

    Why did Western super apps fail where WeChat succeeded?

    U.S. data privacy regulations, antitrust scrutiny, platform fragmentation, and deeply entrenched single-purpose apps all prevented a WeChat-style super app from emerging in the West. AI changes the equation by enabling one system to connect and synthesize data across many separate apps without needing to own them.

    How does LinkedIn data connect to Microsoft Copilot?

    Microsoft Graph links LinkedIn’s professional data — profiles, company updates, career changes, content signals — directly into Copilot’s intelligence layer. Enterprise Copilot users receive LinkedIn-informed context in sales briefings, meeting prep, and professional research queries.

    What is Microsoft 365 E7 and what does it include?

    Microsoft 365 E7 (The Frontier Suite, GA May 1, 2026) combines Microsoft 365 E5 for secure productivity, Entra Suite for identity and access, Microsoft 365 Copilot for AI-in-workflow, and Agent 365 as the control plane to govern and scale AI agents across an organization.

    What can small businesses do today to prepare for AI-unified platforms?

    Connect your tools via APIs now, optimize your LinkedIn presence for AI entity recognition, publish structured authoritative content for AI search visibility, and build automation stacks that produce clean data outputs — these investments compound in value as AI platforms consolidate professional information.