
🎙️ Pod Shots - Bitesized Podcast Summaries

💡 The Uncomfortable Truth: Most People Asking for AI Agents Actually Just Need Workflows

TODAY’S POD SHOT

Most people asking for AI agents actually just need workflows—and that gap between hype and reality is costing companies money. Wade Foster, CEO of Zapier, reveals what actually works in production today: the "AI automation spectrum," why small focused agents beat general-purpose ones, and how to go from 0 to 97% AI adoption in under a year.

Hey there!

Remember, we've built an ever-growing library of our top podcast summaries. Whether you need a quick refresher, want to preview an episode, or need to get up to speed fast – we've got you covered. Check it out here.

— Alastair

🎥 Watch the full episode here

📆 Published: October 6th 2025

🕒 Estimated Reading Time: 8 mins. Time saved: 30+ mins! 🔥

💡 The Uncomfortable Truth: Most People Asking for AI Agents Actually Just Need Workflows

This is a super practical Pod Shot for anyone wondering where to start with agents and agentic workflows - I loved it and hope you do too!

Wade Foster, co-founder and CEO of Zapier, cuts through the AI agent hype with a surprising insight: most people who say they want an AI agent actually just need a deterministic workflow. After 14 years building automation infrastructure and watching thousands of companies deploy AI, Foster reveals what actually works in production versus what burns tokens without delivering value.

Foster demonstrates his personal email triage agent that reduces 100+ daily emails to fewer than 10 needing attention, explains the "AI automation spectrum" from deterministic workflows to full inference-based agents, and shares how Zapier achieved 97%+ internal AI adoption in under a year. His insights are battle-tested across 8,000+ app integrations and thousands of customer deployments—making this essential for product leaders and founders figuring out where AI delivers ROI today.


AI Agents, Clearly Explained in 40 Minutes | Wade Foster (Zapier) | Peter Yang

🗺️ The AI Automation Spectrum: From Determinism to Pure Inference

Foster introduces Zapier's "AI automation spectrum" framework. On the left: pre-AI deterministic workflows where a new lead triggers an SMS and gets added to Salesforce—clear input, predictable output. On the right: pure inference agents with tools, knowledge, and instructions that reason independently.

"Most of what people talk about as agents today, I call them chat agents," Foster explains. "They're inside of a chatbot." But he argues this is too narrow. Real power emerges when you consider all possible triggers—new emails, customer queries, scheduled tasks—that could wake an agent to take action beyond just posting a message.

The critical insight? The most reliable AI systems in production today live in the middle. Foster identifies AI workflows (traditional workflows with AI steps) and agentic workflows (multiple orchestrated agents) as what actually works. "The reliability of these full systems is just not great for more complex tasks," he notes about pure inference agents. The middle ground offers determinism for reliability and cost control whilst leveraging inference where it genuinely adds value.
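
To make the middle of the spectrum concrete, here's a minimal sketch in Python: a deterministic workflow with a single AI step. The `llm()` helper and the lead-handling functions are illustrative stand-ins, not any particular product's API.

```python
def llm(prompt: str) -> str:
    # Stand-in for a chat-completion call; swap in your model provider.
    return f"[model output for: {prompt[:40]}...]"

def send_sms(phone: str, message: str) -> None:
    print(f"SMS to {phone}: {message}")         # stand-in for an SMS API

def add_to_crm(lead: dict) -> None:
    print(f"Added {lead['email']} to the CRM")  # stand-in for e.g. Salesforce

def handle_new_lead(lead: dict) -> None:
    # Deterministic steps: same input, same output, cheap and reliable.
    send_sms(lead["phone"], "Thanks for reaching out!")
    add_to_crm(lead)

    # One inference step where it genuinely adds value: a personalised
    # draft that a human reviews rather than an auto-sent reply.
    draft = llm(f"Draft a short, friendly follow-up email for: {lead}")
    print(f"Draft for review:\n{draft}")

handle_new_lead({"phone": "+15551234567", "email": "ada@example.com"})
```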

Key Takeaways:

  • The spectrum represents different architectural choices, not a linear progression where agents replace workflows

  • Production AI systems that work today are mostly AI workflows or agentic workflows combining determinism with selective inference

  • Chat agents are too narrow a definition—real opportunities emerge from agents triggered by any event

  • Choose determinism where possible and inference only where necessary for reliability and cost advantages

📧 Live Demo: The Email Triage Agent That Cuts Through 100+ Messages

Foster demonstrates his personal email categorisation agent built with his EA. It processes every incoming email through three categorisation tiers using natural language instructions. Red-flag items requiring Wade's action: executive communications, board matters, strategic partnerships, escalated support, hiring decisions. Tasks for his EA: meetings, scheduling, travel, HR notifications. Informational emails for archiving: marketing, promotional, spam, internal updates.

The agent goes beyond simple sorting. For customer emails, it queries Zapier's HubSpot for company details and performs web searches for account context. It then applies labels and begins triaging—archiving what doesn't need attention and flagging what does.

The result: "At the end of the day, when you look across like, I don't know, 100 plus emails I might get, it turns out less than 10 actually really need my attention and matter." This isn't inbox zero through aggressive filtering—it's applying intelligent context to surface what genuinely requires human judgement.

Foster emphasises the agent's narrow focus. He has separate agents for responses and other tasks. "You want them focused in on, you know, a concrete job that it can actually go do. The bigger the task you provide it, the more I find it starts to get confused or the reliability goes down."
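
As a rough sketch, the shape of such an agent is roughly this (the three tiers come from the episode; the prompt wording and helpers are my own illustrative assumptions, not Zapier's implementation):

```python
TRIAGE_INSTRUCTIONS = """Classify this email into exactly one tier:
- NEEDS_WADE: executive comms, board matters, strategic partnerships,
  escalated support, hiring decisions
- EA_QUEUE: meetings, scheduling, travel, HR notifications
- ARCHIVE: marketing, promotional, spam, internal updates
Reply with the tier name only."""

def classify(email_text: str) -> str:
    # Stand-in for a chat-completion call seeded with TRIAGE_INSTRUCTIONS.
    return "ARCHIVE"

def triage(email: dict) -> str:
    tier = classify(f"{email['subject']}\n{email['body']}")
    if tier == "ARCHIVE":
        return "archived"              # the bulk that needs no attention
    if tier == "EA_QUEUE":
        return "labelled: ea-queue"
    return "labelled: needs-wade"      # the <10 a day that actually matter

print(triage({"subject": "50% off!", "body": "Limited time offer..."}))
```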

Key Takeaways:

  • Effective agents are narrow and focused on specific tasks rather than trying to handle everything

  • Natural language instructions allow sophisticated logic without rigid keyword matching

  • Integration with existing tools provides context that deterministic rules can't achieve

  • The goal isn't eliminating all work—it's surfacing the 10% needing human attention from the 90% that doesn't

🔄 Why This Works as an Agent Instead of a Workflow

"I don't think you could do this as a workflow because I would have to come up with a deterministic rule for all of those categories," Foster explains. The workflow approach would require explicit keyword rules for every scenario—email contains "interview"? Route here. Contains "board meeting"? Route there. Permutations become unmanageable, and language doesn't work that way.

An LLM-powered agent understands intent from description. "An LLM you can just describe like emails like this, I would like you to do this action with it. And so it can extrapolate based on the instructions I've given it."

Agent building is iterative. "If you were to start working on this, you would probably just give it, you know, 'Hey, help me triage my inbox.' But then over the course of like a couple days or weeks, you'll start to build it up where the agent tackles bigger and bigger emails for you."

This represents a shift in knowledge work. "Instead of me sitting down and answering all these emails, what I'm going to be doing is talking to the agent and saying like, 'What new instruction can I provide or what new guidance can I give it so that it does a better job?'"

Key Takeaways:

  • Agents excel when you need inference and pattern recognition rather than rigid rules

  • Building effective agents is iterative—start simple and refine based on performance

  • The future of knowledge work involves tuning agents rather than doing tasks directly

  • Agents work best when describing desired outcomes is easier than encoding explicit rules

🎯 The 90% Rule: Where Humans Still Matter

"AI is really good at the like middle parts of the work," Foster explains. "You still need humans to kind of kick this off and figure that out and you still need humans at the other side to review it." The input phase requires human judgement to frame problems properly. The output phase needs review for quality and edge cases. But the middle? That's where AI handles mechanical work following clear patterns.

Foster sees this in sales: "A lot of folks are trying to figure out how can I take sales out of the process and I found that doesn't quite work yet but if you can equip the sales rep with all this context now they can walk into the meeting really well informed." The AI handles research and drafting. The human brings judgement and relationship skills.

This creates a new work category: reviewing and refining AI outputs. "In the future we're going to have just a lot of jobs where people are reviewing the outputs of these agents and saying, 'Oh yeah, that looks good' or tuning the agent to deliver better outputs."

Foster praises products allowing prompt customisation. He highlights Monologue, a voice-to-text tool capturing screen context that lets you tune formatting for different tools. "They allow me to kind of come in and fiddle with that stuff... I want it to sound like me."

Key Takeaways:

  • AI excels at the middle 90% of work once humans frame inputs and review outputs properly

  • The goal isn't eliminating humans—it's amplifying their effectiveness

  • Products allowing customisation of the final 10% create stickier, more valuable experiences

  • A new work category is emerging: tuning AI systems rather than doing tasks directly

🔧 APIs vs MCPs: Complementary Not Competitive

"An API is a very specific request that you give it," Foster explains. "You give it very specific inputs and expect very specific outputs in return." It's deterministic and efficient.

MCPs operate differently. "It lets the agent reason about what exactly it's going to call." An MCP might wrap multiple APIs, accept unstructured data, and decide how to format and route requests. "MCPs act as that agent-to-agent interface where they can communicate with each other in less structured ways."

These approaches complement each other. "If you want reliability, cost advantages, you're always going to opt for APIs because you're going to be able to do exactly what you want at high scale low-cost." MCPs solve different problems: "You may not know exactly what the input is, but you know it's roughly in this shape and you need somebody to reason about it. You're going to spend more tokens, have reliability challenges, but it tackles a use case you couldn't do with an API."
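
Here's a rough sketch of the contrast. The CRM endpoint is made up, and `agent_call()` is a placeholder for an agent loop rather than any specific MCP SDK:

```python
import requests

# API: you decide the exact call up front. Deterministic, cheap, scalable.
def get_company(domain: str) -> dict:
    resp = requests.get(
        "https://api.example-crm.com/companies",  # hypothetical endpoint
        params={"domain": domain},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# MCP-style: hand the model an unstructured request plus a set of tools
# and let it reason about which to call, in what order, with what inputs.
def agent_call(request: str, tools: list) -> str:
    # Placeholder for an agent loop over MCP-exposed tools: more tokens,
    # less predictable, but it handles inputs you couldn't spec up front.
    raise NotImplementedError("swap in your agent framework here")

# agent_call("Find what we know about acme.com and summarise it",
#            tools=[get_company])
```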

Key Takeaways:

  • APIs provide deterministic, cost-effective integration with structured inputs and outputs

  • MCPs enable agent-to-agent communication with unstructured data and reasoning

  • The two approaches are complementary tools for different points on the determinism-inference spectrum

  • Default to simpler, deterministic solutions (APIs) unless you specifically need inference capabilities

🎪 The "RIP Zapier" Moment: When OpenAI Released Agent Builder

When OpenAI released Agent Builder, AI influencers declared Zapier obsolete. Foster's response is instructive for anyone building in the AI space. "As soon as you logged in and used Agent Kit, you could tell these are not the same thing at all," he notes. The automation community—people actually using these products—recognised the difference immediately.

Agent Builder helps create better chat agents and extract improved responses from language models. "It solves a fundamentally different problem." It doesn't orchestrate across triggers, doesn't work with thousands of tools (unless using Zapier's MCP server), and doesn't operate model-agnostically.

Foster sees a broader pattern: "There's a lot of influencers and content creators whose job is to create hot takes and clicks." The incentives favour declaring things dead over nuanced analysis. "You can tell who is using the tools and who isn't, who is just regurgitating headlines."

For founders, the lesson is clear: actual users understand nuance that hot-take merchants miss. When evaluating competitive threats, focus on what products actually do, not surface-level similarities. OpenAI wants to make chat their app store and increase ChatGPT engagement, not replace workflow automation infrastructure.

Key Takeaways:

  • Influencers optimise for engagement, not accuracy—trust people actually using tools

  • OpenAI's Agent Builder solves different problems (better chat agents) than workflow orchestration

  • Major platform moves often aim to increase core product engagement rather than replacing categories

  • When evaluating threats, focus on actual functionality, not surface similarities

🚀 From 0 to 97%: Making Zapier AI-First

Zapier's journey to effectively 100% internal AI usage offers a practical playbook. "As cringe as the CEO memo is around we must use AI, I do think it's important," Foster acknowledges. "You have to say it because if you don't, how will people know this is important?" But the memo alone means nothing without organisational support.

Zapier's approach centres on three tactics. First: hackathons and boot camps giving everyone dedicated experiment time. "Important to do it for everyone in your company, not just engineering," Foster emphasises. Sales, accounting, marketing—everyone needs hands-on-keyboard time to realise "it's not as hard as I thought."

Second: judging and demos creating accountability and knowledge sharing. Demos expose what's possible. "They get to see, oh, what did you do with that tool? How did you prompt in that particular way?" The tips and tricks people discover spread through demonstration.

Third: ongoing showcase in all-hands meetings. "Have somebody come in and do show-and-tell with something they're building with an agent. Mix it up between functions." Continuous exposure builds culture over time.

Perhaps most importantly, hands-on experience addresses fear. "The moment you create space for people to put their hands on the keyboard, the fear fades because they see what's possible and what it's not good at. They realise they're still required for this part and this part."

Timeline? "If you do this for 6 months, for a year, you will see your company go from low AI adoption to most people using it."

Key Takeaways:

  • CEO signal matters but means nothing without organisational support structures

  • Hackathons should include all functions—everyone can benefit from automation, not just engineering

  • Demos and knowledge sharing drive adoption more effectively than competition

  • Hands-on experience is the best antidote to AI job fear—people see both capabilities and limitations

  • Expect 6-12 months to achieve company-wide adoption with sustained effort

💼 Testing for AI Fluency: How Zapier Evaluates Job Candidates

As AI fluency becomes a core competency, Zapier has evolved how it assesses candidates. Foster is candid: "We're still learning honestly how to do this best."

Early approaches were simple self-reporting. "At first it was just, hey, tell me what you're doing with AI." This worked initially when impressive examples were rare—you could quickly differentiate who was actually using AI versus talking about it.

Now Zapier uses practical tests simulating real work. For PM roles: provide a task and prompt, "You can use AI, use whatever tools in an hour. Let's see how far you've gotten." For marketing: "Give you a document with a campaign idea and customer data. Create a campaign targeting these customers, use any tool, show us what you did in an hour."

"You just watch people work and see how they use the tools, how they use their creativity," Foster explains. This reveals not just technical proficiency but judgement—do they use AI appropriately? Verify outputs? Iterate effectively?

Foster is particularly concerned about people outsourcing their thinking to AI without adding value. "Documents all of a sudden got much more polished when ChatGPT came out, but the substance maybe didn't change all that much." He draws a crucial distinction: "You can outsource the work to AI but you can't outsource the accountability. You still need to understand what is this AI doing? Is it solving the problem? Will it deliver the results I care about?"

Key Takeaways:

  • AI fluency assessment should be practical and domain-specific, not generic knowledge tests

  • Watch how candidates use tools in realistic scenarios rather than relying on self-reporting

  • Distinguish between candidates who collaborate effectively with AI versus those who outsource thinking

  • The key competency is judgement about when and how to use AI, not just prompting ability

  • Accountability cannot be outsourced—humans must understand and validate outputs

🗓️ The Calendar Test: Finding Automation Opportunities

Foster offers a brilliantly simple heuristic: look at your calendar. "Just go look at it and be like, 'Hey, I spent a lot of time interviewing. I wonder what things I could do to make my interview process work better.' Or 'I spent a lot of time talking to customers. What could I do to make my life easier?'"

Activities consuming your time are prime candidates for workflow enhancement—not replacement, but augmentation. Hours in customer calls? Maybe an agent generates prep docs automatically. Drowning in interviews? Perhaps automation pulls candidate context beforehand.

Start small. "You can start to come up with little bits where you're like, I could automate this piece of it, this piece of it. Before you know it, you'll realise maybe I could chain these things together and do something way more impressive. But start with a small thing like, can I just generate a prep doc for this interview?"

This mirrors his email agent philosophy: focus on specific, achievable wins building confidence and understanding. Trying to automate entire processes at once usually fails. Breaking into components and tackling sequentially actually gets you there.

The calendar approach naturally prioritises high-impact opportunities. If something appears repeatedly on your schedule, automating even small parts compounds quickly. Five hours of weekly meetings? Even small improvements per meeting add up significantly.
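
To run the calendar test systematically, a few lines over a calendar export will do. This sketch assumes a CSV with a `title` column, which will vary by calendar tool:

```python
import csv
from collections import Counter

def top_time_sinks(path: str, n: int = 5) -> list[tuple[str, int]]:
    # Count recurring event titles; frequent ones are automation candidates.
    with open(path, newline="") as f:
        titles = [row["title"].strip().lower() for row in csv.DictReader(f)]
    return Counter(titles).most_common(n)

# e.g. [('customer call', 11), ('interview', 7), ('weekly 1:1', 5), ...]
# Each of these is a candidate for a small win, like an auto-generated
# prep doc or summary, not for wholesale replacement.
```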

Key Takeaways:

  • Your calendar reveals highest-volume activities and therefore best automation opportunities

  • Start with small, specific tasks within larger processes rather than trying to automate everything

  • Generate prep docs, summaries, or context rather than eliminating activities entirely

  • Build up from narrow tasks to impressive workflows as you gain confidence

  • High-frequency activities offer best ROI—small improvements compound when repeated weekly

🔮 The Orchestration Layer: Why Small, Focused Agents Win

Foster's most important architectural insight: build complex AI systems through orchestration of small, focused agents rather than one massive general-purpose agent. "The more general the agent becomes, it can certainly be impressive," he notes, citing ChatGPT. "But if you want a tool to do one thing very well, it's much better if you can build an agent specifically for that."

The principle is the "Goldilocks zone"—giving agents the smallest amount of information, context, and tools that still enables job completion. "The more tools you give it access to, the more context you give it, the more likely it is to get confused, burn through tokens. It doesn't actually make it better."

For complex tasks? Orchestration. "Build agents for each part of the end task and then orchestrate to figure out how to get these agents to hand off context from one to the other." That orchestration can be deterministic (explicit handoffs) or agentic (agents choose next steps).

Foster sees Zapier's 14-year-old workflow engine perfectly positioned: "It turns out this workflow engine we built over the last 14 years is really good at connecting agents together." What started as deterministic automation infrastructure becomes the orchestration layer for agentic systems.
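
As a sketch, deterministic orchestration over focused agents looks like ordinary function composition: each agent sees only what its one job needs. All names here are illustrative:

```python
def llm(prompt: str) -> str:
    # Stand-in for a chat-completion call.
    return f"[model output for: {prompt[:40]}...]"

def research_agent(candidate_name: str) -> str:
    # Focused agent: one job, minimal context, its own narrow prompt.
    return llm(f"Summarise public background for: {candidate_name}")

def prep_doc_agent(candidate_name: str, research: str) -> str:
    # Focused agent: turns the research handoff into a prep doc.
    return llm(f"Write an interview prep doc for {candidate_name}.\n"
               f"Background: {research}")

def interview_prep_workflow(candidate_name: str) -> str:
    # Deterministic orchestration: the workflow, not a model, decides
    # what runs next and exactly which context is handed off.
    research = research_agent(candidate_name)
    return prep_doc_agent(candidate_name, research)

print(interview_prep_workflow("Ada Lovelace"))
```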

Key Takeaways:

  • Small, focused agents with minimal tools and context outperform large general-purpose agents for specific tasks

  • Complex workflows should orchestrate multiple focused agents rather than building one massive agent

  • Orchestration itself can be deterministic (explicit handoffs) or agentic (agents choose next steps)

  • The "Goldilocks zone" of minimal-but-sufficient context is key to agent performance

  • Existing workflow infrastructure designed for deterministic automation translates well to agent orchestration

🎯 Getting Started: What This Means for Product Teams and Founders

Start by signing up for an automation platform and typing your ideas into it. "If you don't have an idea you can ask it for ideas," Foster notes. The calendar exercise provides concrete starting points: identify recurring time-consuming activities and break them into components. Can you automate prep work? Generate summaries? Triage inputs? Each represents a discrete automation opportunity.

For product builders, the lessons go deeper. The spectrum framework helps identify where deterministic workflows suffice versus where true inference adds value. The 90% rule suggests product opportunities at the boundaries: tools helping humans frame inputs for AI systems, and tools making it easy to review and refine outputs. Products Foster praises—Monologue, Granola—excel because they let users customise that final 10%.

Foster's emphasis on small, focused agents over general-purpose ones applies equally to product design. Rather than building one AI feature trying to do everything, consider focused capabilities users can compose. This mirrors broader software design principles but becomes critical when reliability and token costs matter.

For organisations, the adoption playbook is clear: CEO signal plus hands-on time plus knowledge sharing plus ongoing showcase. The timeline is 6-12 months, not overnight. But companies moving now build advantages while competitors argue about whether AI matters.

The overarching message is pragmatic optimism. Yes, full autonomous agents remain challenging for complex tasks. But the middle ground—AI workflows and agentic workflows—delivers tremendous value today. The companies winning aren't waiting for AGI. They're deploying focused automation where it works and building organisational muscle to identify new opportunities as capabilities expand.

Key Takeaways:

  • Start with your calendar to identify high-frequency activities worth automating

  • Break complex processes into components and automate pieces rather than everything at once

  • Product builders should focus on helping users frame inputs and refine outputs—the human-AI boundaries

  • Small, focused AI capabilities that compose beat trying to build one feature that does everything

  • Organisational AI adoption takes 6-12 months of sustained effort with the right structure

  • The opportunity today is in the middle of the spectrum—AI workflows and agentic workflows—not pure inference agents


That’s a wrap.

As always, the journey doesn't end here!

Please share and let us know whether you liked this separate Pod Shot, or whether you’d rather stick with your usual programming… 🚀👋

Alastair 🍽️.
