AI Jobs Paradox, OpenAI Product Blitz, Apple Face Computers
Plus: Design homogenisation, meeting prep automation, agent architecture strategies

We track Product so you don't have to. Top Podcasts summarised, the latest AI tools, plus research and news in a 5 min digest.
Hey Product Fans!
Welcome to this week’s 🌮 Product Tapas.
New here? We're your shortcut to staying sharp. Essential stories, practical tools, real insights.
For the best reading experience, check our web or app version and sign up for future editions here.
What’s sizzling? 🥘
AI's employment paradox reaches peak absurdity (grad hiring down as AI has 'more experience', yet grad jobs needed to get experience), OpenAI's product blitz goes completely mental (everything all at once), and Apple ditches Vision Pro upgrade ($3,500 headsets out, face computers in). Meanwhile, developers hit 90% AI adoption despite not trusting the outputs (classic).
📰 Not Boring → AI jobs paradox, product release madness, productivity tool evolution, trust contradictions
⌚️ Productivity Tapas → Interactive video demos, AI meeting prep, design review automation
🍔 Blog Bites → Organisational change narratives, design greyification crisis, AI agent architecture strategies
🎙️ Pod Shots → Everything you need to know about AI Evals: measure what actually works beyond vibes
Let's go 🚀
📰 Not boring
The Great AI Jobs Paradox
Anthropic's CPO admits they rarely hire fresh grads - AI now handles the entry-level tasks, and the grads lack experience. The irony? Entry-level jobs are exactly what used to give them that experience.
AI Agents Arrive at Citi; they're running a 5,000-person pilot to find out how helpful the new "agentic" technology is to staff in areas like research and client profiling. Key question: does 'helpful' mean more effective, or that these 5,000 people are no longer needed…?
Accenture plans to 'exit' staff who can't be reskilled on AI amid its restructuring strategy (lots of noise, but worth noting it's 1.4% of headcount)
We've reached peak AI employment paradox. Companies won't hire entry-level workers because AI does their jobs, but then those same workers lack experience that... comes from entry-level jobs. It's a perfectly circular problem that nobody wants to solve because the economics are too compelling. Citi's 5,000-person "pilot" feels less about helpfulness and much more about finding out how many humans they can replace whilst maintaining the fiction that it's about productivity enhancement.
The AI Product Blitz Continues
Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy. It can run autonomously for 30 hours straight
OpenAI takes on Google and Amazon at once with new agentic shopping system where US customers can make Etsy and Shopify purchases within chats
OpenAI launches Pulse - an AI assistant that connects across platforms and proactively suggests actions. Ben Evans notes it resembles Google Now but powered by LLMs (so it might actually work)
OpenAI will reportedly release a TikTok-like social app alongside Sora 2
The AI product release cycle has gone completely mental. OpenAI is launching everything from shopping assistants to TikTok clones, whilst Anthropic's 30-hour autonomous agents sound either revolutionary or terrifying depending on your perspective. The Google Now comparison for Pulse is spot on - proactive AI might finally work because LLMs can actually understand context, unlike the keyword-matching disasters of the past.
Productivity
Claude Code now supports a new Figma MCP connector. This allows it to analyse design files at a granular level - examining components, design tokens, layout specifications, and more - then generate production-ready code
Figma Make now also supports MCP
Your favourite note-taking app (and mine), Granola, launches Chat and Recipes
Microsoft sets the tone for 'Vibe Working' with new Agent Mode in Word and Excel
Granola keeps going from strength to strength with each new feature adding genuinely useful workflow improvements. Meanwhile, the design-to-code pipeline is finally getting serious. When both Claude and Figma are pushing MCP integration for granular design analysis, we're looking at the potential end of the designer-developer handoff nightmare. Microsoft's "Vibe Working" branding is questionable, but the underlying capability - AI agents that understand your work context - is the future of productivity software.
The Trust Paradox Deepens
Google has released its 2025 DORA report showing how developers are using AI. AI adoption among developers hit 90% (up 14%), with 80%+ reporting productivity gains - though a "trust paradox" persists: many use AI tools heavily despite limited trust in their outputs
Another week, another report on the upsides of AI. This time from the FT: America's top companies keep talking about AI but can't explain the upsides. The report cites market differentiation as the most commonly claimed benefit
90% adoption with limited trust is the defining characteristic of this AI moment. Developers are using tools they don't fully trust because the productivity gains are undeniable, whilst executives claim "market differentiation" benefits that sound suspiciously like everyone doing the same thing. The FT's observation about companies unable to explain AI upsides is brutal but accurate - most AI implementations feel more like table stakes than competitive advantages.
Hardware Pivots and Platform Plays
Apple shelves Vision Pro headset revamp to prioritise Meta-like AI smart glasses
Amazon is overhauling its devices to take on Apple in the AI era
Gemini comes to Google TV. Distribution continues to be king, especially when the products are all homogenising around the same quality for most people
Apple's Vision Pro pivot tells you everything about where hardware is heading - away from expensive, isolated experiences toward ubiquitous, AI-powered wearables. The smart glasses race is heating up because they solve the fundamental problem of AI interfaces: how do you interact with intelligence that's supposed to be everywhere? Google's TV integration is the quiet winner here - when AI capabilities commoditise, distribution becomes everything.
Odds and Ends
Perplexity search API gives developers access to the full power of the Perplexity index
Meta introduces Vibes feed for AI-generated content
Google Labs has launched an AI Mood Board tool called Mixboard
Amazon to pay $2.5 Billion to settle claims it tricked Prime customers
Perplexity's API democratises access to their search intelligence, opening up interesting possibilities for developers who want to build on top of their index. Meta's "Vibes feed" for AI content feels like an admission that the platform is becoming increasingly synthetic - at least they're being honest about it. Google's Mixboard mood tool is the kind of experimental feature that either becomes essential or disappears entirely. Amazon's $2.5B dark patterns settlement is a rounding error for them but sets an important precedent - deceptive UX design finally has real financial consequences.
⌚️ Productivity Tapas: Time-Saving Tools
Qudemo: turn your video demo into an interactive one. Viewers can ask questions, get instant answers, and jump to the exact moment in the video where the answer is explained, turning demos into real conversations
Ambient Daily Briefing: get a daily email that preps you for every meeting. It pulls from LinkedIn, the web, transcripts, and past notes to give you a quick rundown on who's in the room, what they care about, and what's worth asking
Zeplin AI Design Review: catch design issues before sharing with devs - layout inconsistencies, missed token/component usage, accessibility issues, and typos
Remember. Product Tapas subscribers get our complete toolkit - 460+ personally tailored, time-saving tools for PMs and founders. Your shortcut to efficiency and what's hot in product management 🔥
Check the link here to access.
🍔 Blog Bites - Essential Reads for Product Teams

Strategy: Why "Trying Something" Often Leads Nowhere
We’re back with another great piece from John Cutler where he explores the paradox of organisational change: whilst we intuitively understand that systems get stuck in patterns, our tendency to create neat narratives leads us to ineffective solutions. He demonstrates how to work with this human tendency rather than against it to create meaningful change. Read the full article here.
💡 "It wasn't about rejecting the narrative fallacy outright, but about co-authoring a better story."
Key Takeaways
• Systems Thinking: Humans intuitively understand attractor states - that systems revert to patterns unless external factors change; We sense when we're stuck but struggle to identify what actually needs changing; Companies pour billions into narratively pleasing solutions only to end up back where they started
• Narrative Psychology: We're narrative-producing creatures who weave neat causal stories after events occur; The "narrative fallacy" leads us to convenient, coherent-to-us answers that may not address root causes; People want to be co-authors of solutions, not just recipients of theoretical frameworks
• Change Strategy: Don't challenge existing narratives head-on - work within them whilst creating space for better practices; Provide stakeholders with stories they can support whilst experimenting with complexity-aware approaches; Focus on what people can do, not just what they should think about
• Practical Application: Frame changes in terms that satisfy narrative needs ("building stronger habits for predictability"); Create shared language that makes it safe to surface problems early; Explore hidden incentives that drive problematic behaviours
• Leadership Approach: Balance systems awareness with practical action that people can rally behind; Hold yourself accountable with regular check-ins on meaningful measures; Work collaboratively to redefine what success actually means in context
Design: The Great Greyification - Why Everything Looks Boringly the Same
As a car fan this post from Craig Unsworth resonated HARD. JFC, everything is so similar and boring. In it he explores (rants, but in a very nice way) the cultural flattening of design across industries, from automotive to fashion to interiors. He argues that our collective retreat into safe, grey neutrality is eroding brand distinction and cultural personality. Read the full article here.
💡 "If everything looks and feels the same, what's the point in choosing one brand, product, or place over another?"
AMEN.
Key Takeaways:
• Cultural Impact: Design homogenisation is eroding distinctive cultural traits across regions; We're losing eccentricity in Britain, boldness in America, and flair in Europe; The result is a steady cultural flattening that makes places and brands forgettable
• Industry Examples: Automotive: Modern cars are indistinguishable silver/grey SUVs versus distinctive silhouettes of 30-40 years ago; Fashion: Luxury brands (CÉLINE, SAINT LAURENT, BURBERRY) have abandoned heritage typography for identical sans-serif block capitals; Interiors: Paint companies launch dozens of grey shades while bold colours quietly disappear from charts
• Business Consequences: Brands miss opportunities to differentiate in crowded markets; Products don't stick in consumer memory when they lack visual distinction; Safe design choices lead to forgettable brand experiences
• The Root Cause: Design by algorithm rather than imagination drives decision-making; Risk aversion leads to template-based approaches across industries; Functional optimisation (like wind-tunnel testing) prioritised over emotional connection
• The Solution: Bring back unapologetic colour, texture, and pattern in design choices; Embrace individuality and confidence in brand expression; Resist the urge to strip everything back to safe neutrality
Product Strategy: Why Your AI Agent's Architecture Matters More Than Its Accuracy
Umang explores the critical gap between building capable AI agents and creating ones users actually trust and adopt. The key insight: architectural decisions shape user experience more than raw performance metrics.
💡 "Users don't trust agents that are right all the time. They trust agents that are honest about when they might be wrong."
Key Takeaways:
• Architecture Layers: Four critical decision points determine agent success - Context & Memory: what your agent remembers and for how long; Data & Integration: depth of system connections and access levels; Skills & Capabilities: specific functions that create user dependency; Orchestration: how work is routed between agents
• Orchestration Patterns: Choose complexity based on actual needs, not imagined ones - Single-Agent: Start here for most use cases - simple, debuggable, predictable; Skill-Based: Use when efficiency matters - specialized agents with routing logic; Workflow-Based: Enterprise favourite for compliance and auditability
• Trust Strategies: Transparency beats perfection for user adoption - Confidence calibration: Match stated confidence with actual accuracy rates; Reasoning transparency: Show users the agent's decision-making process; Graceful escalation: Smooth handoffs to humans with full context preservation
• Implementation Reality: Most teams over-engineer from the start - Begin with single-agent architecture handling 80% of use cases; Add complexity only when hitting genuine limitations; Focus on user trust over technical sophistication
• User Psychology: Complex problems require different approaches than simple queries - Users abandon agents after first complex failure, regardless of routine success; Admission of uncertainty builds more trust than confident mistakes; Context preservation across interactions creates illusion of understanding
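The single-agent-with-graceful-escalation pattern from the takeaways above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the article: `call_model()` stands in for any LLM API, and the 0.7 confidence floor is an assumed threshold.

```python
CONFIDENCE_FLOOR = 0.7  # assumed threshold: below this, escalate rather than guess

def call_model(query: str) -> tuple[str, float]:
    """Placeholder for an LLM call returning (answer, self-reported confidence)."""
    return f"Draft answer for: {query}", 0.55

def handle(query: str, history: list[str]) -> str:
    history.append(query)  # context preservation across turns
    answer, confidence = call_model(query)
    if confidence < CONFIDENCE_FLOOR:
        # Admit uncertainty and hand off with full context, rather than
        # guessing confidently - the trust-building behaviour the piece argues for.
        return (f"I'm not confident here (confidence {confidence:.0%}). "
                f"Escalating to a human with the last {len(history)} messages.")
    return answer

history: list[str] = []
print(handle("Can you cancel order #4521 and refund the difference?", history))
```

The point of the sketch: the architecture (when to escalate, what context travels with the handoff) is decided before any model is involved, which is exactly why it shapes user trust more than raw accuracy does.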
🎙️ Pod Shots - Bitesized Podcast Summaries
Remember, we've built an ever-growing library of our top podcast summaries. Whether you need a quick refresher, want to preview an episode, or need to get up to speed fast - we've got you covered.
Check it out here
🤖 Everything You Need to Know About Evals.
Hey there!
This week I’m trying something different – I sent you this week’s Pod Shot on Monday as a separate deep dive. It’s a cracker from Lenny on AI evaluations featuring experts from GitHub and Google.
What you'll discover:
• 🎯 The evaluation hierarchy that beats vibes every time
• 🔄 LLM-as-judge techniques that actually work
• 📊 Real-world frameworks from companies shipping AI at scale
• ⚡ Quick wins you can implement this week
Whether you're building your first AI feature or scaling to millions of users, this one's packed with actionable insights.
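To make "graded assertions instead of vibes" concrete, here's a minimal sketch of the LLM-as-judge pattern the episode covers. Everything here is an assumption for illustration: `judge_model()` is a keyword-matching stand-in for a real LLM call, and the rubric and cases are invented.

```python
def judge_model(prompt: str) -> str:
    """Placeholder judge: a real implementation would call an LLM here."""
    answer_line = next(l for l in prompt.splitlines() if l.startswith("Answer:"))
    return "PASS" if "refund" in answer_line.lower() else "FAIL"

def llm_judge(question: str, answer: str, rubric: str) -> bool:
    """Grade one (question, answer) pair against a rubric via the judge."""
    prompt = (
        f"Rubric: {rubric}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reply PASS or FAIL."
    )
    return judge_model(prompt).strip().upper() == "PASS"

# A tiny eval set: each case is (question, model answer, rubric).
cases = [
    ("How do I get a refund?", "Go to Orders > select item > Request refund.", "Mentions the refund flow"),
    ("How do I get a refund?", "Try turning it off and on again.", "Mentions the refund flow"),
]
results = [llm_judge(q, a, r) for q, a, r in cases]
print(f"pass rate: {sum(results)}/{len(results)}")
```

Swap the placeholder for a real model call and run it over a few dozen labelled cases, and you have a repeatable eval you can track release over release.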
Missed the separate email? Check your inbox for "🎙️ Pod Shots - The AI Evaluation Playbook That Actually Works" or just follow the link below 😉
That’s a wrap.
As always, the journey doesn't end here!
Please share and let us know what you would like to see more or less of so we can continue to improve your Product Tapas. 🚀👋
Alastair 🍽️.