OpenAI’s Slower, Smarter Strawberry AI, Apple’s New Lineup, Huawei’s Triple-Fold
Plus: Scaling Product teams, Vertical Apps on the Rise, Nvidia deep dive
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8351714e-2698-4dec-b3b0-8823c18cf512/Product_TAPAS_2.png?t=1700318865)
We track Product so you don't have to. Top Podcasts summarised, the latest AI tools, plus research and news in a 5 min digest.
Hey Product fans!
Welcome to this week’s 🌮 Product Tapas.
If you’ve been forwarded this or just fancy the best reading (and listening!) experience, check out the mobile app or web version. You can sign up and check previous editions here.
What’s sizzling this week? 🧑‍🍳
📰 Not Boring - We’re shortening the news intro, as it seems pointless to duplicate a short section from the main newsletter… That said, the big news of the week is that overnight OpenAI dropped a new series of AI models with ‘reasoning’ abilities. Codenamed Strawberry, the new models (o1 and o1-mini) “can reason through complex tasks and solve harder problems than previous models in science, coding, and math.” Read below for more and to catch up on what else has been keeping the tech world spinning.
⌚ Time-Saving Tools & GPTs - Similarly, I’m trialling dropping the intro to the time-saving tools. Read on, or click the link (web/app) to dive straight to the section.
🍔 Blog Bites - In this week’s essential reads for product teams, we dive into the end of the ad-supported app era, the rise of vertical apps with stronger monetisation, and why hobby-based apps like Strava are becoming the new social networks. Plus, a guide on scaling your product team, and a case study on Nvidia’s visionary CEO, Jensen Huang, who transformed the company from near-bankruptcy into an AI giant.
🎙️ Pod Shots - Finally, this week’s featured podcast discusses AI’s diminishing returns and the myth of unlimited scaling. Princeton’s Arvind Narayanan argues that the future of AI lies in optimising models for efficiency rather than building ever-bigger, more powerful ones. If you're interested in where AI is heading and the challenges it faces, don’t miss this summary.
Plenty to get stuck into - off we go! 🚀👇
📰 Not Boring
Huawei has a new triple-fold phone that costs more than a MacBook Pro
Here’s everything Apple announced at its recent phone and watch event: iPhone 16, iPhone 16 Pro, Apple Watch Series 10, AirPods 4
Plus, they will start selling AirPods with built-in hearing aid features
But, on the other side of the pond, the EU’s top court rules Apple must pay 13 billion euros in back taxes
Mistral releases Pixtral 12B, its first multimodal model
Sony announces the new faster, more powerful PS5 Pro
OpenAI releases its ‘thinking’ Strawberry AI model
It’s slower, can’t connect to the internet yet, and takes more time to think through complex problems, but it’s much better at problem-solving. So it’s something you’ll toggle on/off in your ChatGPT usage depending on the task
They’re also raising $6.5bn, at a valuation of $150bn
Love it or hate it, OnlyFans also does serious numbers ($6.3bn revenue in 2024)
Mastercard launches a new crypto debit card in Europe through a partnership with Mercuryo (spend over 40 different cryptocurrencies directly from your self-custodial wallet)
SpaceX launches Polaris Dawn, where astronauts will venture farther than any humans in more than 50 years
Google Co-Founder Sergey Brin Is Back at the Company ‘Pretty Much Every Day’ Working on AI
Audible launches a beta product that lets you create a replica of your voice, apply to be a narrator, and earn revenue
Google’s NotebookLM app can now generate ‘lively’ audio discussions with two AI hosts about the documents you’ve given it
I think it’s based on Google Illuminate - check out examples here. Whilst it may seem just ANOTHER way for content to get out there, it could also make PRDs and other product docs much easier to digest for a lot of people 💡
Glean Secures $260M in Series E Funding to continue tackling enterprise knowledge fragmentation with AI
European VC Atomico closes $1.24B funding round across two funds
⌚️ Time-Saving Tools & GPTs
Keak: AI agent that auto-improves your websites, including running A/B tests
Mokkup: Bring your paper sketches to life and create aesthetic designs for free
Mapify: Summarise YouTube videos, PDFs/docs, URLs, podcasts, and meeting recordings into mind maps in seconds
Genkin.ai: Track your cash flow through a chat-based interface and get in-depth analysis of your spending
Extract AI: Automate data extraction from raw text into structured JSON, spreadsheets, and workflows.
🍔 Blog Bites - Essential Reads for Product Teams
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4a80783b-2423-4b6e-b856-ed2301320082/blog_bites.png?t=1706476215)
Andrew Chen’s latest article argues it may no longer be possible to build a broadly horizontal (mass-market, catch-all) app like YouTube or TikTok, because we’re in the final years of the mobile S-curve and the hurdles to doing so are now substantial.
Why broadly horizontal apps are hard:
- the novelty effect has worn off on new app ideas
- retention is more elusive than ever, because of the competition
- building an ad-supported startup is sort of a "two miracle" problem that takes years to nail
- easy growth is mostly over.
So what’s next?
Vertical apps with beefier monetisation, and different network characteristics:
- Rather than ads, these products often let customers spend big dollars directly to upgrade their experience
- network effects can work differently; networks can be built around specific activities and interests
- we'll see more apps focused on single user utility, and the use of game design mechanics (as Duolingo has done), to create stickiness
Trends: Goodbye Tinder, hello Strava: have ‘hobby’ apps become the new social networks?
In a related vein, this interesting piece in the Guardian talks about how millions are rejecting the culture-war hotspots of the major social media sites in favour of apps dedicated to activities they enjoy, while bonding with their fellow users.
This is leading to unintended use cases for these apps, as some people use services like Strava to meet future partners, “seeking refuge in apps that promise to connect them to people with whom they have common interests.”
Certainly an interesting trend to keep an eye on for those designing future social apps.
Learn: How to scale a product company; Leaders’ guide to growth
In this recent Mind The Product piece, Marina Stojanovski, Head of Product at Gradyent, dives into the key challenges and strategies for scaling a business, from evolving company culture to optimising processes and technology.
Scaling sounds impressive and desirable. It signifies a company's success, indicating that it has found product-market fit and is now on a positive growth trajectory.
But scaling is not easy. Behind the signs of success lie significant challenges, and navigating these effectively during this critical growth phase is crucial.
A scaleup is a different journey from a startup. Key areas covered:
People and culture: Hire for evolution | Innovators, accelerators or maintainers | Be prepared for less commitment from new hires | Empower teams and let go of control
Operations and technology: Funding for scale | Scalable processes in every part of your organisation | Scalable technology
Organisational design: New specialised roles | Scalable team structure with more accountability
Case Study: The Future Of Technology Belongs To One Man, Jensen Huang
Given the prominence of Nvidia and everything it does, it’s probably worth finding out a bit more about the driving force behind it.
This article from Bill Kerr dives into the history of Jensen Huang, co-founder and CEO of Nvidia, and covers how he drove Nvidia's rise through bold bets on GPUs and AI. Despite challenges like near bankruptcy, Huang's focus on innovation, such as developing CUDA and AI-driven data centres, has kept Nvidia at the forefront.
Visionary leadership: Jensen Huang's foresight in betting on GPUs and AI has kept Nvidia at the cutting edge of technology
Overcoming adversity: Nvidia faced near bankruptcy and market stagnation but bounced back through innovation and calculated risks.
CUDA development: The introduction of CUDA revolutionised high-performance computing, accelerating tasks by up to 20 times (see the quick sketch after this list).
AI dominance: Nvidia is a key player in the AI revolution, providing the hardware for machine learning, data centres, and AI applications.
Flat management: Huang leads with a non-traditional, flat organisational structure, with a focus on agility and direct involvement.
Longevity and innovation: Nvidia thrives by continuously investing in long-term, forward-looking technologies, creating markets where none previously existed.
Cultural impact: Nvidia has played a significant role in industries beyond tech, including gaming, automotive, and healthcare.
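For the curious, here’s a minimal sketch of the data-parallel programming model CUDA introduced, written with Numba’s Python CUDA bindings rather than Nvidia’s original C API. The kernel and numbers are purely illustrative:

```python
# A minimal, illustrative sketch of CUDA's data-parallel model, using
# Numba's Python bindings (not Nvidia's original C API).
# Requires a CUDA-capable GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)        # this thread's absolute index across the grid
    if i < out.size:        # guard: the grid may be larger than the data
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # enough blocks to cover n
vector_add[blocks, threads_per_block](a, b, out)  # thousands of threads run in parallel
```

The speed-up comes from that launch line: instead of looping over a million elements one at a time, the GPU gives each element its own lightweight thread.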
🎙️ Pod Shots - Bitesized Podcast Summaries
Arvind Narayanan is a professor of Computer Science at Princeton and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a prominent critic of the AI scaling myths around the importance of just adding more compute. He is also the lead author of a textbook on the computer science of cryptocurrencies. In his recent 20VC podcast appearance he covers a broad range of topics, including why more compute will not deliver an equal and continuous level of performance improvement, the future of AI models, and the biggest dangers that AI poses to society today.
⚒️ AI Scaling Myths, The Core Bottlenecks in AI Today & The Future of Models
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/01367980-e1cd-4d60-a373-c5a45dc4151a/20VC.jpg?t=1720729134)
20VC
🎥Watch the full episode here
📆 Published: August 2024
🕒 Estimated Reading Time: 4 mins. Time saved: 45 mins🔥
🚀 The Myth of Unlimited Scaling: Why Bigger Isn’t Always Better
Arvind argues that the common belief that AI will continue to grow exponentially — with models becoming larger and more powerful with every iteration — is a myth. In reality, the scaling of AI models is reaching its practical limits, and we may not see many more cycles of models increasing in size by orders of magnitude.
One key reason for this is the limitation of available data. These models are already trained on almost all the data companies can legally access. As a result, more compute power doesn’t always equate to better performance. While earlier AI models like GPT-4 relied on more data and larger architectures to improve, the trend is shifting.
Takeaways:
Existing models will be optimised for efficiency and accuracy rather than focusing on building ever-larger ones.
Smaller gains in model size are leading to diminishing returns; focus will shift to extracting more value from current capabilities.
🔄 Data Bottlenecks: Why It's the Real Limiting Factor
The excitement around AI often overlooks one fundamental truth: data is a bottleneck. While many people assume there’s still an abundance of untapped data — like YouTube’s 150 billion hours of video — the reality is that when you break this down into usable data, it’s not as vast as it seems.
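To make that concrete, here’s a rough back-of-envelope sketch. Apart from the 150 billion hours headline figure, every number below is an illustrative assumption rather than a figure from the episode:

```python
# Back-of-envelope: how much usable training text is inside
# "150 billion hours of video"? All constants after the first
# are illustrative assumptions, not figures from the episode.
HOURS_OF_VIDEO = 150e9   # headline YouTube figure
SPEECH_FRACTION = 0.3    # assume only ~30% of footage has usable speech
WORDS_PER_HOUR = 9_000   # ~150 spoken words per minute
TOKENS_PER_WORD = 1.3    # rough tokens-per-word ratio for English

tokens = HOURS_OF_VIDEO * SPEECH_FRACTION * WORDS_PER_HOUR * TOKENS_PER_WORD
print(f"~{tokens:.1e} tokens before any quality filtering")  # ~5.3e14
```

Deduplication and quality filtering would shrink that figure further, which is Narayanan’s point: headline numbers collapse quickly once you insist on usable data.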
And while synthetic data (artificially created data) has been touted as a potential solution, Narayanan warns that quality matters far more than quantity. Generating vast amounts of synthetic data might increase the dataset, but it often comes at the cost of quality, resulting in diminishing returns.
Takeaways:
Data quality will be prioritised over quantity. A small but rich dataset is more effective than an enormous one filled with noise.
Synthetic data might help in niche cases but is not a comprehensive solution to the growing data bottleneck problem.
💻 Compute Power: Does More Really Mean Better?
It’s tempting to assume that more compute power will continue to lead to better AI models. However, the days of dramatic improvements through more compute are numbered. As Narayanan notes, one of the key trends we’re seeing is the rise of smaller models that offer the same capabilities as their larger predecessors, but with a fraction of the compute power.
This shift toward smaller models is happening because the cost of deploying and running AI models is significant. While more compute still helps in some cases, the marginal gains are decreasing, and the economic burden of operating large models is pushing companies to rethink their approach.
Takeaways:
Models will continue to balance the cost of compute with actual performance gains.
Smaller, more efficient models are becoming the standard — lower operational costs without sacrificing capability.
Similarly, businesses using AI should optimise for performance relative to cost, especially as they scale.
🛠 Product First, AI Second: The Pitfalls of AI Hype
One of the biggest mistakes AI companies have made in recent years is assuming that simply putting powerful models out into the world would be enough to drive success. Narayanan highlights that many developers believed AI was so special that they didn’t need to focus on traditional product-market fit.
But as with any tech product, AI must solve real problems for real users to be successful. Relying on the model’s capabilities without understanding how it fits into a user’s workflow is a critical misstep that many AI startups have made.
Takeaways:
AI alone is not a product. Always start with the user problem you're solving.
Build products that integrate AI in ways that deliver tangible value.
Focus on product-market fit and user needs, rather than being captivated by AI’s potential capabilities.
📈 The Future of AI Models: Smaller, Faster, and More Efficient
As compute and data become bottlenecks, the future of AI lies in smaller, more efficient models. These models are not only cheaper to run but also more flexible, enabling on-device deployment, which brings significant benefits in terms of privacy and speed.
Smaller models also mean that the cost barrier for startups and smaller enterprises to use AI will decrease. It’s no longer about who can afford to train the largest models but who can optimise their AI solutions to deliver value cost-effectively.
Takeaways:
Smaller models are creating opportunities for startups to compete.
Consider on-device AI deployment for better privacy and efficiency.
Focus on optimising AI models for cost-effective operations without sacrificing performance.
🔮 What’s Next? Beyond the Hype Cycle
What does the future hold for AI? Narayanan predicts that real breakthroughs will come from new scientific ideas, not from scaling up existing models. While companies have made impressive strides with large models like GPT-4, the next wave of AI will likely focus on more specialised tasks, such as agents that can perform more complex, multimodal tasks beyond simple text-based responses.
Multimodal capabilities, where AI can understand and generate not just text but also images, audio, and video, offer exciting possibilities. But the most significant advancements are likely to come from areas we haven’t yet explored — from new architectures and learning methods rather than simply scaling up what already exists.
Takeaways:
The future of AI may lie in multimodal capabilities and agents, which can perform more diverse tasks.
Keep an eye on emerging trends in AI that offer new solutions beyond scaling current models.
AI is not magic, and its progress is slowing in areas where we’ve relied on brute force — more data, more compute, bigger models. As we reach the limits of scaling, the focus must shift to smarter, more efficient ways of deploying AI.
For founders and product managers, the message is clear: prioritise building products that people actually need. AI is a tool, not the solution itself. As we move into the next phase of AI’s evolution, the winners will be those who understand how to use this tool to solve real problems, not just those who can scale the biggest models.
Want to know more quickly? Just ask the episode below [web only]👇️🤯
or if you prefer, 🎥Watch the full episode here
📅Timestamps:
(00:00) Intro
(01:18) AI Hype vs. Bitcoin Hype: Similarities & Differences
(03:49) The Misalignment Between Compute & Performance
(08:10) Synthetic Data
(09:30) Creating Effective Agents Despite Incomplete Data
(12:00) Why Is the AI Industry Shifting Toward Smaller Models
(16:31) The Growing Gap Between AI Models & Compute Capabilities
(19:44) Predictions on the Timeline for AGI
(27:00) Policy Proposals for U.S. and European AI Regulation
(29:29) AI & Deepfakes: The Risk of Discrediting Real News
(35:59) Revolutionising Healthcare with AI in Your Pocket
(40:29) Is AI Job Replacement Fear Overhyped or Real?
(41:46) AI's Potential as a Weapon
(46:19) Quick-Fire Round
That’s a wrap.
As always, the journey doesn't end here!
Please share and let us know what you would like to see more or less of so we can continue to improve your Product Tapas. 🚀👋
Alastair 🍽️.