Issue #1

AI Growth Stack Weekly #1

Welcome to the first AI Growth Stack Weekly. This week we're covering the foundations: RAG pipelines that actually work, how growth loops compound with AI, and the embedding models worth your attention in 2026.

1. The 7 RAG Pipeline Mistakes Everyone Makes

Most RAG implementations fail not because of the LLM, but because of bad chunking, wrong embedding models, or missing re-ranking. We break down the seven most common mistakes and how to fix each one.
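As a point of reference on the chunking side, the naive baseline most teams start from is a fixed-size splitter with overlap, so that context isn't cut cold at window boundaries. This is a minimal sketch, not the article's recommended strategy, and the sizes are illustrative placeholders:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character windows.

    Illustrative only: real pipelines usually chunk on token or
    sentence boundaries, and chunk_size/overlap are placeholders,
    not recommended values.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Even this toy version shows the trade-off the article digs into: more overlap means better boundary recall but more redundant vectors to store and re-rank.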

2. Building Growth Loops with LLMs

AI-powered growth loops create self-reinforcing cycles where each user's activity generates inputs that attract more users. Learn the three loop archetypes that work best for AI-native products.

3. Embedding Models Benchmark 2026

We benchmarked OpenAI, Cohere, Voyage, and BGE models on real-world retrieval tasks. The results surprised us — the most expensive model isn't always the best for your use case.

4. Understanding Embedding Models in 2026

A comprehensive overview of the embedding landscape, from choosing the right model dimensions to evaluating multilingual performance and cost trade-offs.
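For context on what model dimensions actually buy you: retrieval ultimately comes down to comparing vectors, most commonly by cosine similarity, and higher-dimensional embeddings cost more to store and compare. A minimal pure-Python sketch with made-up toy vectors (no real model involved):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors.

    Toy illustration: real embedding models emit hundreds to
    thousands of dimensions, which is where the cost/quality
    trade-off discussed above comes from.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Identical directions score 1.0 and orthogonal ones 0.0, regardless of dimension count; what extra dimensions buy is finer-grained separation between near-neighbors.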

5. Hacker News: RAG is Eating the World

A great discussion thread on the current state of RAG adoption in production, with insights from engineers at Notion, Linear, and Vercel on what's actually working.

Issue #2