Experiment Velocity
The rate at which an organization designs, launches, analyzes, and acts on experiments, typically measured as the number of experiments concluded per unit time, reflecting the speed of the organization's learning and iteration cycle.
Experiment velocity is a meta-metric that measures the health and throughput of an organization's experimentation program. While individual experiments measure specific product changes, experiment velocity measures the experimentation capability itself. High experiment velocity means more hypotheses tested, more learnings generated, and more opportunities to discover improvements. For growth teams, experiment velocity is often the most important driver of long-term growth because the rate of learning compounds: an organization running 100 experiments per quarter will discover and ship more improvements than one running 10, all else being equal. Companies like Booking.com (running over 1000 concurrent experiments), Netflix, and Microsoft have built competitive advantages through extreme experiment velocity.
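The compounding claim above can be made concrete with a toy model. The win rate and per-win lift below are illustrative assumptions, not benchmarks from any of the companies named.

```python
# Toy model of compounding learning: each winning experiment multiplies the
# goal metric, so more experiments per quarter means faster compounding.
# Default win_rate and avg_lift are hypothetical illustration values.
def quarterly_lift(experiments_per_quarter, win_rate=0.10, avg_lift=0.02):
    """Expected quarterly metric lift, assuming wins compound multiplicatively."""
    expected_wins = experiments_per_quarter * win_rate
    return (1 + avg_lift) ** expected_wins - 1

fast_team = quarterly_lift(100)  # ~10 wins -> roughly +22% per quarter
slow_team = quarterly_lift(10)   # ~1 win  -> roughly +2% per quarter
```

Under these assumptions the 100-experiment team compounds roughly ten times faster per quarter, which is the "all else being equal" advantage the paragraph describes.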
Experiment velocity is affected by multiple factors across the experimentation lifecycle. Ideation velocity depends on the team's ability to generate testable hypotheses, which is driven by access to data, customer insight, competitive analysis, and creative problem-solving. Design velocity depends on having experiment templates, automated power analysis, and clear guidelines that reduce the time from hypothesis to launch-ready experiment. Execution velocity depends on feature flagging infrastructure, experimentation platform capabilities, and engineering resources for implementing treatments. Analysis velocity depends on automated statistical analysis, self-serve dashboards, and clear decision criteria. Decision velocity depends on organizational alignment on metrics, clear decision-making authority, and pre-agreed criteria for making ship/no-ship calls once analysis is complete. A bottleneck at any stage caps overall velocity.
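Because a bottleneck at any stage caps overall throughput, one simple diagnostic is to compare median durations per lifecycle stage across recent experiments. The stage records below are hypothetical; any real implementation would pull them from the team's experiment tracker.

```python
import statistics

# Hypothetical per-experiment stage durations in days (illustrative data).
experiments = [
    {"ideation": 3, "design": 2, "execution": 5, "analysis": 1, "decision": 4},
    {"ideation": 2, "design": 3, "execution": 6, "analysis": 2, "decision": 7},
    {"ideation": 4, "design": 2, "execution": 5, "analysis": 1, "decision": 6},
]

def stage_medians(records):
    """Median duration per lifecycle stage across a batch of experiments."""
    stages = records[0].keys()
    return {s: statistics.median(r[s] for r in records) for s in stages}

medians = stage_medians(experiments)
bottleneck = max(medians, key=medians.get)  # stage with the longest median duration
```

In this illustrative data the decision stage is the bottleneck, a common pattern when ship/no-ship authority is unclear: speeding up any other stage would not raise end-to-end velocity.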
Teams should measure and track experiment velocity as a key performance indicator for the growth function. Useful sub-metrics include experiments launched per week, median time from hypothesis to launch, median experiment duration, median time from experiment conclusion to ship decision, and the percentage of experiments that produce actionable results. Common pitfalls include optimizing for quantity over quality (running many trivial experiments to inflate the count), not investing in the infrastructure and tooling that enables sustainable velocity, and creating organizational bottlenecks through excessive review processes. The goal is not to maximize the number of experiments but to maximize the rate of validated learning, which requires a balance of quantity, quality, and the speed of acting on results.
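The sub-metrics listed above can be computed from a simple experiment log. The field names and dates below are hypothetical stand-ins for whatever an experimentation platform actually records.

```python
from datetime import date
from statistics import median

# Hypothetical experiment log; field names and dates are illustrative.
log = [
    {"hypothesized": date(2024, 1, 2), "launched": date(2024, 1, 9),
     "concluded": date(2024, 1, 23), "decided": date(2024, 1, 25), "actionable": True},
    {"hypothesized": date(2024, 1, 5), "launched": date(2024, 1, 19),
     "concluded": date(2024, 2, 2), "decided": date(2024, 2, 12), "actionable": False},
    {"hypothesized": date(2024, 1, 8), "launched": date(2024, 1, 15),
     "concluded": date(2024, 1, 29), "decided": date(2024, 1, 30), "actionable": True},
]

def days(start, end):
    return (end - start).days

velocity = {
    "median_days_to_launch": median(days(e["hypothesized"], e["launched"]) for e in log),
    "median_experiment_duration": median(days(e["launched"], e["concluded"]) for e in log),
    "median_days_to_decision": median(days(e["concluded"], e["decided"]) for e in log),
    "pct_actionable": 100 * sum(e["actionable"] for e in log) / len(log),
}
```

Tracking medians rather than means keeps one stalled experiment from distorting the picture, and the time-to-decision metric surfaces exactly the review-process bottlenecks the paragraph warns about.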
Advanced approaches to improving experiment velocity include automation of the entire experiment lifecycle (auto-generating power analyses, auto-configuring experiment parameters, auto-analyzing results), self-serve experimentation platforms that allow product managers to run experiments on certain types of changes without engineering support, experiment templates that standardize common experiment patterns, and portfolio management approaches that optimize the allocation of experimentation resources across teams and product areas. Some organizations use experiment velocity as an input to their growth model, estimating how many winning experiments are needed per quarter to hit growth targets and working backward to the required experimentation throughput. Machine learning can accelerate velocity by predicting which experiments are most likely to succeed, enabling better prioritization of the experiment backlog.
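The working-backward calculation is straightforward arithmetic. Every input below (target lift, lift per win, win rate) is an illustrative planning assumption; real values come from a team's own historical experiment data.

```python
import math

# Hypothetical planning inputs (all illustrative assumptions).
target_quarterly_lift = 0.15   # want +15% on the goal metric this quarter
avg_lift_per_win = 0.02        # average lift from one winning experiment
win_rate = 0.10                # fraction of launched experiments that win

# Wins compound multiplicatively, so we need the smallest integer n with
# (1 + avg_lift_per_win) ** n >= 1 + target_quarterly_lift.
wins_needed = math.ceil(
    math.log(1 + target_quarterly_lift) / math.log(1 + avg_lift_per_win)
)
experiments_needed = math.ceil(wins_needed / win_rate)  # required throughput
```

With these assumptions the team needs 8 wins, hence roughly 80 experiments per quarter: a concrete throughput target that infrastructure and staffing decisions can be sized against.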
Related Terms
Growth Experimentation Framework
A structured organizational process for systematically generating, prioritizing, running, and learning from experiments across the entire user lifecycle, designed to maximize the rate of validated learning and compound the impact of product improvements.
Experiment Review Board
A cross-functional governance body that reviews experiment designs before launch and results before ship decisions, ensuring statistical rigor, alignment with organizational metrics, and prevention of common methodological errors.
Experiment Documentation
The systematic recording of experiment hypotheses, designs, configurations, results, and learnings in a structured, searchable format that preserves institutional knowledge and enables evidence-based decision-making across the organization.
Multivariate Testing
An experimentation method that simultaneously tests multiple variables and their combinations to determine which combination of changes produces the best outcome, unlike A/B testing, which typically varies a single element at a time.
Split Testing
The practice of randomly dividing users into two or more groups and exposing each group to a different version of a product experience to measure which version performs better on a target metric, commonly known as A/B testing.
Holdout Testing
An experimental design where a small percentage of users are permanently excluded from receiving a new feature or set of features, serving as a long-term control group to measure the cumulative impact of product changes over time.