A/B Testing

A controlled experiment comparing two or more variants to determine which performs better on a defined metric, using statistical methods to ensure reliable results.

A/B testing is the gold standard for measuring the causal impact of product changes. By randomly splitting users into groups that see different variants, you isolate the effect of your change from all other variables — something observational analysis can't do.
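Random assignment is often implemented as a deterministic hash of the user ID combined with the experiment name, so a user always lands in the same group without any stored state. A minimal sketch, where the experiment name, user IDs, and two-way split are illustrative assumptions:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministically map a user to a variant by hashing user + experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # stable pseudo-random bucket
    return variants[bucket]

# The same user always gets the same variant for a given experiment:
assert assign_variant("user-42", "new-checkout") == assign_variant("user-42", "new-checkout")
```

Salting the hash with the experiment name keeps assignments independent across experiments, so a user's group in one test does not predict their group in another.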

The fundamentals: define your primary metric, calculate the required sample size (based on the minimum detectable effect, significance level, and statistical power), randomly assign users, run the test until the planned sample size is reached, and then make a decision. Common pitfalls include peeking at results early and stopping as soon as significance appears (which inflates the false positive rate), testing too many metrics at once (the multiple comparisons problem), and taking a winning variant's observed lift at face value (early winners tend to regress toward the mean).
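The sample-size step above can be sketched with the standard two-proportion formula. The 10% baseline rate and two-point lift below are illustrative inputs, not figures from the text:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p_baseline: float, p_variant: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per group for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance level
    z_beta = NormalDist().inv_cdf(power)           # critical value for desired power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    effect = p_variant - p_baseline                # minimum detectable effect
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from a 10% to a 12% conversion rate:
n = sample_size_per_group(0.10, 0.12)
```

Note how the required sample size grows quadratically as the minimum detectable effect shrinks, which is why small lifts demand very large tests.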

AI enhances A/B testing in several ways: multi-armed bandits that dynamically allocate traffic to winning variants, reducing opportunity cost; Bayesian methods that provide continuous confidence estimates instead of binary significant/not-significant decisions; and contextual bandits that personalize which variant each user sees based on their characteristics. The ideal experimentation platform combines traditional statistical rigor for high-stakes tests with AI-powered methods for rapid optimization.
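The multi-armed bandit idea can be sketched with Thompson sampling: draw a plausible conversion rate from each variant's Beta posterior and serve the variant with the highest draw, so traffic shifts toward the winner as evidence accumulates. The conversion rates below are simulated assumptions, not real data:

```python
import random

def thompson_sampling(true_rates: dict[str, float], rounds: int,
                      seed: int = 0) -> dict[str, int]:
    """Simulate Thompson sampling over variants with known (simulated) rates."""
    rng = random.Random(seed)
    successes = {v: 0 for v in true_rates}
    failures = {v: 0 for v in true_rates}
    pulls = {v: 0 for v in true_rates}
    for _ in range(rounds):
        # Sample a plausible rate from each variant's Beta(1+s, 1+f) posterior.
        draws = {v: rng.betavariate(1 + successes[v], 1 + failures[v])
                 for v in true_rates}
        chosen = max(draws, key=draws.get)  # serve the most promising variant
        pulls[chosen] += 1
        if rng.random() < true_rates[chosen]:  # simulate whether the user converts
            successes[chosen] += 1
        else:
            failures[chosen] += 1
    return pulls

# Over 2,000 simulated users, the better variant ends up with most of the traffic.
pulls = thompson_sampling({"A": 0.05, "B": 0.15}, rounds=2000)
```

Unlike a fixed 50/50 split, this reduces the opportunity cost of continuing to show the losing variant, at the price of weaker statistical guarantees for the final comparison.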
