Statistical Power
The probability that an experiment will correctly detect a real effect when one exists, determined by sample size, effect size, and significance level. Higher power reduces the risk of missing genuine improvements.
Statistical power is the probability of rejecting a false null hypothesis, meaning the likelihood your experiment will detect a real difference if one exists. An experiment with 80% power has a 20% chance of missing a genuine effect and incorrectly concluding there is no difference. Power depends on three factors: sample size, minimum detectable effect size, and significance level.
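How those three factors interact can be sketched with a normal-approximation power calculation for a two-sided, two-proportion z-test. This is a minimal illustration, not any particular platform's method; the function name and traffic numbers are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def approximate_power(p_control, p_variant, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided, two-proportion z-test.

    Uses the normal approximation and assumes two equally sized arms.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value at significance level alpha
    se = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_variant * (1 - p_variant) / n_per_arm)
    effect = abs(p_variant - p_control)
    # Probability the observed z-statistic clears the critical value
    return 1 - z.cdf(z_alpha - effect / se)

# 10% baseline conversion, hoping to detect a lift to 11%, 5,000 users per arm
print(round(approximate_power(0.10, 0.11, 5000), 2))  # well below the 0.8 target
```

Raising `n_per_arm`, widening the effect (`p_variant - p_control`), or loosening `alpha` all push the result up, which is exactly the three-way trade-off described above.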
For growth teams, understanding statistical power prevents the common mistake of running underpowered experiments that waste time and lead to inconclusive results. AI-powered experimentation platforms automate power analysis and sample size calculations, but growth engineers should understand the underlying trade-offs. Increasing power requires either more users, which means longer experiments, or detecting only larger effects, which means missing small but potentially valuable improvements. Teams should conduct pre-experiment power analysis to determine how long each test needs to run, establishing clear stopping criteria before launch. Running underpowered experiments is worse than not experimenting at all because false negatives lead to incorrect conclusions about what works. The practical recommendation is to target at least 80% power for all experiments and to design tests around the minimum effect size that would justify implementation.
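The pre-experiment power analysis recommended above can be sketched with the standard normal-approximation sample-size formula for a two-proportion test. This is an illustrative sketch with hypothetical names and numbers; production experimentation platforms may use exact or sequential methods instead.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(baseline, mde_abs, power=0.80, alpha=0.05):
    """Users needed per arm to detect an absolute lift of `mde_abs` over
    `baseline` with a two-sided, two-proportion z-test.

    Standard normal-approximation formula; assumes two equally sized arms.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # significance-level term
    z_beta = z.inv_cdf(power)           # power term
    p1, p2 = baseline, baseline + mde_abs
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / mde_abs ** 2)

# Detecting a 1-point absolute lift from a 10% baseline at 80% power
n = required_sample_size(0.10, 0.01)
print(n)  # roughly 15,000 users per arm; halving the MDE roughly quadruples this
```

Dividing the result by the number of eligible users per day gives the minimum run time, which is the stopping criterion to fix before launch.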
Related Terms
Event Tracking
The practice of recording specific user interactions within a digital product, such as clicks, form submissions, page views, and feature usage, as structured data events that can be analyzed to understand user behavior.
Event Taxonomy
A structured naming convention and classification system for analytics events that ensures consistency, discoverability, and usability of tracking data across teams, platforms, and analysis tools.
Funnel Analysis
The process of tracking and measuring user progression through a defined sequence of steps toward a conversion goal, identifying where users drop off and quantifying the conversion rate between each stage.
Conversion Rate Analytics
The systematic measurement and analysis of the percentage of users who complete a desired action out of the total who had the opportunity, applied across multiple conversion points throughout the user journey.
Drop-Off Rate
The percentage of users who leave a process or sequence at a specific step without completing the next step, the complement of the step-level conversion rate (one minus conversion), used to identify friction points in user flows.
Cohort Analysis
A technique that groups users by a shared characteristic or experience within a defined time period and tracks their behavior over subsequent periods, revealing how user behavior evolves and differs across groups.
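The funnel and drop-off metrics defined above reduce to simple ratios between consecutive steps. A minimal sketch, with hypothetical step names and counts:

```python
# Hypothetical funnel: users remaining at each step
funnel = [("visited", 10000), ("signed_up", 2500), ("activated", 1000)]

for (step, n), (next_step, n_next) in zip(funnel, funnel[1:]):
    conversion = n_next / n    # step-level conversion rate
    drop_off = 1 - conversion  # drop-off rate is its complement
    print(f"{step} -> {next_step}: {conversion:.0%} convert, {drop_off:.0%} drop off")

overall = funnel[-1][1] / funnel[0][1]  # end-to-end conversion
print(f"overall: {overall:.0%}")
```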