Primacy Effect
A temporary depression in user performance or engagement when encountering a changed experience, caused by the disruption of established habits and mental models, which can make a genuinely beneficial treatment appear harmful in the short term.
The primacy effect, sometimes called change aversion or the learning effect, is the mirror image of the novelty effect. When users have established habits and mental models for how a product works, any change, even an objectively better one, disrupts their workflow and temporarily reduces their efficiency. Users may struggle to find relocated features, take longer to complete familiar tasks, or express dissatisfaction with the new experience. If an experiment is analyzed during this adaptation period, a truly superior treatment can appear to perform worse than the control, leading teams to incorrectly reject beneficial changes. For growth teams, the primacy effect creates a systematic bias toward the status quo that can stifle innovation and block improvements that would benefit users in the long run.
Detecting the primacy effect mirrors the approach for novelty effects but looks for the opposite pattern: a treatment effect that starts negative and improves over time. Plot the daily treatment effect and look for an upward trend. Segment analysis comparing new users (who have no established habits) against existing users is particularly diagnostic: if new users show a positive treatment effect while existing users show a negative effect that diminishes over time, the primacy effect is the likely explanation. Regression models with a time-since-exposure interaction term can formally test whether the treatment effect improves as users adapt. The key diagnostic distinction is that novelty effects show up mainly in engagement metrics (clicks, visits, time spent), while primacy effects tend to surface in efficiency metrics (task completion rate, time to complete, error rate), though there is overlap.
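The interaction-term test can be sketched as a simple regression on simulated data. This is a minimal illustration using only NumPy; the metric, effect sizes, and sample sizes are all made up for the example, and a production analysis would add standard errors and clustering by user.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a primacy effect on a performance score (higher is better):
# treated users start about 0.5 worse but recover ~0.02 per day.
n = 5000
treated = rng.integers(0, 2, n)              # 0 = control, 1 = treatment
day = rng.integers(0, 28, n)                 # days since first exposure
dip = -0.5 + 0.02 * day                      # adaptation curve for treated users
y = 10.0 + treated * dip + rng.normal(0, 0.5, n)

# OLS with a treatment x time-since-exposure interaction:
#   y ~ b0 + b1*treated + b2*day + b3*(treated*day)
# b1 captures the initial dip; a positive b3 means the effect improves
# as users adapt, the signature of a primacy effect.
X = np.column_stack([np.ones(n), treated, day, treated * day])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"initial treatment effect b1 = {b1:.3f}")
print(f"adaptation slope         b3 = {b3:.4f}")
```

A significantly negative `b1` together with a significantly positive `b3` is the regression analogue of the upward-trending daily-effect plot described above.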
Teams should account for primacy effects when testing changes to established workflows, navigation structures, or interface patterns that users have learned. Running experiments for longer durations (3-4 weeks minimum) allows the adaptation period to pass and reveals the steady-state treatment effect. If time constraints prevent long experiments, teams can weight the analysis toward later periods or toward new user segments. Another strategy is to provide a transition experience, such as tooltips highlighting what changed or a brief walkthrough, which can accelerate adaptation and reduce the magnitude of the primacy dip. However, this changes what you are testing: the treatment plus the transition aid rather than the treatment alone. Teams should document known primacy-sensitive experiments and their adaptation timelines to build institutional knowledge.
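The pitfall of averaging over the adaptation period, and the value of weighting toward later periods, can be shown with a toy series of daily treatment effects. The numbers below are illustrative: a true steady-state lift of +0.3 masked by an early dip that decays over roughly two weeks.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated 28-day experiment: the true steady-state lift is +0.3,
# but a primacy dip (starting near -0.8) decays with a ~5-day time constant.
days = np.arange(28)
true_effect = 0.3 - 1.1 * np.exp(-days / 5)
daily_effect = true_effect + rng.normal(0, 0.05, days.size)

naive = daily_effect.mean()          # full-window average, dragged down by the dip
late_only = daily_effect[-7:].mean() # final week approximates steady state

print(f"full-window estimate: {naive:+.2f}")
print(f"final-week estimate:  {late_only:+.2f}")
```

The full-window average understates the true lift because the early dip is mixed into the estimate; restricting to the final week (or weighting later days more heavily) recovers something close to the steady-state effect.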
Advanced considerations include using survival analysis methods to model the adaptation process, estimating how long it takes for different user segments to reach steady-state performance with the new experience. Some teams implement graduated rollouts where existing users are given the option to try the new experience before being switched, using the self-selected early adopters as a proxy for steady-state behavior. Causal inference methods like instrumental variables can sometimes separate the primacy effect from the true treatment effect by exploiting exogenous variation in the timing or intensity of exposure. The interaction between primacy and novelty effects is also important: some changes may simultaneously increase exploration (novelty) while decreasing efficiency (primacy), creating complex time-varying treatment effects that require careful decomposition to interpret correctly.
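One concrete way to estimate an adaptation timeline is to fit a parametric curve to the daily treatment effects. The sketch below assumes an exponential adaptation model, effect(t) = asymptote - amplitude * exp(-t / tau); the model form, parameter values, and 95%-adapted rule of thumb are illustrative assumptions, not a prescribed method.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Assumed adaptation model: the effect rises from (asymptote - amplitude)
# toward its steady-state asymptote with time constant tau (in days).
def adaptation(t, asymptote, amplitude, tau):
    return asymptote - amplitude * np.exp(-t / tau)

# Hypothetical daily treatment effects over a six-week experiment.
t = np.arange(42)
observed = adaptation(t, 0.25, 1.0, 6.0) + rng.normal(0, 0.04, t.size)

# Nonlinear least-squares fit; p0 is a rough initial guess.
params, _ = curve_fit(adaptation, t, observed, p0=(0.1, 0.5, 3.0))
asymptote, amplitude, tau = params

# ~95% of the dip has decayed after about three time constants.
t95 = 3.0 * tau
print(f"steady-state effect ~ {asymptote:.2f}, ~95% adapted by day {t95:.1f}")
```

The fitted `asymptote` estimates the steady-state treatment effect, and `tau` gives a segment-specific adaptation timeline that teams can record as institutional knowledge; fitting the curve separately per user segment shows which cohorts adapt fastest.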
Related Terms
Novelty Effect
A temporary change in user behavior caused by the newness of a feature or design change rather than its intrinsic value, where engagement metrics initially spike because users explore the new experience but then decay as the novelty wears off.
Long-Running Experiment
An experiment maintained for weeks, months, or even years beyond the standard analysis period to measure the long-term and cumulative effects of a treatment, capturing delayed impacts on retention, revenue, and user behavior that short-term experiments miss.
Holdout Testing
An experimental design where a small percentage of users are permanently excluded from receiving a new feature or set of features, serving as a long-term control group to measure the cumulative impact of product changes over time.
Multivariate Testing
An experimentation method that simultaneously tests multiple variables and their combinations to determine which combination of changes produces the best outcome, unlike A/B testing which typically varies a single element at a time.
Split Testing
The practice of randomly dividing users into two or more groups and exposing each group to a different version of a product experience to measure which version performs better on a target metric, commonly known as A/B testing.
Power Analysis
A statistical calculation performed before an experiment to determine the minimum sample size required to detect a meaningful effect with a specified probability, balancing the risk of false negatives against practical constraints like traffic and experiment duration.