Introduction
What if you have experiments that you want to run simultaneously but independently, so that a given user is exposed to only one of them? One approach is to create a single split with a treatment for each of the features you want to try out and a single control to compare against. That technique only works, though, if all of the features to be tested are available at the same time. You may want more individual control over the rollout of each feature, while still eliminating the possibility that the various combinations of treatments from multiple experiments interfere with one another.
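As a rough illustration of that single-split approach, the sketch below branches on one combined split that has a treatment per feature plus a control. The treatment names and the renderPdp function are hypothetical; the treatment string would come from a getTreatment call on that single split.

```typescript
// Hypothetical treatments of a single combined split: one per feature, plus a control.
type CombinedTreatment = 'feature-a' | 'feature-b' | 'feature-c' | 'control';

// Render the product detail page based on whichever treatment the combined split returned.
function renderPdp(treatment: CombinedTreatment): string {
  switch (treatment) {
    case 'feature-a':
      return 'pdp with feature A';
    case 'feature-b':
      return 'pdp with feature B';
    case 'feature-c':
      return 'pdp with feature C';
    default:
      return 'pdp control experience';
  }
}
```

The drawback, as noted above, is that all three features must be ready to ship before this one split can start serving traffic.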
You can ensure that a particular user will only be exposed to one of a number of separate experiments by creating a parent split and then using Split's dependency matcher in a targeting rule in each of the experiments you wish to isolate from one another.
Let's run through an example where you have three experiments you want to run together, with each user of your site exposed to only one of the three. Note that this presumes that each of these experiments has a default "off" treatment that can be assigned to users who are not part of the experiment. For the sake of our example, assume that these experiments involve your product detail page (pdp). We'll call the three experiments pdp-test-a, pdp-test-b, and pdp-test-c.
The parent split
The first step is to create a parent split named pdp-test-parent with four treatments: one for each of the experiments, plus a "not-in-experiments" treatment to be used if for some reason you need to pull the plug on all of the experiments at once. The parent split and the dependent splits (experiments) must all be of the same traffic type. We'll create a percentage-based targeting rule to divide users into three equal-sized groups, but you could divide them differently if you'd like.
The not-in-experiments treatment is not part of the targeted percentages, but will be used as the treatment for any users not part of the parent split's traffic allocation, or for all users if the split is killed.
Note that you should not have any code actually calling getTreatment for the parent split. It exists solely for use in targeting rules in the dependent splits.
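The parent split's percentage rule is what guarantees each user lands in exactly one experiment. The sketch below is only a conceptual illustration of that idea using a deterministic hash; it is not Split's actual bucketing algorithm, the treatment names are the hypothetical ones from this example, and it ignores the not-in-experiments treatment and traffic allocation.

```typescript
import { createHash } from 'crypto';

// The three experiment treatments of the hypothetical pdp-test-parent split.
const PARENT_TREATMENTS = ['pdp-test-a', 'pdp-test-b', 'pdp-test-c'] as const;

// Deterministically map a user to one (and only one) of the three experiment buckets.
// Split does this for you via the parent split's percentage targeting rule; this sketch
// only illustrates why a given user can never end up in two experiments at once.
function parentBucket(userId: string): (typeof PARENT_TREATMENTS)[number] {
  const digest = createHash('sha256').update(userId).digest();
  const bucket = digest.readUInt32BE(0) % PARENT_TREATMENTS.length;
  return PARENT_TREATMENTS[bucket];
}

console.log(parentBucket('user-123')); // always returns the same treatment for the same user
```

Because the mapping depends only on the user key, every evaluation of the parent split (inside the dependent splits' targeting rules) agrees on which experiment a user belongs to.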
Dependent splits
After the parent split is created, you will create a dependent split for each of the experiments you wish to isolate. Each of these splits includes a targeting rule that matches the users assigned to that experiment by the parent split. Assuming that pdp-test-a is a 50/50 test between on and off treatments, the targeting rule would be set up as follows.
This targeting rule uses the "is in Split" matcher to target users who are assigned to pdp-test-a in the pdp-test-parent split, distributing them 50/50 between on and off. The default rule assigns any users who do not match this rule to the off treatment.
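In application code, only the dependent splits are ever evaluated. Here is a minimal sketch using the Split JavaScript SDK's browser-style API in TypeScript, where the user key is bound in the factory config; the SDK key, user key, and the way you act on the treatment are assumptions, and your integration may differ.

```typescript
import { SplitFactory } from '@splitsoftware/splitio';

// Browser-style factory: the user key is bound in the config, so getTreatment
// only needs the split name. Replace 'YOUR_SDK_KEY' and 'user-123' with real values.
const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_SDK_KEY',
    key: 'user-123',
  },
});
const client = factory.client();

client.on(client.Event.SDK_READY, () => {
  // Evaluate the dependent split only; the parent split is resolved internally
  // by the "is in Split" targeting rule and is never called directly.
  const treatment = client.getTreatment('pdp-test-a');

  if (treatment === 'on') {
    // Show the pdp-test-a variant.
  } else {
    // 'off' covers both the control group of pdp-test-a and every user
    // the parent split assigned to a different experiment.
  }
});
```

The same pattern applies to pdp-test-b and pdp-test-c, each guarded by its own "is in Split" rule against the parent.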
Once your experiment is running, you will need to select the specific targeting rule on the Metrics impact tab in order to see results, because that is the rule under which users were included in the experiment.
Limitations and considerations
You should be aware of the following limitations and considerations when using this technique.
- The dependent experiments need to start and end at roughly the same time. If you terminate one of the experiments early, there is no way to reassign the users allocated to that experiment to another one.
- Modifications to the parent split while the experiments are running can invalidate results without resetting the versions of the dependent splits, so nothing will signal that the results have been compromised. It is recommended that you include a warning in the parent split's description against making changes without consulting the owners of the running experiments.
- Since this technique divides your total site traffic among several experiments, you will need to account for the reduced traffic per experiment when calculating how long each experiment must run to provide evidence for the desired minimum likely detectable effect (a rough worked example follows this list).
- You should not re-use the divisions created by the parent split for new experiments without reallocating the parent split; otherwise your users will not be freshly randomized into the new experiments.
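To make the traffic point concrete, here is a rough sketch of the arithmetic. The visitor counts, baseline conversion rate, and effect size are made up, and it uses the standard two-proportion sample-size approximation rather than Split's own power calculation.

```typescript
// Two-sided z-value for alpha = 0.05 and z-value for power = 0.80.
const Z_ALPHA = 1.96;
const Z_BETA = 0.84;

// Approximate sample size per arm needed to detect a move from rate p1 to rate p2.
function sampleSizePerArm(p1: number, p2: number): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((Z_ALPHA + Z_BETA) ** 2 * variance) / (p1 - p2) ** 2);
}

// Hypothetical numbers: 30,000 pdp visitors/day split evenly across 3 experiments,
// each experiment itself a 50/50 on/off test.
const dailyVisitors = 30_000;
const experiments = 3;
const visitorsPerArmPerDay = dailyVisitors / experiments / 2;

const perArm = sampleSizePerArm(0.05, 0.055); // 5% baseline, 10% relative lift
const daysNeeded = Math.ceil(perArm / visitorsPerArmPerDay);

console.log(`~${perArm} users per arm, ~${daysNeeded} days per experiment`);
```

With these made-up numbers each experiment sees only a third of the traffic it would get on its own, so the required runtime is roughly three times longer than a single full-traffic test.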
Within these constraints, this technique is a helpful way to direct separate populations of your users to different experiments.