What if you have experiments you want to run simultaneously but independently, so that a given user is exposed to at most one of them? One approach is to create a single feature flag with a treatment for each of the features you want to try out, plus a single control to compare against. That technique only works, however, if all of the features to be tested are available at the same time. You may want to control the rollout of each feature individually, while still eliminating the possibility that combinations of treatments from multiple experiments interfere with one another.
You can ensure that a particular user will only be exposed to one of a number of separate experiments by creating a parent feature flag and then using Split's dependency matcher in a targeting rule in each of the experiments you wish to isolate from one another.
Let's run through an example where you have three experiments you want to run together, with users of your site being exposed to only one of the three. Note that this presumes that each of these experiments has a default off treatment that can be assigned to users not part of the experiment. For the sake of our example, assume that these experiments involve your product detail page (pdp). We'll call the three experiments pdp-test-a, pdp-test-b, and pdp-test-c.
The parent feature flag
The first step is to create a parent feature flag named pdp-test-parent with four treatments: one for each of the experiments, plus a not-in-experiments treatment to be used if for some reason you want to pull the plug on all the experiments at once. The parent feature flag and the dependent feature flags (experiments) must all be of the same traffic type. We'll create a percentage-based targeting rule to divide users into three equal-sized groups, but you could divide them differently if you'd like.
The not-in-experiments treatment is not part of the targeted percentages, but will be used as the treatment for any users not part of the parent feature flag's traffic allocation, or for all users if the feature flag is killed.
Note that you should not have any code actually calling getTreatment for the parent feature flag. It exists solely for use in targeting rules in the dependent feature flags.
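To make the parent flag's behavior concrete, here is a minimal Python simulation of how a percentage-based rule with a traffic allocation behaves. This is purely illustrative: Split's engine uses its own seeded murmur hash internally, not the md5-based bucketing sketched here, and your application code never evaluates the parent flag directly.

```python
import hashlib


def bucket(key: str, salt: str) -> int:
    """Deterministically map a user key to a bucket in [0, 100).

    Illustrative stand-in only: Split's real engine uses a seeded
    murmur hash, not md5.
    """
    digest = hashlib.md5(f"{salt}:{key}".encode()).hexdigest()
    return int(digest, 16) % 100


def parent_treatment(user_key: str, traffic_allocation: int = 100) -> str:
    """Simulate pdp-test-parent's percentage-based targeting rule.

    Users outside the traffic allocation (or all users, if the flag
    were killed) receive the not-in-experiments treatment; everyone
    else is split roughly evenly across the three experiments.
    """
    if bucket(user_key, "pdp-test-parent:allocation") >= traffic_allocation:
        return "not-in-experiments"
    b = bucket(user_key, "pdp-test-parent")
    if b < 33:
        return "pdp-test-a"
    if b < 66:
        return "pdp-test-b"
    return "pdp-test-c"
```

Because the bucketing is keyed on the user, a given user always lands in the same experiment, which is what makes the isolation work.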
Dependent feature flags
After the parent feature flag is created, you will create a dependent feature flag for each of the experiments you wish to isolate. Each of these feature flags will include a targeting rule for users assigned to that experiment by the parent feature flag. Assuming that pdp-test-a is a 50/50 test between on and off treatments, the targeting rule would look like this:
This specific targeting rule uses the is in feature flag matcher to target users who are assigned to pdp-test-a in the pdp-test-parent feature flag, splitting them 50/50 between the on and off treatments. The default rule assigns any users who do not match the specific rule to the off treatment.
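Putting the parent assignment and the dependency matcher together, the combined behavior can be sketched in Python as follows. Again, the hash function is an illustrative stand-in for Split's internal seeded bucketing, and the flag names come from this example; in practice your code only calls getTreatment on the dependent flags and Split evaluates the dependency for you.

```python
import hashlib


def bucket(key: str, salt: str) -> int:
    """Map a user key to a bucket in [0, 100). Illustrative only:
    Split's engine uses its own seeded murmur hash, not md5."""
    digest = hashlib.md5(f"{salt}:{key}".encode()).hexdigest()
    return int(digest, 16) % 100


def parent_treatment(user_key: str) -> str:
    """Simulate pdp-test-parent: split users ~evenly across the
    three experiments."""
    b = bucket(user_key, "pdp-test-parent")
    if b < 33:
        return "pdp-test-a"
    if b < 66:
        return "pdp-test-b"
    return "pdp-test-c"


def pdp_test_a_treatment(user_key: str) -> str:
    """Simulate pdp-test-a: the 'is in feature flag' matcher gates the
    50/50 on/off split on the parent's assignment; everyone else
    falls through to the default rule and receives off."""
    if parent_treatment(user_key) != "pdp-test-a":
        return "off"  # default rule: not in this experiment
    return "on" if bucket(user_key, "pdp-test-a") < 50 else "off"
```

Note that only users the parent assigns to pdp-test-a can ever see the on treatment; users assigned to pdp-test-b or pdp-test-c always receive off from this flag, which is what keeps the experiments mutually exclusive.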
Once your experiment is running, you will need to select the specific targeting rule on the Metrics impact tab in order to see results, because that is the rule under which users were included in the experiment.
Limitations and considerations
You should be aware of the following limitations and considerations when using this technique.
- The dependent experiments should start and end at roughly the same times. If you decide to terminate one of the experiments early, there is no way to return the users allocated to that experiment to another.
- Modifications made to the parent feature flag while the experiments are running can invalidate results, because such changes do not reset the versions of the dependent feature flags. It is recommended that you include in the description of the parent feature flag a warning against making changes without consulting the owners of the running experiments.
- Since this technique divides your total site traffic among a number of experiments, you will need to account for the reduced traffic when calculating how long an experiment must run in order to provide evidence for the desired minimum likely detectable effect.
- You should not reuse the divisions created by the parent feature flag for new experiments without first reallocating the parent feature flag; otherwise you will disrupt the randomization of your users into the new experiment.
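To illustrate the traffic point above, here is a back-of-the-envelope duration estimate using the standard two-proportion sample-size approximation. The visitor counts, baseline rate, and effect size below are hypothetical, and Split's own sample-size calculations may differ from this simple formula.

```python
from math import ceil
from statistics import NormalDist


def required_days(daily_visitors: float, share_of_traffic: float,
                  baseline_rate: float, mde_abs: float,
                  alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough estimate of how many days a two-variant (50/50) experiment
    must run to detect an absolute lift of mde_abs over baseline_rate.

    Uses the standard normal-approximation sample size for comparing
    two proportions; real experimentation platforms may use more
    refined calculations.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p = baseline_rate
    # Required sample size per variant.
    n_per_variant = ((z_alpha + z_beta) ** 2 * 2 * p * (1 - p)) / mde_abs ** 2
    daily_in_experiment = daily_visitors * share_of_traffic
    return ceil(2 * n_per_variant / daily_in_experiment)
```

With hypothetical inputs of 30,000 daily visitors, a one-third share of traffic, a 10% baseline conversion rate, and a one-percentage-point absolute effect, this estimate comes out to roughly three days, versus about one day if the experiment received all of the site's traffic.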
Within these constraints, this technique is a helpful way to direct separate populations of your users to different experiments.