Whether you are releasing new functionality or running an experiment, Split is constantly analyzing the change in your customer metrics to determine whether the impact is statistically conclusive and not simply happening by chance.
By configuring your statistical settings, you can ensure the analyses are run in the way that best suits your use cases. For example, you can adjust parameters such as the significance threshold, which controls your chances of seeing false positive results, and set it to reflect the balance between confidence and time to significance that is right for you.
There are two types of statistical settings: monitor settings, which impact how your alert policies are analyzed; and experiment settings, which impact how your metric impact results are analyzed.
You can set organization-wide defaults by configuring your organization's statistical settings in the Admin Settings section. These settings will be used to analyze all alert policies and uncustomized splits, and they will be applied by default to all newly created splits.
You can also customize the experiment settings of your splits individually. The settings used for a particular environment's results within a particular split can be customized independently.
When you customize your experiment settings at the split level, the new settings immediately apply to all versions of the split, but only in the environment you customized. For example, you could set a particular split to use different experiment settings in the staging and production environments if desired. Customizing experiment settings at the split level has no impact on your other splits or any of your alert policies.
Monitor settings can only be configured at the organizational level. This is to ensure each alert policy is always analyzed against the same statistical settings, maintaining consistency across any alerts that may be raised.
Monitor Settings
Monitor Window
Split allows you to configure how long you would like your metrics to be monitored, and alerts you if a severe degradation has occurred. By default, the monitoring window is set to 24 hours from a split version change. You can select from a range of monitoring windows, from 30 minutes to 28 days.
With configurable monitoring windows you can customize your monitoring period based on your team's release strategy. Adjust your monitoring window to 24 hours if you are turning on a feature at night with low traffic volumes and want to monitor through the morning when traffic begins to increase, or to 30 minutes if you are expecting high traffic volumes within the first 30 minutes of a new split version. Find out more about choosing your degradation threshold based on your expected traffic here.
Monitor Significance Threshold
The monitor significance threshold limits your chances of receiving a false alert. A lower significance threshold means we will wait until there is more evidence of a degradation before firing an alert. Hence a lower significance threshold reduces the chance of false alerts, but this comes at the cost of increasing the time it takes for an alert to be fired when a degradation does exist.
A commonly used value for the monitor significance threshold is 0.05 (5%), which means that, for each alert policy and for each version update, there is at most a 5% chance of seeing an alert when the true difference between the treatments is less than the degradation threshold set up in your metric’s alert policy.
You can configure the monitor significance threshold independently from the default significance threshold used for calculating your metric results. Changing this setting will only impact your monitoring alerts, not your metric results.
Statistical Approach used for Monitoring Window
For alert policies, rather than testing for statistically significant evidence of any impact as we do for our standard metric analyses, we test for significant evidence of an impact larger than your chosen degradation threshold, in the opposite direction to the metric’s desired direction.
In order to control the false positive rate during the monitoring window we adjust the significance threshold that the p-value must meet before an alert is fired. We divide the threshold by the number of times we will check for degradations during the selected monitoring window. For example, if your monitoring window is 30 minutes, we estimate that we will run 5 calculations during that time. In this case, if your monitor significance threshold is set to 0.05 in your statistical settings, the p-value would need to be below 0.01 (0.05 / 5) for an alert to fire in this time window.
This adjustment allows us to control the false positive rate and ensure that the likelihood of getting a false alert, across the whole of the monitoring window, is no higher than your chosen monitor significance threshold. The level of adjustment is dependent on the duration of the monitoring window and how many calculations will run during that time.
This adjustment means that a longer monitoring window will have slightly less ability to detect small degradations at the beginning of your release or rollout, but in most cases this will be far outweighed by the gain in sensitivity due to the larger sample size you accrue over a longer window.
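As a minimal sketch of this approach, the function below assumes a Welch's t-test, a "higher is better" metric, an absolute degradation threshold, and a Bonferroni-style division of the monitor significance threshold across the planned checks; it is illustrative only, not Split's implementation.

import numpy as np
from scipy import stats

def degradation_alert(control, treatment, degradation_threshold,
                      monitor_alpha=0.05, checks_in_window=5):
    # Bonferroni-style split of the monitor significance threshold across the
    # number of calculations planned within the monitoring window,
    # e.g. 0.05 / 5 = 0.01 for a 30-minute window.
    adjusted_alpha = monitor_alpha / checks_in_window

    control = np.asarray(control, dtype=float)
    treatment = np.asarray(treatment, dtype=float)

    # Assume "higher is better" and an absolute degradation threshold.
    # Shifting the treatment up by the threshold turns the question into a
    # one-sided test: is there significant evidence that the degradation is
    # larger than the threshold?
    _, p_value = stats.ttest_ind(treatment + degradation_threshold, control,
                                 equal_var=False, alternative='less')
    return p_value < adjusted_alpha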
Experiment Settings
Default Significance threshold
The significance threshold is a representation of your organization's risk tolerance. Formally, the significance threshold is the probability of a given metric calculation returning a statistically significant result when the null hypothesis is true (i.e. when there is no real difference between the treatments for that metric).
A higher significance threshold will allow you to reach statistical significance faster when a true difference does exist, but it will also increase your chances of seeing a false positive when no true difference exists. Conversely, a lower significance threshold will reduce your chances of seeing false positive results but you will need a larger difference between the two treatments, or a larger sample size, in order to reach statistical significance.
A commonly used value for the significance threshold is 0.05 (5%). With this threshold value, a given calculation of a metric where there was no true impact has a 5% chance of showing as statistically significant (i.e. a false positive). If multiple comparison corrections have been applied, this instead means there is at most a 5% chance of a statistically significant metric being a false positive.
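As a rough illustration (a simulation rather than Split's calculation engine), repeatedly comparing two treatments with no true difference produces statistically significant results at roughly the rate of the chosen threshold:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
false_positives = 0
n_runs = 10_000

for _ in range(n_runs):
    # Both treatments are drawn from the same distribution: no true impact.
    control = rng.normal(loc=10.0, scale=2.0, size=1_000)
    treatment = rng.normal(loc=10.0, scale=2.0, size=1_000)
    _, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    false_positives += p_value < alpha

print(f"False positive rate: {false_positives / n_runs:.3f}")  # close to 0.05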
Minimum Sample Size
The minimum number of samples required in each treatment before we will calculate statistical results for your metrics. This number must be at least 10; for most situations we recommend using a minimum sample size of 355.
For the t-test used in Split's statistics to be reliable, the data must follow an approximately normal distribution. The central limit theorem (CLT) shows that the mean of a variable has an approximately normal distribution if the sample size is large enough.
You can reduce the default minimum sample size of 355 if you need results for smaller sample sizes. For metrics with skewed distributions, however, your results may be less reliable at small sample sizes.
Note that this parameter does not affect your monitoring alerts. For monitoring we always require a minimum sample size of 355 in each treatment before we will fire an alert.
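As an illustration of why small samples of skewed metrics are less reliable, the sketch below (using an arbitrary skewed distribution, not your metric's data) shows the distribution of sample means moving closer to normal as the sample size grows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (10, 50, 355):
    # 2,000 sample means, each computed from n heavily skewed observations
    sample_means = rng.exponential(scale=1.0, size=(2_000, n)).mean(axis=1)
    skew = stats.skew(sample_means)
    print(f"n = {n:4d}  skewness of sample means = {skew:+.2f}")
# Skewness shrinks toward 0 (approximate normality) as n grows, which is why
# t-test results are less reliable for small samples of skewed metrics.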
Power threshold
Power measures an experiment's ability to detect an effect when one truly exists. Formally, the power of an experiment is the probability of rejecting a false null hypothesis.
A commonly used value for statistical power is 80%, which means that the metric has an 80% chance of reaching significance if the true impact is equal to the minimum likely detectable effect. All else being equal, a higher power will increase the recommended sample size needed for your split. In statistical terms, the power threshold is equivalent to 1 - β.
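For illustration only, here is how a standard two-sample t-test power calculation (via statsmodels, with a hypothetical standardized minimum likely detectable effect of 0.1) turns the power threshold into a recommended sample size. Split's own recommendation is based on your metric's data, so these numbers are not what you will see in the product.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_treatment = analysis.solve_power(
    effect_size=0.1,   # hypothetical standardized minimum likely detectable effect
    alpha=0.05,        # significance threshold
    power=0.80,        # power threshold (1 - beta)
    alternative='two-sided',
)
print(f"Recommended sample size per treatment: {n_per_treatment:.0f}")

# Raising power to 0.90, all else equal, increases the recommended size:
print(f"At 90% power: {analysis.solve_power(0.1, alpha=0.05, power=0.90):.0f}")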
Experimental review period
The experimental review period represents a period of time over which a typical customer visits the product and completes the activities relevant to your metrics. For instance, you may have different customer behavior patterns during the week than on weekends, in which case you would set a seven-day period.
A commonly used value for the experimental review period is at least 14 days, to account for the weekend and weekly behavior of customers. Adjust the review period to the most appropriate option for your business; you can select 1, 7, 14, or 28 days.
Multiple Comparison Corrections
Analyzing multiple metrics per experiment can substantially increase your chances of seeing a false positive result if not accounted for. Our multiple comparison corrections feature applies a correction to your results so that the overall chance of a significant metric being a false positive will never be larger than your significance threshold. For example, with the default significance threshold of 5%, you can be confident that at least 95% of all of your statistically significant metrics reflect real, meaningful impacts. This guarantee applies regardless of how many metrics you have.
With this setting applied, the significance of your metrics, and their p-values and error margins, will automatically be adjusted to include this correction. This correction will be immediately applied to all tests, including previously completed ones. Learn more about our multiple comparison corrections here.
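As an illustration of this kind of guarantee, the sketch below applies the standard Benjamini-Hochberg false discovery rate procedure from statsmodels to a set of hypothetical p-values; it demonstrates the concept and is not necessarily the exact correction Split applies.

import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for six metrics attached to one experiment
raw_p_values = np.array([0.003, 0.012, 0.021, 0.040, 0.260, 0.730])

significant, adjusted_p, _, _ = multipletests(
    raw_p_values, alpha=0.05, method='fdr_bh'
)

for raw, adj, sig in zip(raw_p_values, adjusted_p, significant):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  significant: {sig}")
# Without the correction, four metrics would clear the 0.05 threshold; with it,
# only those whose adjusted p-values stay below 0.05 are reported as significant.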
Recommendations and trade-offs
Be aware of the trade-offs associated with changing the statistical settings. In general, a lower significance threshold increases the number of samples required to achieve significance. Increasing this setting decreases the number of samples and the amount of time needed to declare significance, but may also increase the chance that some of the results are false positives.
As best practice, we recommend setting your significance threshold to between 0.01 and 0.1. In addition, we recommend an experimental review period of at least 14 days to account for weekly use patterns.
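To make this trade-off concrete, the sketch below reuses the standard power calculation from the power threshold section above (hypothetical standardized effect size of 0.1, 80% power) to show how the required sample size grows as the significance threshold is lowered.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.01, 0.05, 0.10):
    n = analysis.solve_power(effect_size=0.1, alpha=alpha, power=0.80)
    print(f"alpha = {alpha:.2f}  ->  ~{n:,.0f} samples per treatment")
# Lowering the threshold from 0.10 to 0.01 roughly doubles the required sample
# size, in exchange for a lower chance of false positive results.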
Change settings
Organization wide settings
Navigate to Admin Settings > Statistical Settings. After you adjust your settings, click Save.
Note
Changing your statistical settings instantly affects your entire organization. All alert policies will be analyzed against these new settings in the future. All splits which are not customized (i.e. those for which the "Always use organization wide settings" checkbox is checked in their Experiment Settings) will also be analyzed using these new settings. For example, if your experiment is showing metrics as having a statistically significant positive impact at a 0.05 significance threshold, and you change your significance threshold from 0.05 to 0.01, the next time you load your Metrics impact page you may see that those metrics are no longer marked as having a significant impact.
Split level settings
While viewing the split for which you wish to customize the settings, navigate to Metrics impact > Experiment settings.
Ensure that you have the right environment selected, or change the selection in the environment drop-down in the top section to customize a different environment.
To customize the settings, the "Always use organization wide settings" checkbox must be unchecked; otherwise, the organization wide settings will be used to analyze results.
After you adjust your settings, click Save.
Changing the experiment settings for a split instantly affects all versions of that split in the environment for which you customized the settings. Versions in other environments, other splits in your organization, and all alert policies will not be affected by your customization.
Note
Customizations apply to specific environments; if your split has multiple environments, each must be customized separately. When the "Always use organization wide settings" checkbox is checked, the settings for the split will update whenever your organization wide settings change. When it is unchecked, the settings will no longer reflect any changes to your organization wide settings.