Your metrics cards show different states depending on the label selected, the traffic distribution, the data available, and whether a baseline is selected.
Here is a quick summary of each card's state.
Statistically positive
A metric card displays the message Statistically positive impact if the metric moved significantly in the desired direction. The card shows this green state if:
- The change matches the desired direction (for example, you wanted this metric to increase, and it did when comparing the treatment to the baseline).
- The p-value is less than the defined significance threshold of 0.05. In this case, there is evidence that the selected treatment had a different impact on the metric than the selected baseline treatment.
- The metric had sufficient power. Data was collected from enough users to satisfy the desired minimum detectable effect and default power threshold.
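The p-value check above can be sketched with a two-sided two-proportion z-test. This is an illustrative sketch only (the function name and counts are hypothetical, and Split's statistics engine may use different tests and corrections):

```python
import math

def two_proportion_p_value(conv_base, n_base, conv_treat, n_treat):
    """Two-sided z-test p-value for the difference between two
    conversion rates (illustrative; not Split's exact method)."""
    p1, p2 = conv_base / n_base, conv_treat / n_treat
    pooled = (conv_base + conv_treat) / (n_base + n_treat)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_base + 1 / n_treat))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal survival function.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: baseline converts 100/1000, treatment 130/1000.
p = two_proportion_p_value(100, 1000, 130, 1000)
significant = p < 0.05  # the 0.05 significance threshold described above
```

Here the increase from 10% to 13% yields a p-value below 0.05, so (assuming the metric was also sufficiently powered and the desired direction was an increase) the card would show the green state.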
Statistically negative
A metric card displays the message Statistically negative impact if the metric moved significantly in the undesired direction. The card shows this red state if:
- The change does NOT match the desired direction (for example, you wanted this metric to increase, and it decreased when comparing the treatment to the baseline).
- The p-value is less than the defined significance threshold of 0.05. In this case, there is evidence that the selected treatment had a different impact on the metric than the selected baseline treatment.
- The metric had sufficient power. Data was collected from enough users to satisfy the desired minimum detectable effect and default power threshold.
Statistically inconclusive
A metric card displays the message Statistically inconclusive if the metric impact is unclear. The card shows this yellow state if:
- There was little evidence of an impact. The p-value is greater than the defined significance threshold of 0.05, so there is little evidence that the selected treatment had a different impact on the metric than the selected baseline treatment. (The lower the p-value, the stronger the evidence of an impact.)
- The observed effect is less than the defined effect threshold. In this case, the observed effect does not meet the desired minimum detectable effect.
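Taken together, the states above amount to a simple decision rule. A minimal sketch, where the function name, parameter names, and state strings are hypothetical and the "needs more data" branch anticipates the state described later in this article:

```python
def card_state(p_value, direction_matches, sufficiently_powered,
               alpha=0.05):
    """Map the checks described above to a card state (illustrative)."""
    if not sufficiently_powered:
        # Not enough samples yet to reach a conclusion either way.
        return "needs more data"
    if p_value < alpha:
        # Significant and powered: the color depends on the direction.
        return "positive" if direction_matches else "negative"
    # Significance threshold not met: the impact is unclear.
    return "inconclusive"

assert card_state(0.01, True, True) == "positive"       # green
assert card_state(0.01, False, True) == "negative"      # red
assert card_state(0.20, True, True) == "inconclusive"   # yellow
```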
Statistically not possible
A metric card displays the message Statistical comparison not possible when one of the conditions below applies. If the troubleshooting tips do not resolve your issue and you continue to have problems with your metrics, contact us at support@split.io.

| Statistical comparison not possible because | Troubleshooting tips |
| --- | --- |
| Not possible because metric grouped across users. | Metrics that are calculated across your customers, rather than normalized per customer (or per experimental unit), do not show a statistical comparison. These metrics are useful for understanding overall trends, but if you want to see the statistical impact per customer, update the metric definition. |
| Not possible because viewing metric for a single treatment. Select treatment to compare. | Metrics that are displayed for a single treatment do not show a statistical comparison. Select a baseline treatment for comparison to see the statistical comparison where available. |
Needs more data
A metric card displays the message Needs more data if the impact of the treatment is not yet statistically conclusive and Split needs more samples. Each metric card indicates how many more days you need to run your experiment to achieve statistically significant results in either direction with the currently observed effect size. The effect size is calculated as the difference between the means of the two treatments as of the last update time. In addition, on hover, the card shows the effect size that would be required, with the sample size collected at that point in time, to show a significant impact. This card is grayed out until it meets the minimum sample size or effect size requirements.
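The trade-off between sample size and minimum detectable effect can be sketched with the textbook two-sample power formula. This is a sketch under standard assumptions (equal variance in both treatments, normal approximation, two-sided test); the function and its defaults are illustrative and Split's internal calculation may differ:

```python
import math
from statistics import NormalDist

def samples_per_treatment(mde, std_dev, alpha=0.05, power=0.80):
    """Approximate users needed in each treatment to detect an
    absolute effect of `mde` (illustrative power calculation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * (std_dev ** 2) * (z_alpha + z_beta) ** 2 / mde ** 2
    return math.ceil(n)

# Detecting a 0.2 standard-deviation shift at alpha=0.05 and 80% power
# needs roughly 393 users per treatment.
n = samples_per_treatment(mde=0.2, std_dev=1.0)
```

This is why a card can stay in this state for a long time when the observed effect is small: halving the detectable effect roughly quadruples the samples required.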
Not available
A metric card displays the message Metric not available for one of several reasons, outlined below. If the troubleshooting tips do not resolve your issue and you continue to have problems with your metrics, contact us at support@split.io.

| Not available because | Troubleshooting tips |
| --- | --- |
| The calculation has not yet run for this split. | The calculation runs within the first 15 minutes of a change in the split's version. If it has been more than 15 minutes and you are still seeing this issue, contact support@split.io. |
| This metric was created after the metrics impact was last updated. | The duration between updates scales with the length of the version. At the beginning of a version, calculations run every 15 minutes for definitions updated in the past hour. The time between calculations then increases through the duration of a version: once a split has been running for more than 12 days, it can be up to 48 hours between calculations, up to 72 hours after 48 days, and so on. The older the experiment, the less likely it is that data collected in the last few hours can move the metric. If you have created a metric and need to see the updated metrics impact, contact support@split.io and we can assist you. |
| This metric definition was modified after the metrics impact was last updated. | The duration between updates scales with the length of the version. At the beginning of a version, calculations run every 15 minutes for definitions updated in the past hour. The time between calculations then increases through the duration of a version: once a split has been running for more than 12 days, it can be up to 48 hours between calculations, up to 72 hours after 48 days, and so on. The older the experiment, the less likely it is that data collected in the last few hours can move the metric. If you have modified a metric and need to see the updated metrics impact, contact support@split.io and we can assist you. |
| No users have an impression for at least one of the treatments. | This message appears if you are comparing two treatments and one of them has no samples. Ensure that the version and targeting rule you selected are serving traffic to both treatments. |
| No users have met the metric's filter by condition for at least one of the treatments. | This message appears if the metric has a filter condition in its definition (for example, measure this metric only for users who clicked this button). Ensure that the customers in the treatment who fire the track event used in the metric calculation have also fired the filter event. |
| This metric is an average, but no events associated with this metric have been received for any user for at least one of the treatments. | This message appears when you are looking at an average value and Split has not received any events to average. Ensure that you are sending the event whose average value you want. |
| This metric is a ratio, but no events associated with the denominator have been received for any user for at least one of the treatments. | This message appears when you are calculating the ratio of two events and Split has not received any events for the denominator. Ensure that you are sending your events properly. |
| No users have an impression for the treatment. | This message appears if you are looking at a single treatment with no baseline and there are no samples. Ensure that the version and targeting rule you selected are serving traffic to the treatment you selected. |
| The calculation did not return. Our support engineering team has been notified. | The support team has been notified and our alerts have been triggered! Stay tuned for support. |