When an experiment starts, metric cards are calculated every 15 minutes. You can see the last time the metrics for an experiment were calculated under Summary of key and organizational metrics on the Metrics Impact tab.
The interval between calculations grows by roughly an hour per day (in practice it is slightly less). We do this because, later in an experiment, a short window of new traffic is unlikely to change the results materially; by day 5, for example, the previous 15 minutes of traffic will rarely show a material difference. Providing an ETA for the next calculation of any running experiment is on our roadmap.
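As a rough illustration only (the real scheduler is internal to the platform, and the stated growth rate is approximate), a schedule where the recalculation interval starts at 15 minutes and grows by about an hour for each day the experiment has run could be sketched as:

```python
# Illustrative sketch, not the actual scheduling logic.
# Assumes: 15-minute starting interval, ~60 extra minutes per elapsed day
# (the document notes the real growth is a little less than an hour a day).
def recalc_interval_minutes(days_elapsed: int) -> int:
    """Hypothetical minutes between metric recalculations
    after `days_elapsed` full days of the experiment running."""
    return 15 + 60 * days_elapsed

# Day 0: every 15 minutes; by day 5: roughly every 5 hours.
```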
In addition, as a best practice you should establish experimental review periods. Drawing conclusions about metric impact only during set review periods minimizes the chance of errors and lets you account for seasonality in your data.
For example, you may see a spike in data on certain days of the week; making product decisions based only on the data observed on those days would go against best practice. Or the key event, such as arriving for a restaurant reservation, may not happen until a few weeks after the impression.
We will always show your current metrics impact, but discourage you from making conclusive product decisions outside of review periods.
Experimental review periods are a common practice for sophisticated growth and experimentation teams. On some teams, no decisions can be made until the experiment has run for a set number of days.