A big mistake in testing is to overlook variables, inside and outside of the test, that impact your results. In an ideal test, the only variables would be the ones you are testing on your page. That usually isn’t possible, but as long as you account for the other variables in your analysis, you will still get correct, actionable information.
If you test a seasonal page, the optimal page you find for that season probably won’t perform once the season ends. By not paying attention to those kinds of variables, you are setting yourself up to think you’ve found the optimal page. The same type of mistake is made by grouping traffic from e-mail, print, SEM campaigns, and events, unless you know they all react the same way to your changes.
Even within segments, there may be more segments to uncover. Your only limitation should be traffic; don’t segment so finely that you can’t run a decent-sized test in a decent amount of time.
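A rough way to feel out that traffic limitation is to estimate how many visitors a segment needs before you split it further. Here is a minimal sketch using the standard two-proportion sample-size formula; the baseline rate, lift, significance, and power figures are assumptions for illustration, not recommendations:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a move from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 5% -> 6% lift takes thousands of visitors per variant,
# so a segment with a few hundred visits a month can't support it.
print(sample_size_per_variant(0.05, 0.06))
```

If a candidate segment can’t reach numbers in that ballpark in a reasonable window, it’s a sign to test at a coarser level.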
One of my clients doesn’t get a lot of traffic, but the traffic he does get comes from two very distinct sources. One converts in the single digits and the other converts in the teens. Although combining them would give me more data, it would be muddled data, since the two convert so differently.
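The muddled-data problem can be made concrete: when two sources convert at very different base rates, the blended rate moves whenever the traffic mix shifts, even if neither source actually changed. A toy sketch (the visitor counts and rates here are invented for illustration, not my client’s numbers):

```python
def blended_rate(segments):
    """Overall conversion rate from a list of (visitors, conversions) pairs."""
    visitors = sum(v for v, _ in segments)
    conversions = sum(c for _, c in segments)
    return conversions / visitors

# Two sources with stable rates: one converts at 5%, the other at 15%
week1 = [(1000, 50), (200, 30)]   # mostly low-converting traffic
week2 = [(200, 10), (1000, 150)]  # mix flips, per-source rates unchanged

# The blended rate roughly doubles even though nothing on the page changed
print(blended_rate(week1), blended_rate(week2))
```

If you ran a test across a mix shift like this, the combined numbers would credit (or blame) your page change for movement that is really just traffic composition.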
A few things to look out for:
- The ad or offers visitors see beforehand
- Interactions between your factors (if you aren’t testing interactions)
- Technical problems
- Problems that occur before or after the tested page
A note about the last bullet: the problems can range from a technical glitch to a problem with the overall funnel. If people get different experiences in the funnel that drastically affect whether they convert, it adds noise to your test. Examples include different checkout processes for registered and non-registered users, or users who turn out to be ineligible for the service.
The purpose of testing is to find out whether a certain element performs well under the conditions you provide. If you aren’t paying attention to all of those conditions, the results you derive will be wrong without your knowing it.