Online experimentation, or A/B testing, is the gold standard for measuring the effectiveness of changes to a website. But while A/B testing can appear simple, there are a number of issues that can complicate an analysis. We’ll cover 10 best practices that will help you avoid common pitfalls, whether your company is just getting started with testing or you’ve had a system established for years.
A classic A/B testing example you might have heard of is changing the color of a button on a website. You randomly show 50% of people the old button and 50% the new one, then measure the click or purchase rate to see whether it improved. Sounds pretty simple, right? Why exactly do you need a data scientist to analyze experiments?
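To make the example concrete, here is a minimal sketch of what the analysis of such a test could look like, using a two-proportion z-test from statsmodels; the visitor and click counts are invented for illustration, not real Etsy or DataCamp data.

```python
# A minimal sketch: compare click rates between the old and new button
# with a two-proportion z-test. All counts below are made up.
from statsmodels.stats.proportion import proportions_ztest

clicks = [530, 584]        # clicks on the old button, clicks on the new button
visitors = [10000, 10000]  # visitors randomly shown each version

z_stat, p_value = proportions_ztest(count=clicks, nobs=visitors)
print(f"old click rate: {clicks[0] / visitors[0]:.3f}")
print(f"new click rate: {clicks[1] / visitors[1]:.3f}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```

Even in this simplest case, the hard part isn’t the function call but everything around it: choosing the metric, sizing the test, and deciding in advance when to stop.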
Unfortunately, “generating numbers is easy; generating numbers you should trust is hard!” If you’ve been thinking about starting A/B testing at your company, have been doing it for a while but aren’t sure you’re following best practices, or are simply curious, this talk is for you. We don’t assume any level of A/B testing knowledge, statistics background, or programming experience.
We’ll begin with an introduction to A/B testing using actual examples from Etsy and DataCamp. Then we’ll dive into the main part of the talk, covering 10 guidelines for proper A/B testing. Finally, we’ll end with a list of great resources so you can continue learning.
10 Guidelines
1. Have one key metric for your experiment.
2. Use that key metric to do a power calculation (see the power-calculation sketch after this list).
3. Run your experiment for the length you’ve planned on.
4. Don’t run tons of variants.
5. Don’t try to look for differences for every possible segment.
6. Check that there’s no bucketing skew (see the skew-check sketch after this list).
7. Don’t overcomplicate your methods.
8. Be careful of launching things because they “don’t hurt.”
9. Have a data scientist or analyst involved in the whole process.
10. Focus on smaller, incremental tests that change one thing at a time.
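As a companion to guideline 2, here is a hedged sketch of a pre-experiment power calculation, assuming a baseline conversion rate of 5% and a minimum detectable lift to 5.5%; both numbers are placeholders you would replace with your own baseline and the smallest effect you care about.

```python
# A sketch of a power calculation for a conversion-rate experiment;
# the baseline rate and minimum detectable effect are illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.05  # current conversion rate (assumed)
target_rate = 0.055   # smallest lift worth detecting (assumed)

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,  # significance level
    power=0.8,   # chance of detecting the effect if it really exists
    ratio=1.0,   # equal-sized control and treatment groups
)
print(f"visitors needed per group: {n_per_group:,.0f}")
```

Dividing the required sample size by your expected daily traffic gives the planned run length, which is what guideline 3 asks you to commit to before starting.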
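For guideline 6, one common way to check for bucketing skew (sometimes called sample ratio mismatch) is a chi-square goodness-of-fit test comparing the observed group sizes to the split you intended; the counts here are again made up.

```python
# A sketch of a bucketing-skew check: are the observed group sizes
# consistent with the intended 50/50 split? Counts are illustrative.
from scipy.stats import chisquare

observed = [50210, 48924]              # visitors actually assigned to A and B
total = sum(observed)
expected = [total * 0.5, total * 0.5]  # what a true 50/50 split would give

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
# A very small p-value suggests the assignment itself is broken, so the
# experiment's results shouldn't be trusted until the skew is explained.
```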