Before implementing a change in strategy, marketers may want to experiment on a smaller scale first. So how do you ensure your experiments are generating clear and insightful results? Doris Claesen, Google APAC’s Experiment with Google Ads lead, breaks down a few best practices to keep in mind when designing and running experiments as well as common pitfalls to avoid along the way.
Experimentation should be a critical part of any successful marketing strategy. Relying on proven results — not opinion — is how leading marketers stay agile in dynamic markets, craft more effective campaigns at scale, and most importantly, identify the true impact of their efforts on business results.
Given the endless number of variables marketers can experiment with — whether it’s bidding, audiences, channels, or creative — and the fact that success means different things to different brands, it can be challenging to identify exactly which change in strategy affected the campaign’s results and to what extent. Measuring that impact and generating insightful results from experiments takes careful preparation before launch.
Regardless of what tools and tactics you want to experiment with, there are a few key principles for running effective marketing experiments. While many of these may seem obvious, they’re sometimes overlooked and often the source of common mistakes. Here’s how to avoid them:
1) Set a clear hypothesis — don’t test for the sake of testing
Put simply, your hypothesis states the reason you’re running the experiment in the first place. To keep your experiment as focused as possible, frame your hypothesis as a clear, unambiguous question that’s tied to your specific business goal. For example, “Does enhancing our mobile site speed drive incremental online conversions?”
That’s the exact question Paisabazaar asked before testing Accelerated Mobile Pages (AMP) to create a faster, more convenient user experience. After the brand used Drafts and Experiments to run two parallel search campaigns — leading one group of users to AMP and the other to standard landing pages — it unlocked a 10% lift in incremental mobile conversions thanks to 60% faster load times on AMP.
Part of setting your hypothesis is also deciding which actions you’ll take based on the outcome. You might roll out a similar strategy on a larger scale if the hypothesis is validated or stick with the same approach if it’s not (or ideally, try another experiment to find what does drive the results you’re looking for). In Paisabazaar’s case, mobile speed had a direct impact on business results, and so the brand started launching AMP across more of its site pages.
2) Identify clear and measurable success metrics — avoid a wait-and-see approach
Pinpoint the right metrics that make sense for your business and set them in stone before the experiment launches. Changing things on the fly will only take away from the validity of any insights you uncover.
It’s important to avoid muddling your results by focusing on too many metrics. Ask yourself which ones are most relevant and measurable and what level of impact is needed to consider the experiment a success. Keep in mind that the metrics you choose should always serve to validate (or invalidate) your hypothesis.
Online furniture retailer HipVan’s goal was simple: drive more foot traffic to its first-ever showroom. As a digital-first brand, HipVan decided to run online video ads alongside its existing search campaigns and see how an additional online channel would impact offline sales. The brand not only saw 5X return on ad spend from a surge in in-store visitors, but organic searches for HipVan also jumped by 528%.
3) Design and execute with care — don’t expect 100% of the results from 50% of the effort
How you set up your experiments is just as important as how you carry them out. You’ll only get out as much as you put in. Here’s how to avoid some common pitfalls when setting up a new experiment:
Identify comparable control versus test groups
To measure a change in behavior or results, an experiment compares the impact of exposing one group of people to any given variable while the other group remains unaffected. It’s crucial that both groups are clearly defined before the experiment is launched and that the sample size for each is equal, comprising randomly selected consumers.
Keep in mind that both groups should consist of people who would act similarly in a normal campaign environment to avoid bias. And together, the groups should represent a subset of the audience you intend to reach based on the results of the experiment. A good rule of thumb is that each group should be interchangeable — the control group members could’ve been eligible for the test but simply weren’t selected for this particular experiment.
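To make the idea of interchangeable, randomly assigned groups concrete, here’s a minimal sketch in Python (the user IDs and experiment salt are hypothetical, not taken from any specific tool) of one common way to split traffic: hash each user ID into a bucket so that assignment is effectively random with respect to behavior, stays stable for the same user, and produces roughly equal groups.

```python
import hashlib

def assign_group(user_id: str, experiment_salt: str = "exp-mobile-speed") -> str:
    """Deterministically assign a user to 'control' or 'test' with a 50/50 split."""
    # Hash the salted user ID so assignment is unrelated to user behavior
    # but stays the same every time this user shows up.
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to a bucket from 0 to 99
    return "test" if bucket < 50 else "control"

# Example with made-up user IDs
users = ["u1001", "u1002", "u1003", "u1004"]
print({u: assign_group(u) for u in users})
```

Hashing on a per-user basis (rather than flipping a coin on every visit) keeps each person in the same group for the life of the experiment, which protects the comparison from users drifting between experiences.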
Run real-time A/B tests
In practice, marketers often split control and test groups by audience or geography to enable experiments to run head-to-head in real time. These types of real-time experiments are preferred over pre-post tests, which carry a higher risk of being affected by variables that lead to incorrect or unfair comparisons.
While there’s no set benchmark, the difference between your control and test groups needs to reach statistical significance before you can trust the result, which usually means collecting a large enough sample from each group. Test a small percentage of traffic at a time, and keep in mind that your experiment will change the user experience for a subset of your consumers.
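For a concrete sense of what “significant” means here, the sketch below runs a standard two-proportion z-test in Python. The conversion counts are invented for illustration, and the 5% threshold is a common statistical convention rather than a rule of any particular ad platform.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Invented example: 10,000 users per group, 400 vs. 460 conversions
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 is conventionally called significant
```

If the p-value stays above your threshold, the honest conclusion is that the test hasn’t yet shown a real difference, which usually means running longer or with more traffic rather than declaring a winner.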
Test one variable at a time
Avoid the urge to change everything you’re keen to test at once. Instead, change one variable at a time in your test campaign so you can pinpoint the exact change that influenced user behavior. While they’re more complex and take longer to conclude, multiple variant tests (i.e., A/B/C/D tests) are also possible, but in that case it’s even more important that each variant differs from the control by only one variable.
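As a rough illustration of the one-variable rule, here’s a small sketch (the campaign fields and audience names are hypothetical) of how an A/B/C/D test can be structured so that each variant differs from the control in exactly one field:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Variant:
    name: str
    creative_id: str
    audience: str
    bid_strategy: str

# Baseline (control) configuration
control = Variant(name="A_control", creative_id="video_v1",
                  audience="broad", bid_strategy="target_cpa")

# Each test variant changes exactly one field relative to the control,
# so any difference in results can be traced back to that field.
variants = [
    replace(control, name="B", audience="hobby_outdoors"),
    replace(control, name="C", audience="hobby_home_improvement"),
    replace(control, name="D", audience="hobby_tech"),
]
```

Keeping the variant definitions this explicit makes it obvious at a glance when a proposed test accidentally changes two things at once.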
Consider how WeDo, an online marketplace in New Zealand, used YouTube’s Video Experiments tool to raise awareness as a newcomer brand. Audience segments were the only variable across two test groups. WeDo served the same video ad creative to different audiences based on their hobbies and interests and ultimately discovered an effective way to boost awareness and consideration by more than 50% from its most relevant consumers.
4) Take action based on the results — don’t let your experiment go to waste
Assuming you’ve taken care of the previous steps, now comes the easy part: learning from the results. Whether your hypothesis was proven or not, you’ll be left with new insights or best practices for conducting future experiments and delivering more effective campaigns.
Read what a few brand leaders learned from conducting their own experiments and how they used those lessons:
Featured Quotes
"Using Geo Experiments helped us prove beyond a doubt that generic search drives incremental conversions online and in-app. It also helped us realize the value of using a non-last-click-attribution model."
"Now that we can accurately attribute incremental impact of online advertising on offline arrivals to New Zealand, we’re able to further optimize our creative, channels, and audiences across the consumer journey."
"We’re planning to invest in using online video in specific regions to connect with even more global shoppers. Moving forward, we’ll also continue using comprehensive measurement methods to accurately assess our performance."
Learning from the unexpected
Before you dive into your own experiments, keep in mind that a successful experiment isn’t necessarily the same as a successful campaign. Even if the experiment results aren’t what you initially expected, it’s not a failure or a loss — it’s an opportunity to learn.
And sometimes, unexpected results can lead to previously underestimated or untapped opportunities. So keep an open mind, embrace an experimental mindset, and take action to make better campaign decisions backed by proven results.