How do you create good, long-term account improvements through tests and experiments? For us, it comes down to three factors:
Understanding your product, understanding your customers and creating hypotheses on how to connect the two
Designing new variants (tests) that have the highest chance of improving account performance
Driving the largest possible gains from those new variants
The challenge, of course, is creating as many ‘winning’ tests as possible. A common process for creating a new ad copy test often looks something like this:
Draft ad copy variants
Share them with your client/team/manager
Select one that is expected to perform best
Launch the test
The problem with this process is that it can be highly subjective. It leaves your testing strategy open to all of the natural biases that people develop over time. Heuristic analysis by experts certainly has its benefits, but the challenge comes in scaling that ability.
We wanted a more robust framework: one that removed as much subjectivity as possible and added experimental rigor to the process.
We aimed to develop a system that ranks our test hypotheses, both for RSAs and ETAs.
The ability to use the framework for both ad types is key: while ETAs are currently running on many accounts, as of June 2022 RSAs will become the only search ad type for which you can create new ads.
Within this structure, we ask a range of questions about each test hypothesis and its ad copy; once answered, the responses are converted into a score per hypothesis.
You can then use these scores to rank all of your testing options. This approach will:
Give your test hypotheses more objective ratings (allowing the test with the highest chance of success to rank first)
Build a culture of structured, informed decision-making driven by data
Allow you to create ad copy tests that have a higher probability of being successful
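To make the ranking idea concrete, here is a minimal sketch of a hypothesis-scoring script. The criteria names, weights, and 1–5 rating scale are all assumptions for illustration; the article does not specify the actual questions or scoring rules used in the framework.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights -- placeholders, not the
# framework's actual questions.
CRITERIA_WEIGHTS = {
    "backed_by_account_data": 3,   # is the idea supported by performance data?
    "expected_traffic_volume": 2,  # will the test reach significance quickly?
    "ease_of_implementation": 1,   # how cheap is it to launch?
}

@dataclass
class Hypothesis:
    name: str
    ratings: dict  # each criterion rated 1-5

    def score(self) -> int:
        # Weighted sum of the answers to each question
        return sum(CRITERIA_WEIGHTS[c] * r for c, r in self.ratings.items())

hypotheses = [
    Hypothesis("Add price to headline", {
        "backed_by_account_data": 5,
        "expected_traffic_volume": 4,
        "ease_of_implementation": 3,
    }),
    Hypothesis("Question-based CTA", {
        "backed_by_account_data": 2,
        "expected_traffic_volume": 5,
        "ease_of_implementation": 5,
    }),
]

# Rank all testing options, highest score (best chance of success) first
for h in sorted(hypotheses, key=lambda h: h.score(), reverse=True):
    print(f"{h.name}: {h.score()}")
```

Because every hypothesis is scored against the same questions, the ranking is repeatable and far less dependent on any one person's intuition.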