How confident are you in your advertisement?
Whether you want your ad to increase sales or raise awareness of your brand, you need one that you know will work. There’s no better way to be confident in an ad than seeing that it tests well.
Ad testing is the process of putting different ads in front of a sample of your target audience and asking for feedback on them. You can run ad tests on an entire ad or specific aspects of it, and collect feedback on anything from how much the ad stands out to how believable your audience finds it.
Throughout this page we’ll talk about how to perform ad testing and the best practices when performing it, but first, let’s review why it’s so important.
The amount brands spend on ads is astronomically high (more than $500 billion worldwide), and it’s only growing.
Why are brands spending so aggressively on ads? Because they’re effective. For example, consumer packaged goods (CPG) brands see a solid return on their ads across media types. When you incorporate pre-launch testing to home in on particular ad concepts, the chances of a strong return only grow.
Measuring advertising effectiveness through testing offers 4 additional benefits:
Now that you know why assessing advertising effectiveness is so important, you’re ready to dive into the process of testing your ads. Here are the 4 key steps you’ll need to follow:
Your stimuli can take the form of videos, images, copy, audio, or a combination of these elements. As you decide on the formats your stimuli take, also consider their content based on the places they’ll be used. For instance, you might want to tailor LinkedIn ads to professional networkers, as opposed to Facebook ads or Google search ads.
Your team should try to use at least 3 stimuli per test to get a fair assessment of your target audience’s preferences and opinions. But the number of stimuli you choose also depends on how you want to present them: whether you plan to use a monadic survey design or a sequential monadic survey design.
A monadic survey design is when you ask each respondent for feedback on a single stimulus. Once all your responses come back, you’d then compare them across your stimuli to pick the winning concept.
This survey design allows you to ask more questions about each stimulus. It’s also more likely to result in a relatively short questionnaire, which benefits your survey’s completion rate and prevents respondents from racing through your survey. However, since you’re only showing each respondent one stimulus, you’ll need to target a larger audience. This can prove costly, and it might not always be feasible.
A sequential monadic survey design is when you present each respondent with 2 or more stimuli and ask the same questions about each. Once you’ve collected your responses, you can directly compare your stimuli using a single survey.
This type of design lets you target a relatively smaller audience than a monadic design, which makes it more cost-effective and feasible. However, if you want to keep your survey to a manageable length, you might not be able to ask as many questions about each stimulus.
Bottom line: If you’d prefer to use a monadic design, you should probably stick to testing only a few stimuli; if you want to use a sequential monadic design, you can feel comfortable testing more.
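To make the trade-off concrete, here’s a quick sketch of the respondent counts each design requires. The target of 100 responses per stimulus is an invented number for illustration, not a recommendation:

```python
import math

def respondents_needed(responses_per_stimulus, num_stimuli, stimuli_per_respondent=1):
    """Total respondents needed to hit a target number of evaluations per stimulus."""
    total_evaluations = responses_per_stimulus * num_stimuli
    return math.ceil(total_evaluations / stimuli_per_respondent)

# Monadic: each respondent sees 1 stimulus.
print(respondents_needed(100, 3, 1))  # 300

# Sequential monadic: each respondent sees all 3 stimuli.
print(respondents_needed(100, 3, 3))  # 100
```

The gap widens as you add stimuli, which is why a monadic design pushes you toward testing only a few concepts.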
What makes one ad better than another? The metrics you measure can help you decide.
Here are the top ad metrics to consider:
The value of each metric depends on your situation. Say the top goal for your ad is to influence sales. Purchase intent may then be the most important metric. However, if the focus is on brand differentiation, uniqueness may be more of a priority.
You can measure any metric in your ad testing survey using a Likert scale. Each question can follow the formula, “How (metric) is the ad?” where your answer choices range from “Extremely (metric)” to “Not at all (metric).”
For example, here’s how the question can look if our metric is believable:
How believable is the ad?
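Since every metric plugs into the same formula, you can template these questions. A minimal sketch: the endpoint labels come from the text above, while the middle labels (“Very,” “Somewhat,” “Not so”) are common 5-point Likert choices assumed for illustration:

```python
def likert_question(metric):
    """Build a 5-point Likert question from the "How (metric) is the ad?" formula.
    Only the endpoint labels are from the text; the middle labels are assumed."""
    question = f"How {metric} is the ad?"
    choices = [
        f"Extremely {metric}",
        f"Very {metric}",
        f"Somewhat {metric}",
        f"Not so {metric}",
        f"Not at all {metric}",
    ]
    return question, choices

question, choices = likert_question("believable")
print(question)  # How believable is the ad?
```

Swapping in “unique,” “appealing,” or any other metric gives you a consistent question bank across stimuli.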
Your ad testing survey should also ask screener, category, and demographic questions. Learn about each of them in our most comprehensive resource for concept testing.
Ultimate guide to running market research: This resource has everything you need to run market research, from planning your study to taking action!
Ad testing survey template: This expert-certified survey template can help you brainstorm your questions. You can also use it and edit it however you’d like.
SurveyMonkey Audience: Our global consumer panel allows you to survey people in more than 130 countries.
You can only determine the quality of your ads once your target audience evaluates them. Find your target profile of individuals and give them an opportunity to provide feedback in one of 2 ways:
Once you’ve collected responses, you’re ready to compare the ads against each other. To help you focus on the data you care about most and to make your analysis more straightforward, we recommend you use Top 2 Box scores. This method of analysis combines the 2 most positive answer choices for each metric and turns them into a single percentage. For instance, if 30% of respondents said an ad was extremely believable and 20% said it was very believable, your Top 2 Box score would be 50%.
You can then take an ad’s Top 2 Box scores across every metric and compare them to the other ads’.
In the example above, we calculated the Top 2 Box scores for 2 ads across 3 metrics. Ad B wins on purchase intent and appeal, while Ad A wins on believability. Assuming you consider each metric equally important, you should pick Ad B over Ad A.
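The Top 2 Box math is simple enough to sketch in a few lines. The response counts below are invented for illustration; Ad A’s believability counts reproduce the 50% example above (30 “extremely” plus 20 “very” out of 100 respondents):

```python
def top2box(counts):
    """Share of responses in the 2 most positive answer choices.
    `counts` is ordered from most positive to least positive."""
    return sum(counts[:2]) / sum(counts)

# Hypothetical counts per answer choice (most to least positive), 100 respondents each.
ads = {
    "Ad A": {"believability": [30, 20, 25, 15, 10],
             "purchase intent": [20, 15, 30, 20, 15]},
    "Ad B": {"believability": [25, 20, 25, 15, 15],
             "purchase intent": [35, 20, 20, 15, 10]},
}

for metric in ["believability", "purchase intent"]:
    winner = max(ads, key=lambda ad: top2box(ads[ad][metric]))
    print(f"{metric}: {winner} at {top2box(ads[winner][metric]):.0%}")
```

With these made-up numbers, Ad A leads on believability while Ad B leads on purchase intent, mirroring the split described above.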
Don’t forget to look at the responses from your open-ended questions (those that don’t include answer choices). Our word cloud can help you quickly spot the key takeaways on each ad.
We’ve only scratched the surface on running a concept test with ads. To get a more comprehensive understanding of each step, read “The ultimate guide to concept testing.”
Alternatively, you can try in-market testing. This method pushes your concepts live and measures their performance through A/B testing (where a certain percentage of people see each ad). In-market testing is valuable because it tells you how people actually respond to the ads. But it can be a riskier and costlier approach, as some of your concepts may perform poorly. It’s a good idea to use both methods: Test ads before they go live, then A/B test the shortlist of winners to see how each actually performs.
Begin ad testing as soon as your product is introduced. At this point, since everything about your ads will be brand new to your audience, you’ll want to look for broad, directional feedback. Home in on the messages that resonate the most, and move forward from there.
As your product develops, your testing should become increasingly nuanced. For instance, when your product is in the maturity phase, you might be testing different image concepts for the ad. Then, when your product reaches its saturation phase, you might have it so dialed in that you’re just testing the colors of the image you plan to use.
The idea of running several ad tests throughout the product life cycle is rooted in agile market research—a research methodology that involves frequent rounds of data collection to account for an organization’s needs over time. It offers a manageable process for running research and it empowers your team to make better decisions more often.
Here are 5 things to stay on top of as you begin testing:
Well done! You’ve learned why ad testing is important, discovered how to run your own test, and even got some tricks under your belt to ensure that your future tests run smoothly. Now you can rest easy knowing that the days of launching high-risk ads are over.