Brad Geddes / PPC Geek
Official Google Ads Seminar Leader.
Author of Advanced Google AdWords.
Co-Founder, Adalysis.
(703) 828-5811

The Complete AdWords Audit Part 10: Testing

This is a continuation of the AdWords Audit Series. You can see previous parts here: Introduction, Goal setting, Measurement, Campaign Settings & Bid Adjustments, Ad Extensions, Impression Share & Auction Insights, Quality Score, Account Structure, Keywords & Match Types and Ad Copy.

“Always Be Testing”
– Any serious online marketer

One of the greatest things about online marketing is that you can test anything that generates a click or conversion (which is just about everything): keywords, bids, ads, landing pages, your checkout process, etc.

It’s also the only way to scientifically keep improving the performance of your account and website. That’s why the most successful advertisers and websites in the world have one thing in common: they test everything in an almost obsessive manner, running hundreds of tests each month.

Depending on the amount of traffic you have, you may not need to run hundreds of tests a month, but you need to be continuously testing, as often as you possibly can with the traffic you do have. How else will you find out what works best?

To run a test, you basically need 3 things:

  1. The fun part: generating an idea to test. Ideas can come from anywhere: while you’re in the shower, something you saw in an ad or on a website, something the HiPPO (Highest Paid Person’s Opinion) suggested, or something to actually challenge the HiPPO. Wherever the idea comes from, the best response is always: “let’s test it”. In the end, whatever generates the most profit should prevail.
  2. The work part: actually implementing the test: adding a keyword, writing a new ad, creating a new version of the landing page, etc.
  3. The tedious part: waiting and testing for statistical significance. But luckily there are tools out there to do this work for you, which you’ll find at the bottom of this post.

So what do I mean by testing? There are actually 2 types of testing you can do within an AdWords account, and neither ever stops:

  • Split testing (A/B testing): running a controlled experiment with 2 or more variants (e.g. 2 ads in an ad group) where traffic is split between these variants. After a while (depending on the relative difference in performance and the amount of traffic), you’ll find that one of the variants significantly outperforms the other(s). The winner gets to stay and should be challenged by a new variant as soon as possible.
  • Trying out new elements, like adding new keywords, campaign types, networks, etc. This is part of the continuous search for new opportunities: some will work and some won’t, but if you don’t try it, you will never know. In this case there’s nothing to split test in the traditional sense, as you’re adding something new, but obviously you will find out if this addition is worthwhile.

In this post I’ll focus on testing within PPC. Landing page best practices (including how to test them) will be the subject of a future post in this series.

Split Testing Ads

If there’s just one element in your account you should continuously be testing, it’s your ads. They strongly influence your CTR, and therefore your quality score and the amount of traffic you’re getting from paid search. CTR is also the metric for which you’ll reach statistical significance the fastest, so you’ll quickly learn what works and what doesn’t. Applying all these learnings adds up to huge improvements in your account over time.
Ads also influence conversion rate, but it will take longer to reach significance for this metric, as you’ll obviously have far fewer conversions than clicks. But if you do reach significance for conversion rate, you should definitely take this into account and test for the ultimate metric: profit per impression.
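To make the “profit per impression” idea concrete, here’s a minimal sketch with entirely hypothetical numbers. It shows how the ad with the higher CTR can still lose on profit per impression, which is why it’s worth testing for this metric once you have enough conversion data:

```python
# Hypothetical data for two ads in the same ad group.
# Ad A has the higher CTR, but Ad B earns more profit per impression.
def profit_per_impression(revenue, cost, impressions):
    """Profit earned per ad impression: (revenue - cost) / impressions."""
    return (revenue - cost) / impressions

# (impressions, clicks, revenue, cost) -- illustrative values only
ad_a = (10_000, 500, 2500.0, 750.0)   # CTR 5.0%
ad_b = (10_000, 400, 2800.0, 600.0)   # CTR 4.0%

for name, (imps, clicks, revenue, cost) in [("A", ad_a), ("B", ad_b)]:
    ppi = profit_per_impression(revenue, cost, imps)
    print(f"Ad {name}: CTR {clicks / imps:.1%}, profit/impression {ppi:.4f}")
```

With these numbers, Ad A earns 0.175 per impression and Ad B earns 0.22, so Ad B should win the test despite its lower CTR.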

If you need inspiration on what you could be testing in ads, check the ‘Miscellaneous Ad Copy Tips’ at the bottom of the Ad Copy part of this series, take a look at the highly useful testing framework in An Endless Supply of AdWords Ads for your Split-Test Experiments, check out the infographic with 26 ideas for split testing your search ads, and be sure to follow the Ad Text Tips from the Boost Blog (especially their ‘Win of the Week’ posts).

When testing ads, keep these best practices in mind:

  • Set your ad rotation settings to rotate indefinitely. This isn’t the default campaign setting, so you’ll have to change this yourself. The reason you’ll want to rotate is twofold:
    • You want to choose the winning ad based on your metrics and your significance requirements, not Google’s. Even if you were testing purely for CTR or conversions, Google doesn’t show you the confidence levels and simply starts showing the ‘best performing’ ad more and more often.
    • You want to be notified whenever there’s a winner. That way, you can immediately pause the loser and write a new ad. Even if Google’s optimize setting always picked the right winner for you, you wouldn’t know it unless you went into each ad group and discovered that one ad has a much higher ad-served percentage. That’s something you don’t want to spend time on anyway, and that’s where the paid tools listed below come in.
  • Determine the goal of your test. In most cases the purpose should be to create an ad with a higher profit per impression, without hurting quality score (too much). But maybe you have a low quality score (or don’t want to wait for significant conversion differences) and you only care about increasing CTR for the moment.
    Or maybe you have too many low quality clicks and you’re willing to purposefully lower CTR by qualifying, for the sake of improving conversion rate and saving costs.
  • Test 2 to 4 ads (per device) simultaneously. You’ll get the fastest results when testing just 2 ads. But in high volume ad groups you could also test 3 or 4 ads at the same time. If you advertise on mobile devices and have mobile preferred ads, then you should also test 2 to 4 mobile preferred ads and consider this as a separate test.
  • Test one thing at a time. Whether they’re big or small changes, once the winner has been declared, you should be able to clearly pinpoint why that specific ad performed better. That’s impossible if you change multiple things at once. A simple way of testing one thing at a time is by testing 1 line of the ad copy at once and keeping the other lines the same.
  • Use aggregate testing. Especially for low volume ad groups, it may take forever to reach statistical significance. And you don’t want to wait forever. One way to solve this is to use the same text (usually one or both description lines) in multiple ad groups and aggregate the data for these lines (or words) to see what works best.
    An easy way to aggregate is to label all ads that share components, so you can analyze the performance on a label level (under the Dimensions tab). Another way is to create a Pivot Table in Excel to aggregate.
    But beware: when you aggregate data, make sure the ad groups are as similar as possible. So don’t aggregate branded and non-branded, search and display, ads in high positions and ads in low positions, etc.
    If you do, you can run into Simpson’s Paradox, which could lead you to draw the wrong conclusions from aggregate statistics.
  • Make sure the results are accurate and significant. This means the ads should have been active simultaneously, conversion tracking is in place, the ads have been in the same (or very similar) positions and obviously: both ads have had sufficient clicks (and conversions) for the differences to be significant. The tools at the bottom of this post will help you determine whether the differences are significant. You should aim for a 95% confidence level or higher.
    So you don’t want to draw conclusions too soon, but on the other hand, you don’t want to keep running significantly underperforming ads. So make sure you’ll know right away whenever an ad is underperforming.
  • Keep track of your learnings. Before you know it, you’ll have tested dozens if not hundreds of ideas. So make sure you (and your colleagues) don’t forget what has worked and what hasn’t by logging your test results. This prevents you from testing something you’ve already tested in the past (although sometimes, this is actually worth trying) and helps you generate truly new ideas.
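The significance check the tools perform can be sketched with a standard two-proportion z-test. This is a minimal illustration with made-up click and impression counts, not the exact math any particular tool uses:

```python
import math

def ctr_significant(clicks_a, imps_a, clicks_b, imps_b, confidence=0.95):
    """Two-proportion z-test: is the CTR difference between two ads
    significant at the given confidence level? Returns (z, p_value, verdict)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = abs(p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return z, p_value, p_value < (1 - confidence)

# Hypothetical test: Ad A has 200 clicks / 5,000 impressions (4.0% CTR),
# Ad B has 150 clicks / 5,000 impressions (3.0% CTR).
z, p, significant = ctr_significant(200, 5000, 150, 5000)
print(f"z = {z:.2f}, p-value = {p:.4f}, significant at 95%: {significant}")
```

In this example the difference is significant at the 95% level, so Ad B could be paused and replaced by a new challenger. With fewer impressions, the same CTR gap might not be significant yet, which is exactly why you shouldn’t draw conclusions too soon.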

Split Testing Keywords, Bids, Ad Group Structure and Match Types

It’s probably one of the least used features within AdWords, but you’re actually able to split test all these elements (and others) with AdWords Campaign Experiments (ACE). I won’t dive into how to set up experiments or how to monitor them, as Google provides clear guides, and some useful examples can be found in these 1-minute videos:

  • Keywords & match types: test the impact of adding a new keyword or different match type.
  • Bids: test the impact of changing your bids. This is especially useful if you’re looking for the optimal combination of efficiency and volume for a keyword or ad group. Usually a higher bid should lead to a higher CPA or a lower ROAS, but it should also lead to more conversions or revenue. By running this test, you can actually see the efficiency and volume combinations for different bids and choose the one that delivers you the most profit.
  • Ad Group structure: although you can be pretty sure that a tightly themed ad group should outperform a less specific ad group, you can actually test this to be up to 99.9% certain that this is not due to chance.
  • Ad Creatives: this option doesn’t add that much to regular split testing without ACE as described above, besides seeing confidence levels in the AdWords interface and easily aggregating the results at the campaign level.
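The bid experiment logic above can be sketched as a simple profit comparison. The numbers here are hypothetical experiment outcomes, only meant to show why you pick the bid with the highest profit rather than the best CPA or ROAS:

```python
# Hypothetical outcomes of a bid experiment for one ad group:
# higher bids buy more conversions, but at a worse CPA and ROAS.
# Profit, not CPA or ROAS alone, decides the best bid.
scenarios = [
    # (max CPC, clicks, conversions, revenue, cost)
    (0.50, 1000, 40, 2000.0, 500.0),
    (0.75, 1600, 60, 3000.0, 1200.0),
    (1.00, 2100, 72, 3600.0, 2100.0),
]

for bid, clicks, conv, revenue, cost in scenarios:
    cpa = cost / conv
    roas = revenue / cost
    profit = revenue - cost
    print(f"bid {bid:.2f}: CPA {cpa:.2f}, ROAS {roas:.1f}, profit {profit:.0f}")

best = max(scenarios, key=lambda s: s[3] - s[4])
print(f"most profitable bid: {best[0]:.2f}")
```

Here the lowest bid has the best CPA and ROAS, but the middle bid delivers the most profit, which is the combination of efficiency and volume you’re really after.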

There’s another reason I won’t go too deep into ACE: Google has announced that it will take split testing to a whole new level with Drafts and Experiments. This promises a more sophisticated, user-friendly and powerful way of split testing compared to ACE.
And yes, this means you can even split test campaign settings, like whether or not you should use automated bidding. That’s fantastic news for all of us.
The bad news is that we’ll have to wait until sometime in the first half of 2015 before this feature is released. Until then: try out the possibilities with ACE, the results may surprise you!

Testing New Opportunities

Besides scientific testing with hypotheses, controls, experiments and confidence levels, there’s also the good old “let’s try something new”.

So what kind of things could you add or change in your account?

Suggestions from the Opportunities tab

These have become a lot more useful than they used to be. You will even find suggestions to lower your bid for locations to improve performance. But you’ll also find the suggestion to set your rotation setting back to optimize for clicks or conversions. As discussed above, you don’t want to do this.

So make sure to regularly take a look at the Opportunities tab, and use your common sense to decide which one(s) to apply.

Dynamic Search Ads

If you haven’t tried Dynamic Search Ads (DSA) yet, you definitely should, especially if you sell a lot of different products (though it’s worth trying even if you don’t).
Any skepticism is understandable, as you’re letting Google decide which search queries to target based on the content on (parts of) your website. That’s why you always need to keep a very close eye on DSA campaigns.
But for mature accounts, it gets harder and harder to come up with new keywords. So in addition to traditional keyword research with the tools listed at the bottom of the Keywords and Match Types part of this series, DSA can help you find additional keywords and increase your reach with much less effort (but more risk).

Once you know how to create a DSA campaign, keep these best practices and ideas in mind:

  • Create a separate campaign for Dynamic Search Ads.
  • Start with a daily budget you can afford to test with.
  • Initially, set your max CPCs lower than most (if not all) CPCs in the rest of your account. You can always increase bids later if you’re happy with the first results or want more volume from your DSA campaign.
  • Add negative keywords beforehand: words you don’t want to be matched on anywhere and keywords that are already active in other campaigns (especially your brand name, including misspellings).
  • Review the search terms report often (weekly at least) and add irrelevant or underperforming queries as negatives. And on the other hand:
  • Promote search queries with 2 or more conversions to their own ad group in a regular campaign (and add them as a negative afterwards).
  • Test and optimize the description lines of your ad copy (you can’t control the headlines).
  • Refine your campaign with different targets per ad group.
  • Don’t forget to add sitelinks and other extensions to your DSA campaign.
  • Combine DSA with RLSA to create a separate RDSA campaign. That way, you’ll only target your site visitors, which makes DSA much less risky. You could start with this if you have a lot of traffic on your site and prefer to start small.

AdWords Betas

Once you (or your agency) spend a significant budget on AdWords, you may have a Google rep, who can keep you up to date on the latest product developments and (upcoming) betas.
In that case, participate in any beta you can that makes sense for your business, as it can give you a unique (but temporary) advantage over advertisers that aren’t participating in the beta.

As I started this post with the well-known “always be testing” mantra, let’s wrap up with the following very appropriate Google mantra:


As mentioned earlier in this post, there are several tools to help you out with testing, especially with the tedious work of checking for statistical significance.

Free Tools

  • Cardinal Path PPC Ad Testing Tool: if you want to check for CTR and conversion rate at the same time, use this tool.
  • Chad Summerhill’s Excel Spreadsheets: Chad created 2 great Excel downloads to quickly calculate the significance for different confidence levels when it comes to CTR, conversion rate and conversions per impression. All you have to do is share the page(s) on Twitter or Google+: file 1 including sample size calculation and file 2 with explanation within the spreadsheet (but without the needed sample size).
  • Note: Chad’s site and these files are currently offline.
  • Use this tool if you quickly want to find out the significance of CTR differences between 2 ads.

Paid Tools

  • AdAlysis: my favorite ad testing tool, fully dedicated to making your ad testing life easier. No more manual checking and waiting for significance, as AdAlysis will automatically show you where tests have reached significance. Scroll down the AdAlysis homepage to get an idea of the features or watch the instructional videos.
  • AdBasis: if you want to take your automated ad testing to the next level, AdBasis offers a multivariate solution that calculates which ad combination is best for each conversion goal.
  • AdProof: testing is great, but it can still be an expensive way to learn. What if you could use crowdsourcing to test your ads instead of paying Google for clicks? That’s what the AdProof platform offers: cheaper and faster test results.
  • Hero Pro: both the Ad Automator and Split Test Champion tools in the Hero Pro suite will help you with your ad copy testing.
  • Optmyzr A/B Testing for Ads: one of the Optmyzr tools that makes ad testing easier by doing the math for you.


Testing: Your Audit Checklist


  • Is the ad rotation setting set to rotate indefinitely for all your campaigns?
  • Do all your ad groups contain 2 to 4 active ads (per device)?
  • Have you paused all significantly underperforming ads and replaced them with new challengers?
  • Do you use AdWords Campaign Experiments to split test other AdWords elements like keywords, bids and ad groups?
  • Do you regularly check the Opportunities tab and apply/test the suggestions that make sense?
  • Have you tried Dynamic Search Ads (DSA), following the mentioned best practices?
  • Have you tried Dynamic Search Ads in combination with Remarketing Lists for Search Ads (RLSA)?
  • In case you have a Google rep: are you participating in any betas that you’re eligible for and that make sense for your business?
  • Bonus: do you use one of the mentioned paid tools to make ad testing easier and more effective?

This is a guest post by Wijnand Meijer, Quality & Learning Manager at iProspect Netherlands, an online media agency based in Amsterdam. He created his first AdWords campaigns in 2006 and is currently helping advertisers and coworkers alike take their Paid Search to the next level.

Opinions expressed in the article are those of the guest author and not necessarily bgTheory. If you would like to write for Certified Knowledge, please let us know.
