This week, we’re celebrating the release of Text Ad Zoom with a 5-part Q&A series featuring the authors who wrote the articles highlighted in The Ultimate List of PPC Ad Testing Resources. Each day this week, we’ll post a different question along with the authors’ answers. Yesterday, we asked “What are the biggest text ad testing mistakes?” Read below for today’s question and answers (in no particular order). Want to see all of the great ClickEquations features in action? Request a demo.
Text Ad Optimization Q&A #2: How Do You Pick Which Ads To Test First?
Brad Geddes: I like to start with completely different ads. One might have a price, another DKI, another a strong call to action, and another built around benefits, and so on. Once I find what type of ad works well for that keyword or buying-cycle stage, I’ll move to testing more incremental changes based on the winning ads.

Andrew Goodman: Every ad group should start with an attempt to nail the correct fit and tone for the imagined prospect, and it should be rotated with 2-3 additional (alternate-theory) ads to send you signals as to whether your approach is working. What comes first in your “attempt to nail it”? I like to use something I call a “plain ad”: write the most concise, clear headline possible and convey cues about positioning (quality, speed, shipping, etc.) in the body copy. Consider adding your company’s USPs if you’ve already brainstormed them.

Jessica Niver: If I run into time constraints (can you imagine?), I focus my energy on: high-CPL, high-conversion ad groups; high-conversion, high-competition ad groups regardless of CPL; and, for ecommerce clients, any ad groups with multiple sale offers that change frequently or that can be tested against one another. I’d also keep a list of ad groups with a high seasonal/holiday bias and make sure those get attention at the right time of year or month. Low-CTR, low-Quality-Score ad groups matter too, though those often need work on keyword-ad relevancy more than ad text testing. Because it’s testing, the ads you add won’t always improve performance immediately. Maybe they’re bad and you shouldn’t use that messaging, and that’s what the test shows you. So despite the above, I try not to test in all of my high-lead or high-revenue ad groups simultaneously; that maintains a performance safety zone so I don’t damage my clients’ shorter-term performance if a test goes unexpectedly wrong.

Chad Summerhill: I start with the high-volume ad groups first.
Then I look at any ad groups that are performing well below the campaign’s average performance (CTR, CR, PPI).

Amy Hoffman: I generally select the ads I think will perform best. Knowing the account helps, and I usually have a good idea which ads will work. I take into account the number of keywords behind each ad, those keywords’ search volume and Quality Scores, and the relevance of the ad to the landing page.

Erin Sellnow: For regular testing, I focus on my underperforming ad groups first, ones with a low CTR or Quality Score, since I need to improve their performance in order to better the entire account. If I’m doing some general experimenting, though, I’ll look at my high-traffic ad groups first so I can get baseline results quickly. From there I can tweak the test in other ad groups, but at least I know whether the general idea is going to work without waiting months for results.

Pete Hall: Usually I’ll start with a tried-and-true CTA that the client uses in other marketing efforts and then build off that. Zappos is known for great customer service; others pride themselves on free shipping, ease of use, affordability, and so on. That’s a great way to start. If the client has received big awards or accolades, e.g. “Product of the Year,” that’s a great starting point as well.

Ryan Healy: The easiest way to decide is simply to pick the ad you think is most persuasive and test it first. Then test the next most persuasive ad, and so forth. If you have three ads you want to test, there is no scientific process that will tell you in advance which will perform best. You just have to trust your gut and start testing.

Jeff Sexton: Well, there are multiple schools of thought on this.
Obviously, if the rest of your account management is messed up, you may want to fix that first, or test the ads that sit in the soundest ad groups with the best bid management, as you don’t want to watch your hard work get invalidated by a major account reorganization. Similarly, you’d want to start where the landing pages have been optimized or have proven to be good performers. Although PPC testing can give you insights that help with your landing pages (and vice versa), it always helps to test ads against a landing page that’s already converting well. But assuming your ad groups, AdWords management, and landing pages are all up to snuff, you’d probably want to focus on the ads responsible for the bulk of your profit. Start where improvements will make the maximum difference.

Tom Demers:
- Cost – Which groups are spending the most? These are the areas where even small percentage improvements in metrics like conversion rate and click-through rate can have a large impact.
- Opportunity for Improvement – Larger groups showing indicators of problem ads, such as low CTRs, low Quality Scores across the board, or low conversion rates, can be good candidates for optimization. Another good thing to look at here is “internal benchmarks” or peer calculations.
- Time between tests – Another thing we’ve found to be a great indication that ad copy could be working harder is when months (or years) have passed between tests. There is effectively an infinite number of variations and approaches you can take when testing an ad, so a stale ad almost always offers a great opportunity to find a variation that resonates better with prospects.
Crosby Grant: I have a two-part answer: 1) where to start, and 2) what to start with.

1) Where to start: I try to always start testing in the ad group most likely to yield the biggest improvement toward the goals I’m trying to meet. Then I move on to the next one when the expected return on time spent on the current one drops below the expected return on time spent on the next. Most often, that’s the ad group with the most traffic, because even small changes there will produce relatively large results in your metrics. It might also be the ad group with the least-optimized ads, because it should be easy to get big improvements there.

2) What to start with: That depends. Early in an optimization cycle I try to start with the most diverse set of ads I can, because I don’t yet know which ones will lead to the gains I’m looking for. In a more mature testing routine, we’re probably down to refining subtleties and squeezing out that last bit of CTR, margin, or whatever we’re seeking to maximize.

Rob Boyd: My decision is based on the principle of doing what will have the highest impact first. Generally the first place I look is the highest-spend campaign or ad group. That isn’t always the case, however. For example, the high-spend campaign might already be performing within desired goal metrics, which might sway my decision toward a campaign that is outside its goal metrics but that I feel has great potential. The argument could be made that improving the campaign already within goal metrics could have a greater impact based on spend level alone, but attacking the lower-performing campaigns or ad groups one by one can collectively add up to a greater impact and a more well-rounded account.

Greg Meyers: First of all, a text ad test should not be a “one and done” thing. It requires multiple rounds of testing.
Depending on the situation, I would suggest taking an existing text ad that already has conversions and a decent CTR in its history and using that as a starting point. The reason is that I want to make sure there is potential for success “after the click,” as CTR should not always be the deciding factor.

Bonnie Schwartz: When I start off, I like to test two completely different description lines and keep the headline constant. This somewhat contradicts statement A above, but I find that changing only little things right off the bat makes it hard to reach real findings. So I go for very different messaging in the first test to find a strong ad overall. Once I have messaging that works, I tweak from there, changing one variable at a time.

John Lee: The ads generating the best combination of CVR, CTR, and ROI are the ones I test first. These text ads are frequently the highest-volume ads, too, which speeds up testing.

Jon Rognerud: Start with the end in mind. Ask this: what goal or objective are you trying to reach? Then speak to that; write to that. And the word “consistency” comes to mind: you should first test the ads that match your landing page’s content, message, and offer most closely. Write different versions that speak to the same page and test those first.

Joe Kerschbaum: Test the ideas you think will win, then continue on that path. Test with bold ideas. Swing for the fences.

Learn More About The Authors
- Brad Geddes – Certified Knowledge
- Andrew Goodman – PageZero
- Jessica Niver – Hanapin Marketing
- Chad Summerhill – PPC Prospector
- Amy Hoffman – Hanapin Marketing
- Erin Sellnow – Hanapin Marketing
- Pete Hall – Room 214, a social media agency
- Ryan Healy – BoostCTR / RyanHealy.com
- Jeff Sexton – BoostCTR / JeffSextonWrites.com
- Tom Demers – BoostCTR / MeasuredSEM
- Bradd Libby – The Search Agents
- Crosby Grant – Stone Temple Consulting
- Rob Boyd – Hanapin Marketing
- Greg Meyers – SEMGeek / iGesso
- Bonnie Schwartz – SEER Interactive
- John Lee – Clix Marketing
- Jon Rognerud – JonRognerud.com
- Joe Kerschbaum – Clix Marketing