If you or your agency are split testing responsive search ads and comparing them to your expanded text ads (ETAs) based on increases or decreases in CTR or conversion rate, you’re not split testing responsive search ads correctly.
In fact, when you’re split testing responsive search ads, CTR and conversion rate don’t really matter.
At this point, coming from an agency that talks a lot about scientific split testing, you probably think we’ve lost our minds.
But hear me out.
Before we dig into how you should test responsive search ads correctly, here is a bit of background on what they are.
The responsive search ad format, not to be confused with dynamic search ads, allows your ads to show up to three headlines and two descriptions, letting you take up more space in Google Search.
It also incorporates an element of machine learning.
You provide Google with a number of different headlines and descriptions and Google tests different combinations to work out which ones should appear for each search to provide the best results.
Believe it or not, responsive search ads have now been around for about a year, having been introduced in early May 2018.
How should you split test responsive search ads?
Split testing responsive search ads is complex. To be clear, we are talking about split testing expanded text ads against responsive search ads to see which ad format performs best, as opposed to split testing responsive search ads against one another.
Scientific split testing requires that you keep the conditions of the split test consistent to ensure a fair test.
This is where issues start to occur when testing ETAs vs responsive search ads, as the two formats are not always shown to the same audience.
Google’s Matt Lawson says “There are all sorts of instances where you might end up serving [responsive search ads] impressions in a low CTR placement that you would never have qualified for before.”
So testing a responsive search ad vs an ETA in an A/B test would be a flawed test as responsive search ads have access to inventory that ETAs don’t.
As Frederic Vallaeys points out, this is inventory that has always been there, but the combination of headlines and descriptions within your ETAs resulted in an Ad Rank too low for your ads to be shown.
Ad Rank is Google’s formula for determining whether your ads will show, and is made up of your CPC bid and Quality Score.
Responsive search ads may now show for placements that they previously couldn’t as Google’s machine learning algorithms have been able to create a combination of headlines and descriptions that would be more relevant to the search term than your ETA.
This results in a higher expected CTR and ad relevance, two of the main components of Quality Score and therefore Ad Rank.
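To make the mechanism concrete, here is a minimal sketch of the idea. It assumes a simplified Ad Rank of bid × Quality Score and a hypothetical auction threshold; Google’s real formula includes additional signals (extensions, auction context, thresholds), and all the numbers below are invented for illustration.

```python
# Simplified sketch: Ad Rank ≈ CPC bid × Quality Score.
# Real Ad Rank uses more signals; numbers here are hypothetical.

def ad_rank(cpc_bid: float, quality_score: float) -> float:
    """Simplified Ad Rank: bid multiplied by Quality Score (1-10)."""
    return cpc_bid * quality_score

# Same £1.50 bid for both formats; only Quality Score differs.
eta_rank = ad_rank(1.50, 5)  # ETA with a less relevant headline combination
rsa_rank = ad_rank(1.50, 8)  # RSA combination with higher expected CTR / relevance

AUCTION_THRESHOLD = 10  # hypothetical minimum Ad Rank to enter an auction

print(eta_rank >= AUCTION_THRESHOLD)  # False: the ETA never qualifies
print(rsa_rank >= AUCTION_THRESHOLD)  # True: the RSA unlocks new inventory
```

The point is that at an identical bid, a more relevant headline combination can clear auction thresholds the ETA never could, which is exactly why the two formats end up serving on different inventory.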
This is a real game changer, as it means that you need to split test responsive search ads vs ETAs differently from a conventional split test.
Conventional methodology for a split test would dictate that you create two ads, a control and a test ad, within a single-keyword ad group.
You then run the test under the same conditions until you have enough data to determine that one advert performs better than the other based on it having a higher CTR or conversion rate.
That is, in essence, as close to a perfect ad test as you can get within Google Ads, considering that many variables change naturally, such as your competitors in the search rankings and your position among them.
When split testing responsive search ads, you need to take a more holistic, top-down view and look at the bigger picture: stop focusing on CTR and conversion rate as your split testing metrics.
You should focus on CPA and conversion volume as opposed to CTR and conversion rate when split testing responsive search ads.
The idea that conversion rate doesn’t matter, and that you should not be interested in the percentage of visitors who convert, may seem far-fetched. However, as Andy Taylor pointed out, “any conversion rate can be acceptable, so long as the price paid for the clicks results in an effective cost per conversion”.
He gives the example of the Search Partner Network, which converts at a lower rate than the Search Network, but because the clicks are cheaper, the overall CPA can be comparable.
We would suggest applying this thinking towards responsive search ads.
Some of the new inventory you’re appearing for may have a lower conversion rate, but provided that the CPCs are lower, the CPA and conversion volume that drive business results can still be effective.
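The arithmetic behind this is simply CPA = average CPC ÷ conversion rate. A quick sketch, using purely hypothetical numbers, shows how a lower conversion rate on new inventory can still land at the same CPA when the clicks are cheaper:

```python
# Hypothetical figures, purely to illustrate the CPA arithmetic:
# a lower conversion rate can still produce a comparable CPA
# if the clicks on the new inventory are cheaper.

def cpa(avg_cpc: float, conversion_rate: float) -> float:
    """Cost per acquisition from average CPC and conversion rate."""
    return avg_cpc / conversion_rate

existing_cpa = cpa(avg_cpc=2.00, conversion_rate=0.05)       # £2.00 clicks, 5% CVR
new_inventory_cpa = cpa(avg_cpc=1.20, conversion_rate=0.03)  # £1.20 clicks, 3% CVR

print(existing_cpa)       # 40.0
print(new_inventory_cpa)  # 40.0 — same CPA despite the lower conversion rate
```

In this invented example, the conversion rate drops from 5% to 3%, yet the CPA is identical at £40, which is why CPA and conversion volume, not conversion rate, are the fairer yardsticks.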
The same thinking applies to CTR: when you start to appear on inventory you were previously not appearing for, some terms may have a lower CTR yet drive more impressions and clicks overall.
Overall, we would suggest that you compare the number of conversions that responsive search ads generate, as well as their cost per conversion, against the ETAs within the campaign, to take into account new inventory the ads were previously not appearing on.
When split testing responsive search ads against ETAs, it’s important to take a holistic view and treat conventional split testing metrics such as CTR and conversion rate more loosely.
We would suggest that you use CPA and conversion volume as your metrics for your split test to account for the fact that responsive search ads are likely to appear on different inventory (search terms) than your ETAs.