Almost every team I work with has a vague sense that their paid-search ROAS isn't quite telling them the truth. They just don't have a way of putting a number on the gap between reported performance and real incremental performance. The good news is that you don't need to burn through a quarter of revenue to find out; you just need to set the test up carefully and resist the urge to declare a result early.
Why platform-reported ROAS is broken
Google Ads attributes a conversion to the click that preceded it. That definition implicitly assumes the conversion would not have happened without the click. For brand keywords, that assumption is false a lot of the time: the user typed your brand name because they already wanted you, and the click was a formality, not a cause. For non-brand, the incremental share of reported conversions typically lands somewhere between 20% and 60%, depending on the query.
Reported ROAS therefore overstates true incremental ROAS by a factor that nobody on your team can name. Incrementality testing is the only way to put a number on it.
The geo-holdout method (the cleanest)
Pick a list of regions matched on traffic volume and conversion rate. Pause your test campaign in half of them, leave it on in the other half, hold for four to six weeks, then compare conversion volume in the holdout group to a forecast based on pre-test trend. The lift is your incremental contribution.
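The comparison above is a simple difference-in-differences: use the control group's trend to forecast what the holdout regions would have done with ads still on, then measure the shortfall. A minimal sketch, with illustrative numbers rather than real data:

```python
def incremental_contribution(pre_holdout, pre_control, test_holdout, test_control):
    """Estimate conversions the paused campaign was actually driving.

    Forecast the holdout group's test-period conversions by scaling its
    pre-period volume with the control group's observed trend, then
    compare forecast to actual. The shortfall is the incremental lift.
    """
    control_trend = test_control / pre_control   # how conversions moved with ads untouched
    forecast = pre_holdout * control_trend       # counterfactual holdout volume, ads still on
    return forecast - test_holdout               # conversions lost by pausing = paid contribution
```

If the holdout regions did 1,000 conversions pre-test and 900 during the pause while control held flat at 1,000, the campaign was driving roughly 100 conversions over the period; divide by the campaign's reported conversions in those regions to get its incremental share.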
Sample size is usually where these tests fall over. For B2B with ~50 conversions a week, you usually need 6 weeks across 8-12 regions per arm to detect a 20% lift with confidence. Below that you are reading noise.
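You can sanity-check the rule of thumb with a rough power calculation, treating per-arm conversion counts as Poisson and asking what relative lift a two-arm test can reliably detect at 95% confidence and 80% power. This is a back-of-envelope sketch, not a substitute for a proper power analysis:

```python
import math

def min_detectable_lift(weekly_conversions, weeks, alpha_z=1.96, power_z=0.84):
    """Approximate minimum detectable relative lift for a two-arm geo test.

    alpha_z: z-score for two-sided 5% significance.
    power_z: z-score for 80% power.
    Assumes conversions per arm are roughly Poisson, so the standard
    error of the relative difference between arms is sqrt(2 / n).
    """
    n = weekly_conversions * weeks       # conversions per arm over the test window
    se_relative = math.sqrt(2.0 / n)
    return (alpha_z + power_z) * se_relative
```

At ~50 conversions a week over six weeks, this lands at roughly a 23% minimum detectable lift, which is why the 20% target needs the full six weeks rather than two.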
The brand-search test (essential, painful)
If you have never run an incrementality test on your brand keyword, do this one first. The result is almost always uncomfortable. Pause brand-search in two or three regions for two weeks. Watch what happens to organic-branded traffic and direct traffic in those regions vs the control regions.
If organic + direct rises by roughly the volume of paid-brand clicks you removed, your brand-search incremental ROAS is close to zero. Most large brands find this. Some keep paying anyway to block competitor bidders, and that is a defensible reason. "It looks high in the dashboard" is not.
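The read-out reduces to one ratio: how many of the paused paid-brand clicks came back for free via organic and direct. A minimal sketch, with the function name and figures invented for illustration:

```python
def brand_incremental_share(paid_clicks_removed, organic_direct_rise):
    """Share of paused paid-brand clicks that did NOT come back organically.

    A result near 0 means brand search was mostly cannibalising traffic
    you would have received anyway; near 1 means it was genuinely
    incremental. Capped so over-recovery (noise) can't go negative.
    """
    recovered = min(organic_direct_rise, paid_clicks_removed)
    return 1.0 - recovered / paid_clicks_removed
```

If you removed 1,000 paid-brand clicks and organic + direct rose by 950 in the test regions relative to control, only about 5% of that paid traffic was incremental.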
What to test, in what order
Brand search first, because that is where reported and incremental ROAS tend to diverge most dramatically. Then your top-spending non-brand campaign group. Then YouTube view-through conversions, which platform reporting massively overstates. Save MMM and channel-level mix tests for last; they are slower and noisier.
Pitfalls that ruin tests
Three things sink most incrementality tests. Seasonality contamination: running a brand-pause across Black Friday is meaningless. Cross-channel spillover: if you cut Google search but Meta is showing the same audience, you are not isolating the variable. Insufficient duration: two-week tests almost always under-detect lift, because longer purchase cycles mean late conversions land outside the window. Plan for six weeks; settle for four.
What to do with the answer
Build an "incrementality multiplier" per channel: the ratio of true lift to platform-reported conversions. Apply it to your reported ROAS in the budget meeting. Update it quarterly. Now your numbers reflect reality, and your reallocation decisions will start moving the actual P&L instead of the dashboard.
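Mechanically, the multiplier is just the ratio of test-measured lift to platform-reported conversions, applied as a discount to reported ROAS. A minimal sketch, assuming you already have both numbers per channel from the tests above:

```python
def incremental_roas(reported_roas, reported_conversions, true_lift_conversions):
    """Discount platform-reported ROAS by the channel's incrementality multiplier.

    multiplier = conversions the test showed the channel actually drove,
    divided by conversions the platform claimed.
    """
    multiplier = true_lift_conversions / reported_conversions
    return reported_roas * multiplier
```

A channel reporting a 4.0 ROAS on 200 claimed conversions, where the geo test attributed only 50 of them to the ads, is really running at a 1.0 incremental ROAS; that is the number that belongs in the budget meeting.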
Working on something similar?
I work with B2B SaaS, FinTech and consumer brands across EMEA on performance marketing strategy, attribution and ABM. Always happy to compare notes; I have two client spots free this quarter if it goes further.
Get in touch →