Recently, Facebook has established incrementality as a key tool for understanding performance on the platform. Even though running a study is relatively easy, things can get confusing when it comes to deciding what to do with the newly found information.
At Precis, we’ve run many lift tests, and here we present some practical insights for those taking their first steps towards working with incrementality data.
After a period of data collection, you’ll find yourself looking at a percentage representing the conversion rate uplift of your account, campaign, or group of campaigns, along with two numbers indicating the additional conversions and revenue caused by your ads. These are calculated by comparing conversions completed by the test group against those of users in the control group.
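As a rough sketch of where those numbers come from, here is the underlying arithmetic with hypothetical group sizes and conversion counts (a real study reports these figures directly):

```python
# Hypothetical study results; a real lift study reports these figures directly.
test_users, control_users = 1_000_000, 1_000_000
test_conversions, control_conversions = 5_200, 4_000
avg_order_value = 50.0  # assumed average revenue per conversion

test_cr = test_conversions / test_users           # conversion rate, test group
control_cr = control_conversions / control_users  # conversion rate, control group

# Relative conversion rate uplift
uplift = (test_cr - control_cr) / control_cr

# Incremental conversions: test conversions beyond the control baseline,
# with the control rate scaled to the test group's size
incremental_conversions = test_conversions - control_cr * test_users
incremental_revenue = incremental_conversions * avg_order_value
```

With these made-up numbers the study would report a 30% uplift, 1,200 incremental conversions, and $60,000 in incremental revenue.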
The numbers are supposed to quantify the impact of your ads in influencing user intent, but what are we going to do with this information? What are the next steps?
Use the data to refine account strategy
The first action that stems from this test is usually the manual reallocation of budget. Facebook’s algorithm does not optimise towards incremental value by default, which means the newly uncovered information should be used to inform account strategy decisions. A simple multiplier applied to conversion data, using the outputs of the lift test, can be enough to get started.
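A minimal sketch of such a multiplier, assuming made-up spend and conversion figures (the $12 attributed and $60 incremental CPAs here echo the example discussed later in this post):

```python
# Assumed figures: platform-attributed conversions vs. study-measured incremental ones.
reported_conversions = 2_000   # conversions attributed by the platform
incremental_conversions = 400  # conversions the lift study attributes to the ads
spend = 24_000.0

# Share of attributed conversions that were actually incremental
incrementality_factor = incremental_conversions / reported_conversions  # 0.2

reported_cpa = spend / reported_conversions        # $12 attributed CPA
incremental_cpa = spend / incremental_conversions  # $60 incremental CPA

# Downweight attributed conversions when comparing audiences for budget decisions
adjusted_conversions = reported_conversions * incrementality_factor
```

Comparing the `incrementality_factor` across audiences (e.g. remarketing vs. prospecting) is what makes the budget reallocation concrete.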
This is especially useful for remarketing audiences, which are often the best performers in an account since their users have already demonstrated intent, or at least interest, in your product. Looking at incremental results here tells us how much of that performance would have happened independently of any interaction with ads. If your remarketing campaigns prove less relevant in influencing a user’s choice, you might want to reduce their budget and redistribute it towards prospecting audiences.
Similar actions can be taken by controlling bid targets. The objective here is to allow delivery only when a certain level of efficiency is reached.
Set a Cost Cap based on the incremental value that was measured. Define a target CPA for these audiences and limit delivery based on efficiency. Value-based objectives can follow the same logic when defining a minimum ROAS.
Use a similar strategy with Bid Caps: we know the audiences, we know the value, and we know how strict we should be with performance expectations.
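To illustrate both options, here is a hedged sketch that translates an incremental target into the attributed-conversion targets the platform actually bids against; all numbers are assumptions:

```python
# All values are assumptions for illustration.
incrementality_factor = 0.2    # incremental / attributed conversions, from the lift study
target_incremental_cpa = 30.0  # what you are willing to pay per incremental conversion

# The platform optimises against attributed conversions, so the Cost Cap
# must be expressed in attributed terms
cost_cap = target_incremental_cpa * incrementality_factor  # $6 per attributed conversion

# Same logic for value-based objectives: a minimum incremental ROAS of 3x
# requires a much higher attributed ROAS target
min_incremental_roas = 3.0
min_roas_target = min_incremental_roas / incrementality_factor  # 15x attributed
```

The key design point is direction: incremental CPA targets shrink into attributed cost caps, while incremental ROAS floors inflate into attributed ROAS targets.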
In both cases, remember to adjust the values for the different conversion windows used in the study and in the account. It might also take some time to find the perfect delivery, but knowing incremental value is a good starting point. At the same time, make sure to use data that is always available and dynamic when optimising day-to-day performance. Lift data may be closer to the truth, but it is collected at a single point in time and carries strong limitations.
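One rough way to adjust for differing conversion windows, assuming conversions accumulate proportionally across windows (a simplification, with made-up counts):

```python
# Assumed counts for the same campaign under the two windows.
study_window_conversions = 1_000    # e.g. the window used by the lift study
account_window_conversions = 1_250  # e.g. the wider window the account reports on

window_ratio = account_window_conversions / study_window_conversions  # 1.25

# Scale the study's incremental figure before comparing it with account reporting
incremental_in_study_window = 400
incremental_in_account_window = incremental_in_study_window * window_ratio
```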
What are the limits?
An incremental CPA of $60 where you usually see a $12 one can be pretty shocking. Should I use this data to analyse the campaigns’ performance from now on? Should I add $48 when looking at the CPA of my campaigns?
In all types of marketing there are innumerable factors influencing users’ behaviours, many of which cannot be measured (yes, even with digital marketing solutions). Understanding attribution means knowing that the CPA of $12 you were seeing in Facebook was never supposed to be the truth, but just a piece of a bigger puzzle that we can only get close to solving. Being able to measure a statistically significant incremental effect for a platform is a great thing; nevertheless, even these numbers should be neither overestimated nor underestimated, for multiple reasons:
How precisely the test and control groups can be defined and kept consistent depends strongly on the channel. In the case of Facebook, the fact that tracking is user-based rather than cookie-based allows for higher precision in excluding users from interactions across multiple devices and browsers. However, it won’t be 100% accurate: factors such as ITP and ad blockers on specific browsers can affect reliability.
Another limitation is cross-channel behaviour: users in both the control and the test group might see your ads, or very similar creatives, while browsing on other platforms, which significantly reduces the effectiveness of a test run on a single channel such as Facebook.
If the first limitation is quite difficult to solve, there are ways to deal with the second one. For example, a cross-channel conversion lift study might do the trick. This is usually built on geo-based holdouts, where users in the control group are excluded from seeing ads on multiple channels using zip codes. A Facebook team can help you set this up, so get in touch and give it a try!