In this piece we’re going to address attribution models in social media. This will mostly be focussed on Facebook (and Instagram) as this is where the bulk of social advertising spend goes, but the principles are applicable to other networks like Twitter, YouTube or LinkedIn.
Facebook in particular, with its range of advertising formats, presents an interesting challenge when it comes to attribution. How do you compare the performance of a lower-funnel dynamic product ad with that of a longer video format?
Let’s start by setting out some principles that an attribution model should satisfy.
1. Give insight into performance and allow you to optimise ongoing campaigns
This is fairly straightforward - if an attribution model doesn’t give you answers that you can use to make a campaign better, then it’s not particularly useful. This doesn’t mean that you have to use the shortest attribution window possible - far from it. For a simple remarketing campaign, this may mean using short, click-only interactions, but for more brand-driven campaigns you will likely need to capture a wider view. Indeed, if you believe a campaign creates value over an extended window (captured in your attribution window) but the campaign itself runs for a shorter period, you are probably not running it long enough to extract full value from it.
You do, however, want to be able to use your view of performance to improve creative, messaging, targeting and so forth.
2. Allow for comparison of one campaign to others
The other part of media optimisation is ensuring that your media spend goes to the channels that are performing best. A large-scale campaign will typically have multiple executions, with spend initially assigned based on your judgement of what you expect to happen. As your customers interact with your executions and you acquire more real-world data, you will look to tweak this distribution, or even make wholesale changes if necessary. To do this you need to be able to compare one channel to another, and being as consistent as possible in your measurement and attribution models from channel to channel is a big help. It’s not absolutely necessary - there are ways to create equivalencies, as we’ll see below - but consistency removes a lot of unnecessary complication and eliminates one source of inaccuracy.
3. Be comprehensible enough to answer questions about media performance rather than raise more
The concept of attribution is pretty intuitive to most people who work in business: the value of an interaction may take time to materialise, effectiveness may increase as more advertising is consumed, and it may decrease as time passes since the last exposure. You run into difficulties, however, when the explanation of the underlying calculations becomes too complicated. The point of using longer attribution models than plain last click is to give a better, clearer view of the world - this is fundamentally undermined if that view is too complex and your ability to make decisions is hampered by a lack of trust. The most accurate measurement in the world is pointless if no-one believes it.
Attribution does two things - it takes measurable interactions and it assigns value to them based on calculations. For the more complex versions, like Google’s data-driven attribution, this involves statistical sampling methods to fit all observed interactions into a calculated model.
Most, however, will calculate value simply by extending or shrinking the period over which measurement can happen. Looking at Facebook, for example:
For e-commerce, Facebook has two main interaction types - views and clicks (setting aside minor engagements like likes). Views become more complicated once we start to consider video, but mostly Facebook is trying to show you two things: what happens after people click on your ad and what happens after people have seen your ad. Because Facebook’s tracking covers multiple devices as long as customers are logged in to one of its products, its coverage is very effective. This is in contrast to Google Analytics, which typically only covers click interactions. For this reason, Facebook’s view of value is generally a lot higher than what most teams around the business will be used to seeing.
Depending on how you have tagged up your website, Facebook can measure anything from site visits, product page views and adds to cart down to transactions and revenue.
Exactly what outcome you want to measure will depend on the goal of your campaign, whether it’s sign-ups, product sales or site visits. This applies equally to the interaction measurements. It’s generally better to err on the side of a longer view so as to capture a fuller picture of the value created, both for view-through and click-through. Facebook’s defaults are 28 days for click-through and 1 day for view-through, which seems like a hedge on their part against appearing to overstate the value created by advertising.
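To make the window logic concrete, here is a minimal sketch of how separate click-through and view-through windows decide whether a conversion counts. The function name and the data shapes are hypothetical, not Facebook’s API - it just assumes the default 28-day click and 1-day view windows mentioned above.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of attribution-window logic, assuming the
# default windows discussed above: 28-day click-through, 1-day
# view-through. Not an actual Facebook API.
CLICK_WINDOW = timedelta(days=28)
VIEW_WINDOW = timedelta(days=1)

def is_attributable(interaction_type, interaction_time, conversion_time):
    """Return True if the conversion falls inside the window for the
    given interaction type ('click' or 'view')."""
    window = CLICK_WINDOW if interaction_type == "click" else VIEW_WINDOW
    elapsed = conversion_time - interaction_time
    return timedelta(0) <= elapsed <= window

# A view 3 days before conversion falls outside the 1-day window,
# but a click 3 days before conversion sits inside the 28-day window.
conv = datetime(2019, 6, 30)
print(is_attributable("view", datetime(2019, 6, 27), conv))   # False
print(is_attributable("click", datetime(2019, 6, 27), conv))  # True
```

The same conversion can therefore count or not count depending purely on which window you choose, which is why the window choice matters so much for the headline numbers.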
Facebook being able to capture every conversion is very useful, but it does create a blind spot - the measurement has no awareness of other channels. As such, when you look at outcomes in the Facebook ad interface, it will assign all possible value to interactions with a Facebook ad. Facebook, of course, is not alone in this - in fact, if you add up all of the value created according to the internal attribution of all of your channels, you will get a number much higher than your actual income.
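The over-claiming effect is easy to see with made-up numbers - each platform takes full credit for any conversion it touched, so the per-channel figures sum to more than the business actually earned:

```python
# Illustration with entirely made-up figures: each platform's internal
# attribution claims full credit for conversions it touched, so summing
# per-channel reported revenue overstates actual income.
reported_by_channel = {
    "facebook": 50_000,
    "adwords": 40_000,
    "email": 25_000,
}
actual_revenue = 80_000  # what finance actually sees

claimed = sum(reported_by_channel.values())
print(claimed)                   # 115000 - more than was really earned
print(claimed / actual_revenue)  # ~1.44x over-claim
```

The gap between the claimed total and actual revenue is exactly what the weighting work described next is trying to close.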
While using in-platform reporting is fine for ongoing optimisation, you need to do some work weighting these values for your own internal reporting. Some platforms or bid management software, such as Oracle Analytics Cloud or Marin, will do this for you, but these only really make sense financially if you are investing a lot in media. Fortunately, there are simpler tools you can use internally.
What we are really trying to measure is how much extra value each channel creates. Recently Facebook (and AdWords for YouTube and Display) has improved access to a key tool in this regard - the holdout test. Previously you had to spend upwards of £250,000 in a month and have the ear of the right account manager to use this, but now you can run one on campaigns with as few as 100 conversion events (you get to choose three variations).
This works by holding your ads back from a randomised proportion of your chosen audience and calculating the difference in value between those who saw the ad and those who didn’t. This gives you a conversion factor that lets you scale the numbers shown in the interface to the actual value being created. For example, if a campaign set to 28-day view-through and click-through windows is showing £1000 in revenue in Facebook, but previous tests have shown an uplift worth 10% of the reported figure, you can scale this down to £100 in revenue.
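The scaling step above can be sketched in a few lines. The function names are hypothetical, and the uplift formula shown - incremental value per user divided by exposed-group value per user - is one common way of deriving the factor; adapt it to however your own tests report results.

```python
# Hedged sketch of holdout-based scaling. Function names are made up;
# the uplift formula is one common definition: the share of exposed-group
# value that the holdout comparison shows to be genuinely incremental.

def uplift_factor(exposed_value_per_user, holdout_value_per_user):
    """(exposed - holdout) / exposed, computed per user so that the
    different sizes of the two groups don't distort the comparison."""
    return (exposed_value_per_user - holdout_value_per_user) / exposed_value_per_user

def scale_reported_revenue(reported, factor):
    """Scale the in-platform figure down to estimated incremental value."""
    return reported * factor

# Worked example matching the text: Facebook reports £1000, previous
# tests put the incremental uplift at 10% of the reported figure.
factor = 0.10
print(scale_reported_revenue(1000, factor))  # 100.0
```

In practice you would compute the factor from several past tests (per the next paragraph) rather than a single run.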
It’s worth remembering that both the holdout value and the value reported in the main interface will increase over time, so it’s worth running multiple tests to establish a ratio you are comfortable with and then using that going forward. Reading a holdout test on a live campaign before it has finished will often not give you an accurate view of performance, so it’s better to use historical data in this way.
This type of tactical modelling and measurement is key for all of the practical uses set out at the beginning, but it’s also important to perform periodic checks to ensure that what you are using is helping. One very simple way of doing this is to ask yourself whether what your models are telling you lines up with what you’re seeing from business trading as a whole. For instance, if everything is telling you that you are becoming more efficient / creating more value but revenue growth is static overall, then there is a gap somewhere that needs to be investigated - it may mean that your choices in what you are optimising towards are incorrect.
A more rigorous way is to use econometrics or marketing mix modelling - essentially, a set of tools that lets you run a regression analysis across all of your channels to build a view of which channels, and which mix, are most effective. This can be expensive and time-consuming and is only really worth doing if you are investing heavily in multiple channels.
Be as consistent as possible in the methods you use across channels, but acknowledge the differences where they matter (eg, Facebook has a 28-day window but AdWords 30, which probably won’t make a material difference)
Use testing to create an internal working model that puts all channels on a level playing field
Set campaign KPIs by channel and use channel platforms for in-flight optimisation
Use macro testing to check your view of performance