3 mistakes to avoid in your data analysis

Written by Becky Madden | May 29, 2024 10:23:02 PM

Whether you’re just starting out in marketing and stretching your data muscles, a marketing veteran, or a business leader, it can be easy to lose perspective when analysing data and delving down that rabbit hole.

Working with raw data is one thing, but gleaning meaningful, actionable insights from it is entirely another. For insights to be reliable, the data needs to be reviewed within, and compared against, the right contexts.

You might be working with the data yourself, or briefing into a team of data specialists. Either way, it’s important to be able to ask the right questions to ensure you’re working with and interpreting the data effectively.

Arm yourself with some of the basics by avoiding these three mistakes:

1. Focussing too much on last-click data

It can become quite addictive to look at which digital channels are bringing in conversions when we’re on the hook for lead or sales targets. However, by attributing ‘success’ to the channel that provided the final interaction before a customer filled in that form or made a purchase, we miss all the interactions, digital and offline, that led to that final decision point.

Analytics tools often have different attribution models built in, enabling comparison between last click, first click, time decay and so on, which can be helpful in starting to see the bigger picture. For example, we may discover that a social post click preceded the lead/purchase click on paid search. So while we would have attributed ‘success’ to paid search, we now know that social had a role to play in the decision process.
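
To make this concrete, here is a minimal sketch in Python with pandas, using entirely made-up conversion paths (the channel names and figures are illustrative, not from any real account), of how last-click and first-click credit can diverge over the same set of conversions:

    import pandas as pd

    # Hypothetical export of conversion paths: one row per converting
    # customer, with their ordered channel touchpoints before converting.
    paths = pd.DataFrame({
        "user_id": [1, 2, 3, 4],
        "touchpoints": [
            ["social", "paid_search"],
            ["paid_search"],
            ["display", "social", "paid_search"],
            ["email", "direct"],
        ],
    })

    # Last-click credits the final touchpoint; first-click credits the first.
    last_click = paths["touchpoints"].str[-1].value_counts().rename("last_click")
    first_click = paths["touchpoints"].str[0].value_counts().rename("first_click")

    comparison = pd.concat([last_click, first_click], axis=1).fillna(0).astype(int)
    print(comparison)

Channels like social may earn far more credit under first-click, revealing their role earlier in the decision process: exactly the bigger picture described above.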

More sophisticated attribution models can account for more variables, including impressions (i.e. what role digital display advertising played in the decision process), and econometric modelling can go even further, weighting in seasonal, economic and consumer factors.

For marketers or business leaders without the luxury of sophisticated modelling, however, it can be as simple as looking at a large enough sample of data to see trends (perhaps a full year), and reviewing last-click data in the context of your marketing plan. For example, March might have been a strong month for conversions from Paid Search. Jan-Mar happened to be when the marketing plan included a TV flight and a sponsorship deal with a brand that supported a new content campaign on socials. Can we therefore assume that Paid Search performance reflected increased brand awareness and consideration off the back of the larger campaign, and could ‘success’ be attributed to the wider campaign rather than Paid Search alone?

Possibly, but we’d need to ask a couple more questions first…

2. Failing to consider seasonality and comparative periods

Looking at marketing performance in a silo puts you at risk of drawing incorrect insights. Comparing similar data across different time periods helps add further context.

Above, we found Paid Search performed better than usual for conversions, and we think a broader campaign in market since January influenced those results. But first, look at previous years. Does March always produce stronger Paid Search performance? If so, this could suggest a seasonal trend, or some other consistent variable within the business that supports increased performance around March. Or was this an outlier, producing better results than we usually see in March from this channel? If so, then seasonality aside, perhaps the campaign did have an impact.
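
One way to check, sketched below in Python with pandas and invented numbers (swap in your own monthly export), is to compare this March against the March baseline from previous years:

    import pandas as pd

    # Hypothetical monthly Paid Search conversions for Feb-Apr, 2021-2024.
    df = pd.DataFrame({
        "year":        [2021, 2022, 2023, 2024] * 3,
        "month":       [2]*4 + [3]*4 + [4]*4,
        "conversions": [90, 95, 88, 92, 120, 135, 128, 210, 100, 104, 99, 101],
    })

    # Is March consistently stronger than its neighbouring months?
    print(df.groupby("month")["conversions"].mean())

    # Is this year's March an outlier versus previous Marches?
    march = df[df["month"] == 3].set_index("year")["conversions"]
    baseline = march.drop(2024)
    z = (march[2024] - baseline.mean()) / baseline.std()
    print(f"March 2024 sits {z:.1f} standard deviations above the 2021-23 mean")

A March that is always high points to seasonality; a March that sits several standard deviations above its own history points to something new, such as the campaign.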

To attempt to validate that hypothesis, we could look at the same period over previous years. Exploring the same data points, using the same data definitions as above, we set the time period to Jan-Mar in as many previous years as possible. Some years we may have had a similar media mix in market, perhaps with different messages/creative, while other years we may have had a different campaign structure in market, or nothing at all.

Categorise similar years together, i.e. those with similar media mixes, those with different media mixes, and those with entirely different campaign structures. Did the campaign in question behave differently from other similar campaigns? Did it produce more conversions, the same, or fewer? The answer helps us understand the role of the campaign versus the time of year in the results. If this campaign performed the same as, or even worse than, similar campaigns in previous years, then we can assume that it was not the campaign alone that helped Paid Search results improve.
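
A simple grouping makes this comparison concrete. The sketch below, again in Python with pandas, uses invented media-mix labels and conversion figures purely for illustration:

    import pandas as pd

    # Hypothetical Jan-Mar Paid Search conversions per year, tagged with
    # the media mix in market at the time (labels are illustrative).
    years = pd.DataFrame({
        "year":        [2020, 2021, 2022, 2023, 2024],
        "media_mix":   ["tv_social", "search_only", "tv_social", "none", "tv_social"],
        "conversions": [150, 110, 160, 95, 210],
    })

    # Compare Jan-Mar performance across years by campaign structure.
    print(years.groupby("media_mix")["conversions"].agg(["mean", "count"]))

    # How does this year compare with similar campaigns in previous years?
    similar = years[(years["media_mix"] == "tv_social") & (years["year"] != 2024)]
    this_year = years.loc[years["year"] == 2024, "conversions"].iloc[0]
    print(f"2024: {this_year} vs mean of similar years: {similar['conversions'].mean():.0f}")

If this year sits well above other years with the same media mix, the campaign structure alone is unlikely to explain the uplift; if it sits in line with them, the structure (or the time of year) probably does.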

We could also look at similar campaign structures at other times of year and compare those results, to help determine whether that campaign structure performs well regardless of the time of year.

Having done the above comparisons, we should now have a clearer view of the degree to which the Jan-Mar campaign and seasonality influenced the performance we saw in March.

However, there are other variables that we should consider before finalising our insights.

3. Forgetting to consider internal and external factors

Beyond what the marketing plan looked like (last-click attribution aside) and seasonality, were there any other variables that may help explain the performance in March? Failing to consider these could leave us with half-baked insights.

Some of the questions we might ask ourselves here are:

    • How did our category behave in March? I.e. was there an overall category increase that we also experienced, with more people in market requiring our product or service due to broader economic or consumer behaviour shifts? We can check search trends, our competitors, and buzz or mentions across socials to validate this (see the sketch after this list).
    • Were all other company/internal variables consistent? I.e. budgets were stable, we made no changes to the website that may have increased onsite conversion rates, and there were no major sales, product developments or changes in process that may have improved customer satisfaction, word of mouth or referrals. If there were any significant internal changes, have any similar updates been made historically whose uplift we could use as a comparison point?
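
For the first question, here is a minimal sketch, assuming you can export a monthly category-demand index (e.g. normalised search interest from a trends tool) alongside your own conversions; all figures are invented:

    import pandas as pd

    # Hypothetical monthly data: our conversions alongside a category
    # demand index (e.g. normalised search interest from a trends tool).
    df = pd.DataFrame({
        "month":           ["2024-01", "2024-02", "2024-03"],
        "our_conversions": [100, 105, 160],
        "category_index":  [50, 52, 78],
    })

    # Index both series to January so relative growth is comparable.
    for col in ["our_conversions", "category_index"]:
        df[col + "_rel"] = 100 * df[col] / df[col].iloc[0]

    print(df[["month", "our_conversions_rel", "category_index_rel"]])

If the category index rose roughly as much as our conversions did, some of March’s uplift likely reflects category-wide demand rather than our campaign alone.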

By this point, we’ve gone from last-click attribution of ‘success’ in one channel to a well-rounded view of performance overall: a place from which we can confidently glean insights, and therefore take actions and make optimisations that are far more likely to drive actual business success.

A final tip to ensure all the hard work is executed and communicated effectively

When finalising, presenting and discussing your findings, always be sure to explain what you’ve factored in and what assumptions and/or hypotheses you hold. This not only helps others appreciate the effort and depth you’ve gone to, it also adds transparency to your insights. Later on, if you learn something new, for example a new external factor comes to light, you and your peers will be able to more easily understand how it may impact your insights and pivot accordingly, without spinning around in the ‘but what does this mean?’ and ‘why didn’t we factor that in originally?’ whirlpools.

Happy data exploring, and building your way up to better marketing.