We’re told that “all advertising is good advertising”, but common sense tells us that this just isn’t true. For those of us working in the world of marketing and market research, we constantly strive to understand how customers react to product communications, knowing that advertising has the power to persuade, but also the power to dissuade, or even to damage a brand’s reputation.
Right at the start of a research project, when we’re first exposed to the concept ideas, I’m already thinking about which might be the winner and why. Sometimes I’m right, other times not, because it’s simply not possible to select a winning concept route without the input of potential customers. One of the reasons for this is that success is not just about assessing how appealing a concept is. A successful concept needs to have standout and a clear, resonant message, and critically, it needs to stick in the minds of your target audience.
But there’s a problem. Even though most of us are aware that we need to move beyond assessing concept appeal, many of the alternative metrics we use (such as motivation to prescribe, or perceived credibility) essentially assess the same thing. When a concept is liked, it scores well across all metrics, and respondents cannot even be aware of this halo of product liking. For example, if you like something, you assume it will be more memorable, and whilst this is sometimes true, there is no guarantee. Indeed, more disruptive ideas often have greater long-term resonance simply because they challenge existing thinking. The Government-produced AIDS awareness ‘tombstone’ advert is a clear example of a memorable yet disruptive approach.
What then is the solution? Is there a way we can measure concept impact more realistically? How can we be certain that we’re not just measuring appeal in a different guise?
At HRW we utilise the Sticky Ideas framework developed by Chip and Dan Heath in their book ‘Made to Stick: Why Some Ideas Survive and Others Die’. The framework is based on six principles that an idea needs to deliver in order to have impact. These are: Simple, Unexpected, Concrete, Credible, Emotional, and Stories.
The challenge here is to ensure that we have accurate metrics to assess each of these principles, and that’s why HRW have embarked on a quantitative self-funded study to develop and fine-tune the questions we ask to best understand ‘stickiness’.
The first stage of our research has just been completed and we’re in the process of analysing the results. Early findings are proving very interesting and reinforce how important it is to move away from standard measures of appeal towards a more holistic view of concept assessment. At an overall level, we found that whilst neither of the two concepts we tested was significantly preferred, there were clear reasons to prioritise one concept over the other when considering the sticky metrics. In addition, we’ve identified that certain question wording and particular question types provide us with more powerful and discriminating metrics.
There’s a lot more that we’ve learnt through this research, and with ongoing analysis and further studies planned, we’ll be sharing it over the coming months. For now, I’ll leave you with some emerging insights, hypotheses and questions which I hope will demonstrate the importance of challenging the way we think, perhaps not just about concept research, but also about the precise way we frame and administer questions to gather more realistic insight.
- Subtle wording changes produce different responses. Asking whether something is ‘unexpected’ is different from asking whether it is ‘surprising’
- Once respondents have seen an idea it’s difficult for them to pinpoint any areas of newness. A well-received idea may feel comfortable and intuitive and therefore any surprise may be quickly forgotten. Asking respondents to think first about what they are expecting to see allows them to better consider whether an idea is really new or different
- More traditional question formats provide reliable responses, but fast associations provide amplified reactions. Keeping in mind that advertising needs to be absorbed ‘in a glance’, our faster gut reactions and implicit emotions may be a better indicator than more logical and considered answers
- Stated memorability can be flawed. Anecdotal responses showed that actual memorability wasn’t necessarily aligned with performance on this metric
- The extent to which emotions are evoked when viewing concepts is difficult to capture. We found that asking respondents about other people’s reactions provided more discriminating data
I’m looking forward to sharing more insight on this topic as it emerges, so watch this space.
By Nicola Vyas