As a researcher, you’ll probably be familiar with the halo effect: a cognitive bias whereby positive impressions of a brand in one area positively influence our opinion of other elements of that brand.  And yet, when conducting concept testing research, we are all guilty of asking a range of very similar questions, which ultimately measure little more than liking or appeal.

Indeed, we expend significant energy trying to craft the ideal question, endlessly wordsmithing in an attempt to capture the extent to which a visual concept will influence behaviour, forgetting that, whilst advertising clearly influences behaviour, it does so at a level of which most of us are unconscious.  Asking someone how motivated they are to prescribe a medication based on a visual concept alone can therefore only ever provide part of the answer.

HRW therefore embarked upon a programme of self-funded research to identify better ways of assessing concept performance.  Specifically, we wanted to develop an approach founded in the principles of behavioural science and psychological thinking; an approach which takes account of concept appeal, but which moves beyond mere liking to focus on aspects of a communication proven to influence perceptions.

Sticky ideas
In 2007, Chip and Dan Heath developed a framework they refer to as ‘Sticky Ideas’, codifying six key principles required to make an idea, message or communication ‘sticky’.  They provide numerous examples of how these principles work in practice, and this has enabled us to develop metrics which better capture the aspects of a communication likely to drive future behaviour.

Through two phases of self-funded research, we have been able to develop several quantitative metrics aligned to the concept of sticky ideas.  Ideas which are:

  • Simple
  • Unexpected
  • Concrete
  • Credible
  • Emotional
  • (And tell a) Story
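To make the idea of moving beyond a single ‘liking’ score concrete, a concept’s profile across the six criteria can be represented and compared programmatically. The sketch below is purely illustrative, with hypothetical 1–5 ratings and field names of our own invention; it is not HRW’s actual scoring model:

```python
# Illustrative sketch only: hypothetical ratings, not HRW's scoring model.
STICKY_METRICS = ["simple", "unexpected", "concrete", "credible", "emotional", "story"]

def profile_concept(name, ratings):
    """Return the concept's strongest and weakest sticky dimensions."""
    missing = set(STICKY_METRICS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    strongest = max(STICKY_METRICS, key=ratings.get)
    weakest = min(STICKY_METRICS, key=ratings.get)
    return {"concept": name, "strongest": strongest, "weakest": weakest}

# Hypothetical scores for a single tested concept:
lung = {"simple": 4.2, "unexpected": 2.1, "concrete": 3.8,
        "credible": 4.0, "emotional": 3.5, "story": 3.0}
print(profile_concept("Lung Campaign", lung))
# → {'concept': 'Lung Campaign', 'strongest': 'simple', 'weakest': 'unexpected'}
```

The point of such a profile is that a concept can score highly overall yet still be flagged as weak on one specific dimension, which a single appeal score would hide.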

These metrics were developed with input from our behavioural science team, and in our first phase of research we conducted deep-dive statistical analysis to understand whether they measured something above and beyond liking.  Mapping techniques allowed us to visualise the different metrics and clearly identify the territory owned by each.  Whilst there is some natural overlap between certain metrics, it was clear that this expanded list of performance indicators provided a much more holistic assessment of how well concepts perform and, specifically, where they need to improve.

Anti-smoking campaign assessment
In the second phase of research, we were keen to assess healthcare related concepts and chose three different executions of anti-smoking campaigns to test amongst smokers.  Here, we were aiming to fine-tune our approach and to understand how the three very different campaigns resonated with smokers.

The three concepts tested were:

  • Lung Campaign
  • Cigarette Campaign
  • Spider Campaign

Just for fun, we also reached out to researchers within our business to see whether they could identify the ‘winning’ concept; their choices were consistent with the results of our study.

In the study with smokers, we found that the preferred concept was the lung campaign.  However, enhanced analysis of the sticky metrics showed that this supposedly ‘winning’ concept failed to deliver on core aspects. For example, the campaign fell short on the ‘unexpected’ metric, an essential component for capturing attention in the first place.  Furthermore, the apparent positivity communicated via the visual representation of the healthy lung drove much of the appeal, but behavioural analysis indicates that this lack of challenge may actually result in limited behaviour change.

Conversely, the sticky metrics helped us to understand the impact of the less preferred concepts, which delivered well against certain sticky criteria: generating more of a story, being more unexpected or provoking more of an emotional response.  Overall, it was clear that each execution had its own strengths and weaknesses, and that preference alone wasn’t enough to measure likely success.

To elevate findings further, we also used a chatbot within the survey.  The chatbot is a conversational AI companion that prompts respondents for more information based on the answers they provide in the survey, adding greater depth to the insight gathered and allowing us to further validate and explain the results we observe.

By tracking common words in conversation tags, we found that the lung campaign was hopeful, encouraging and had the greatest positive sentiment, supporting the finding that appeal was driven by the positivity associated with the campaign.  We also saw that the cigarette campaign, which rated most highly on the emotional metric, also had the greatest negative sentiment, generating a wide lexicon of terminology and reinforcing how strongly the concept resonated, even if the prevailing feeling was negative.  At face value, the cigarette campaign might be rejected as a potential route, when in fact it still has the potential to create impact through discomfort.
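The kind of word tracking described above can be sketched as a simple lexicon-based tally over free-text responses. This is a minimal illustration with tiny hand-made word lists, not the sentiment model actually used in the study:

```python
from collections import Counter

# Illustrative sketch only: tiny hand-made lexicons, not the study's model.
POSITIVE = {"hopeful", "encouraging", "healthy", "positive"}
NEGATIVE = {"disgusting", "scary", "grim", "uncomfortable"}

def sentiment_counts(responses):
    """Tally positive and negative lexicon hits across free-text answers."""
    words = Counter(w.strip(".,!").lower()
                    for text in responses for w in text.split())
    pos = sum(n for w, n in words.items() if w in POSITIVE)
    neg = sum(n for w, n in words.items() if w in NEGATIVE)
    return {"positive": pos, "negative": neg}

# Hypothetical chatbot follow-up answers:
answers = ["It felt hopeful and encouraging.", "The healthy lung is positive."]
print(sentiment_counts(answers))  # → {'positive': 4, 'negative': 0}
```

In practice a production sentiment pipeline would use a much larger lexicon or a trained model, but the principle of surfacing the prevailing vocabulary around each concept is the same.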


Based on our self-funded research programme, we firmly believe that concept testing needs to change.  The sticky metrics we have developed provide a solid framework for assessing concept performance, allowing us to review concepts more holistically and, most importantly, against criteria which are known to shift behaviour.

If you’d like to learn more about sticky metrics, please get in touch: N.Vyas@hrwhealthcare.com or M.Knowles@hrwhealthcare.com.

By Nicola Vyas and Madeleine Knowles


