One of the buzzwords flying around our industry at the moment (despite being over 70 years old) is predictive analytics. So, what’s the big deal, and why are we only just beginning to give it the attention and appreciation it deserves if it’s older than most of us are now? Hold onto your top hats, do up your top buttons and strap yourselves in, as our very own ‘Jazalytics’ takes you on a wild ride to learn what this industry enigma is, how it works, and what it can bring to your business.
Effectively, predictive analytics has been around for as long as computing power itself, since the early 1940s. The most basic requirement of predictive analytics is to utilise historic data to inform forward-looking decisions (using past trends to predict the future). We can cast our minds back to Alan Turing and I. J. Good’s revolutionary work on the Enigma machine, decoding WWII messages, for early examples of this. These British code breakers provided us with some of the world’s first computation engines whilst simultaneously paving the way for predictive analytics, using existing military messages as a basis to develop an algorithm which would unveil common words and use these as the key to unpick entire strings of encoded prose. By the 1950s, computers such as ENIAC (Electronic Numerical Integrator and Computer) were able to generate models that used weather records to produce weather forecasts. Building on its application of predictive modelling to credit risk in the 1950s, FICO (an analytics software company) rolled out real-time analytics to counteract credit card fraud in the 1990s. From more recent memory, you may or may not know that Google’s original innovation was to develop its own algorithm to maximise the relevance of search results using past search patterns. At the same time in the sporting arena, the Oakland A’s shattered the baseball scene by using existing player performance statistics to predict the right team composition for great competitive benefit with a bit of wizardry, later dramatised in the book and film “Moneyball”.
Now, in the present day, we have big data at our fingertips, with cloud computing making the collection of data, and collaboration between data users, even simpler. We’ve evolved from the days of only being able to crunch numerical data, with natural language processing unveiling a previously untapped resource of unstructured data. And computing power continues to increase to a level the founders couldn’t have imagined, with server farms and high-speed processing ubiquitous throughout industry. Above all, the world possesses, more than ever, a highly skilled and literate workforce with the knowledge of how to leverage all this potential.
SAS, one of our industry’s main software providers for advanced analytics and business intelligence, summed up the definition of predictive analytics best:
“Predictive analytics is the use of data, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future”.
So, why has it taken the world so long to cotton on to this now booming trend? The seductive power of thinking that the best predictor of future behaviour is the ‘stated intention of future behaviour’ can sometimes lead us astray. Whilst every situation is unique, and it is important to explore the ‘rationale’ for decisions, we need to make that past behaviour data work harder for us, link consequence to cause, and extrapolate the effect this could have. So, we need to make a conscious effort to begin leveraging machine learning algorithms, statistical modelling, AI, and data mining to their full potential (particularly given the recent advent of GDPR, which restricts us from holding on to data for too long).
Let us dive deeper down the rabbit hole and inspect a few of the applications of predictive analytics that hold the most potential:
Predicting realistic market share:
- Regression analysis – this approach aims to ascertain correlations between certain variables in the data (usually in relation to purchase intent), in order to recognise which attributes or tendencies drive that intent and use them as predictors of the likelihood to purchase in future.
- Propensity models – used to make probability predictions about likely consumer behaviour. A very topline application of this is the Net Promoter Score, giving us a validated and reliable indication of consumer sentiment based on likelihood to recommend. At HRW, we take this a step further and use it in conjunction with our Early Share Estimation Technique (ESET™) as a calibration factor for overclaim of future prescribing intent. In addition, we map part-worth utilities from a discrete choice task onto shares of preference for variable model simulations based on the appeal of current clinical endpoint inputs. This is our Predict™ methodology, which accesses realistic future shares of preference based on the current product factors that drive or limit use.
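To make the regression idea above concrete (this is a generic illustration, not HRW’s proprietary ESET™ or Predict™ methodologies), here is a minimal sketch of a simple linear regression linking one hypothetical attribute rating to stated purchase intent, then predicting intent for a new respondent. The survey data is invented purely for illustration:

```python
# Minimal sketch: simple linear regression (ordinary least squares)
# relating a single attribute rating (e.g. perceived efficacy, 1-10)
# to stated purchase intent (1-10). Invented data, illustration only.

def fit_simple_regression(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var             # slope: intent change per unit of attribute
    a = mean_y - b * mean_x   # intercept
    return a, b

# Hypothetical survey responses: (attribute rating, purchase intent)
ratings = [3, 5, 6, 7, 8, 9]
intent = [2, 4, 5, 6, 7, 8]

a, b = fit_simple_regression(ratings, intent)
predicted = a + b * 7.5  # predicted intent for a respondent rating 7.5
```

In practice one would use many attributes at once (multiple or logistic regression) and calibrate for overclaim, but the core mechanism, fitting past responses to project future behaviour, is the same.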
Predicting what messages or data will be effective:
- Collaborative filtering – this is what you most commonly encounter with large online providers such as Amazon, Facebook, and Netflix. It leverages your past browsing behaviour patterns (those sneaky cookies) in order to recommend products or services to you (targeted advertisement).
- Cluster modelling – yes, that’s right: segmentation, ladies and gents. This is the application of statistical analysis to segregate your customer base into distinct groups based on their receptivity to different messages, allowing marketers to apply personalised messaging to segments. These groups can be emotional, attitudinal or behavioural, dependent on your business needs. At HRW we have developed an approach which harmonises all of these into a single model and also enables the model to be analyst-driven as opposed to purely data-driven: our Attitudinal segmentation™.
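The collaborative filtering idea above can be sketched in a few lines. Here is a toy item-based version using cosine similarity over an invented ratings matrix; real recommenders at Amazon or Netflix are vastly more sophisticated, but the principle, recommending items whose rating patterns match what a user already likes, is the same:

```python
# Minimal item-based collaborative filtering sketch (invented data).
# We recommend by finding the candidate item whose pattern of user
# ratings is most similar to an item the user already liked.
from math import sqrt

ratings = {  # user -> {item: rating}
    "ana":   {"film_a": 5, "film_b": 4, "film_c": 1},
    "ben":   {"film_a": 4, "film_b": 5, "film_d": 2},
    "carol": {"film_c": 5, "film_d": 4},
}

def item_vector(item):
    """Ratings for one item across all users (0 where unrated)."""
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(v, w):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(v, w))
    norm = sqrt(sum(a * a for a in v)) * sqrt(sum(b * b for b in w))
    return dot / norm if norm else 0.0

def most_similar(item, candidates):
    """Candidate whose rating pattern best matches `item`."""
    base = item_vector(item)
    return max(candidates, key=lambda c: cosine(base, item_vector(c)))

# A fan of film_a gets film_b recommended: the same users rated both highly.
recommendation = most_similar("film_a", ["film_b", "film_c", "film_d"])
```

The clustering bullet above works on the same raw material but in the opposite direction: rather than matching items to one user, it groups users with similar patterns into segments.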
Predicting how different words or tone will perform:
- Text analytics and natural language processing – text analytics has taken on new vigour with the advent of social media. This wealth of customer voice and opinion is invaluable to major corporations who want to get to know their audience better. It began with simple approaches, such as sentiment analysis, which tries to assess tonality and opinion in posts but is fairly limited and often misses sarcasm or oxymoronic speech. What’s exciting now are other techniques that build a far richer picture: topic modelling, where large bodies of text are condensed into dominant themes, or term frequency, which, as the name suggests, assesses how often a word appears along with its importance across a vast array of text. We can help companies further refine the results and associations of text analytics with techniques that identify the relationships between nouns in a string of text, providing additional context and meaning: assigning roles to entities, assigning subtypes, or linking to semantic data. Creating this understanding allows us to use existing patterns of customer voices to test concepts or words and ‘predict’ how a particular term or tone will resonate.
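To ground the term-frequency idea above, here is a minimal sketch of TF-IDF (term frequency–inverse document frequency), the classic weighting that balances how often a word appears in one text against how common it is across the whole collection. The ‘documents’ are invented snippets for illustration:

```python
# Minimal TF-IDF sketch over invented text snippets: words frequent in
# one document but rare across the collection score highest, so filler
# words like "the" score zero while distinctive terms stand out.
import math
from collections import Counter

docs = [
    "the new inhaler is easy to use",
    "the inhaler dose is hard to adjust",
    "pricing of the new device is fair",
]
tokenised = [d.split() for d in docs]

def tf_idf(term, doc_index):
    """TF-IDF weight of `term` within document `doc_index`."""
    counts = Counter(tokenised[doc_index])
    tf = counts[term] / len(tokenised[doc_index])      # term frequency
    n_containing = sum(1 for d in tokenised if term in d)
    idf = math.log(len(docs) / n_containing) if n_containing else 0.0
    return tf * idf

# "the" appears everywhere, so its weight is 0; "easy" is distinctive.
```

Production text analytics layers lemmatisation, phrase detection and topic models on top, but this weighting is the usual starting point for deciding which terms matter.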
As we use our past to predict our future, one thing is clear – the rapid expansion of predictive analytics from code breaking in the 1940s to 2018 marketing and market research tells us that we’re only just beginning. The critical mass of technology, methodologies, and brainpower is ripe for utilisation and we’re excited to see if we’re right.
To hear more about our predictive analytics capabilities or predictions, get in touch.
By Jaz Gill