If you’re a customer sat specialist reading this, you’re probably looking at the title incredulously, thinking, “the term ‘scoring’ doesn’t mean anything.” Well, that is partially true, but it’s not total clickbait. Bear with me through the “But how does it all work, man?” section and you’ll see why I chose that word. In a world of ever-bifurcating customer cohorts, we need to relentlessly push to quantify our measures for each cohort and each customer.
According to “Market-Based Management” by Roger J. Best, 90% of dissatisfied customers don’t complain, and of that cohort, only 22% are retained. If we’re able to identify why these customers are dissatisfied, we can begin to implement intervention measures to mitigate these losses. And silent complainers are just one of potentially millions (Literally. Think about Amazon’s cohort models, which span 310 million customers) of customer cohorts in a large organization. Designing and implementing quantitative customer sat metrics which span an organization’s constituency is a vague problem statement, but it can be attacked iteratively, taking little bites at a time.
The natural next step to attack is intervention tactics. Once you know which customers are unhappy (or which are happy, for that matter), what do you do next to increase their engagement with the company? Let’s keep in mind here that “next, we do nothing for this customer” is an acceptable action. Doing nothing costs nothing, and that option mustn’t be overlooked. Intervention often lowers margin to the point of insolvency, and it ain’t possible to make up negative margin in volume. I’ll use the term “scoring” here again; we must calculate scores for every intervention tactic, for every customer and every customer cohort, to make optimal business decisions.
Figure 1: dissatisfied customer intervention tactics analysis by deep learning system. The output subgraph shown is the satisfaction scored CIG with blue highlights on dissatisfied customers.
But how does it all work, man?
Get ready for the gory details. In this section I’ll outline the process used to perform the analysis I mentioned above. If you’re not into technical/computery/mathish things, feel free to jump to the end. The tl;dr here is that we build a customer interaction graph, perform deep learning on it, and route the information we’ve just predicted back into the customer graph for the customer sat team to analyze. Now that I’ve finished the disclaimer, let’s get into it.
To build customer satisfaction scores, we first need to represent the company’s customers in a meaningful way. The most beneficial way of doing this is by building the customer interaction graph (CIG). The CIG is a graph with all of the company’s customers represented as nodes, all of the company-customer interactions represented as nodes of a second type (with their own set of node properties), and timestamp- and score-weighted edges linking the customer nodes to the interaction nodes. There are also edges linking the customers together by some similarity measure, the calculation of which is out of the scope of this article. Suffice it to say that these similarity edges connect intra-cohort customers densely and inter-cohort customers sparsely.
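To make the structure concrete, here’s a minimal in-memory sketch of a CIG. A production system would live in a graph database; the node types, property names, and edge shapes below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class CIG:
    """Toy customer interaction graph: two node types, property-weighted edges."""
    nodes: dict = field(default_factory=dict)   # node_id -> {"type": ..., ...props}
    edges: list = field(default_factory=list)   # (src, dst, {"timestamp"/"score"/"similarity": ...})

    def add_customer(self, cid, **props):
        self.nodes[cid] = {"type": "customer", **props}

    def add_interaction(self, iid, **props):
        self.nodes[iid] = {"type": "interaction", **props}

    def link(self, src, dst, **props):
        self.edges.append((src, dst, props))

cig = CIG()
cig.add_customer("cust_1", satisfaction=None)   # unknown score: to be predicted later
cig.add_customer("cust_2", satisfaction=0.7)    # known score
cig.add_interaction("chat_42", channel="chat")
# Customer-interaction edge, weighted by timestamp and sentiment score:
cig.link("cust_1", "chat_42", timestamp="2020-03-01", score=0.8)
# Customer-customer similarity edge (similarity calculation out of scope here):
cig.link("cust_1", "cust_2", similarity=0.9)
```

The two node types share one node table in this sketch purely for brevity; the `type` property is what distinguishes them.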
In many cases we won’t know the edge property “score” a priori. This score can be built by running sentiment analysis on the record of each customer-company interaction. Sentiment analysis can be performed on text (online chat or email), voice (phone calls), or even video (video reviews of products). This enables sentiment scores to be automatically populated into the CIG.
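As a stand-in for a real sentiment model (in practice you’d use a trained text, speech, or video model), here’s a toy lexicon-based scorer that shows how a sentiment value ends up as an edge property. The word lists are made up for illustration.

```python
# Toy lexicon standing in for a trained sentiment model.
POS = {"great", "love", "helpful", "fast"}
NEG = {"broken", "terrible", "slow", "refund"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1] from sentiment word counts; 0.0 when neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Populate the CIG edge property from an interaction transcript:
edge_props = {"timestamp": "2020-03-01"}
edge_props["score"] = sentiment_score("The support rep was great and fast!")
```

A real pipeline would replace `sentiment_score` with a model call, but the plumbing, transcript in, edge property out, is the same.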
Figure 2: customer interaction sentiment scoring, where we propagate information about known satisfaction scores (through sentiment analysis) onto similar customers without company interaction data.
The information we’re interested in is some measure of customer satisfaction. This metric will be a property in all known customer nodes. Many of the customer nodes will have blank fields for their customer satisfaction property. These are the fields we aim to predict with our deep learning system.
The population (and sometimes propagation) of known customer sat metrics can be built in a huge variety of ways, based on what the company is interested in learning. For example, this metric could be a weighted average of the sentiment calculated across all of a customer’s interactions with the company. Alternatively, it could be a binary metric: 1 if the customer has already churned, 0 if they’re still with the company. Likely, a more sophisticated metric is required, and again, this metric will be calculated according to the company’s specifications and data holdings. The generality of this metric is why I’ve simply called it a “score.”
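The two example score definitions above can be sketched as functions. The weighting scheme and field names are illustrative assumptions; a real deployment would encode the company’s own definition.

```python
def weighted_sentiment_score(interactions):
    """Weighted average of per-interaction sentiment in [-1, 1].
    `interactions` is a list of (sentiment, weight) pairs, where the
    weight might encode recency, channel importance, etc."""
    total_w = sum(w for _, w in interactions)
    if total_w == 0:
        return None  # no interactions: leave the node property blank
    return sum(s * w for s, w in interactions) / total_w

def churn_score(customer):
    """Binary metric: 1 if the customer has already churned, else 0."""
    return 1 if customer.get("churned") else 0

# Recent chat weighted 2x over an older, negative interaction:
score = weighted_sentiment_score([(0.8, 2.0), (-0.4, 1.0)])
```

Note that `weighted_sentiment_score` returning `None` for interaction-free customers is exactly what produces the blank node properties the deep learning step will later fill in.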
The analysis is performed using a graph-based deep neural network. The training labels correspond to the known customer sat scores in the CIG. The unknown satisfaction labels are what the model predicts: these node properties are NaNs before prediction time, so they’re masked to zeros when fed into the network. Once the customer sat metrics have been predicted by the model, we project them back into the CIG as node properties.
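The masking idea can be illustrated without the deep net. Below, simple neighbor averaging stands in for the learned model (the real network learns edge weightings and non-linear transforms; this sketch does not), but the mechanics are the same: NaN properties enter the computation masked to zero, known labels stay clamped, and unknowns get filled in from graph structure.

```python
import math

def predict_missing(scores, neighbors, iters=20):
    """scores: node -> float or NaN; neighbors: node -> list of neighbor nodes.
    Iteratively fills NaN scores from neighbor averages (a label-propagation-
    flavored stand-in for the graph deep net)."""
    known = {n for n, s in scores.items() if not math.isnan(s)}
    # Mask: unknown scores enter the computation as zeros.
    cur = {n: (scores[n] if n in known else 0.0) for n in scores}
    for _ in range(iters):
        nxt = dict(cur)
        for n in scores:
            if n in known:
                continue  # known labels stay clamped to their training values
            nbrs = neighbors.get(n, [])
            if nbrs:
                nxt[n] = sum(cur[m] for m in nbrs) / len(nbrs)
        cur = nxt
    return cur

scores = {"a": 1.0, "b": 0.0, "c": float("nan")}
nbrs = {"c": ["a", "b"]}
predicted = predict_missing(scores, nbrs)  # "c" lands between its neighbors
```

The returned dictionary is what gets projected back into the CIG as the satisfaction node property.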
Note here that this analysis can likewise be performed on customer cohorts by building a customer supernode graph, where each supernode is built by a clustering algorithm. Similar customers are projected into supernodes according to their community membership as calculated by this clustering step, which could be any number of unsupervised learning algorithms, or a graph-native method like Louvain community detection.
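Given community labels from whichever clustering step you choose, collapsing the customer graph into a supernode graph is mechanical. A hedged sketch (community labels are assumed to already exist):

```python
from collections import defaultdict

def build_supernodes(community, similarity_edges):
    """community: customer -> cohort id; similarity_edges: (u, v, weight) tuples.
    Returns supernode membership and aggregated inter-cohort edge weights."""
    members = defaultdict(list)
    for cust, com in community.items():
        members[com].append(cust)
    super_edges = defaultdict(float)
    for u, v, w in similarity_edges:
        cu, cv = community[u], community[v]
        if cu != cv:  # intra-cohort edges are absorbed into the supernode itself
            super_edges[tuple(sorted((cu, cv)))] += w
    return dict(members), dict(super_edges)

members, sedges = build_supernodes(
    {"c1": 0, "c2": 0, "c3": 1},
    [("c1", "c2", 0.9), ("c2", "c3", 0.2)],
)
```

The resulting supernode graph has the same shape as the CIG, so the masking-and-prediction step above runs on it unchanged, just at cohort granularity.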
Buttoning Up The Use Case
Computers shouldn’t do it all. Once we’ve done our machine learning thing from the section above, we need to alert a human being to take action. To do this we’ll pipe the predictions we’ve made back into the CIG. There, our predictions will be highlighted as percentages or multiclass scores in a UI built for the customer sat team. Low satisfaction scores can be highlighted in garish red letters indicating that some intervention measure should be taken with this particular customer (or customer cohort).
Figure 3: A commercial real estate customer interaction graph with risk scores in the upper right hand corner.
I mentioned intervention tactics in the opening section. This blog post is getting quite long, so I’ll have to put Expero’s customer sat intervention technical discussion in another post, but I’ll summarize it here:
Using an optimization algorithm, we can build an intervention tactic ranking system which predicts the effectiveness of each intervention tactic. We then display this effectiveness prediction in the customer sat UI alongside each tactic’s total cost to the company.
All of this analysis provides a way for your customer satisfaction team to quickly evaluate all of your customers (or customer cohorts) in a robust dashboard. Because the customer interaction graph is displayed in this dashboard, it’s trivial for your team to get a full snapshot of each customer’s interactions with the company, and to decide which intervention measures to apply to which customers.