
Personalization at scale: What's the tech again?

How do you go about finding and implementing the technology that allows you to personalize your interactions with thousands -- nay, millions -- of customers? And what are the differences between data, information, knowledge, and insight?


I'm in the midst of two blog posts that are taking far longer than I meant them to, as always, plus the launch of a big website, so I decided to do something that I have never really done: Provide you with an excerpt from my new book on customer engagement. I'm not promoting the book per se -- though to some degree that's a ludicrous statement to make, since I'm clearly telling you it's from my upcoming book, and that qualifies as promotion. But what I'm primarily attempting to do, given you are my audience, is show you something that I've been thinking about: A brief summation of the technology needed to personalize at scale. This is a short piece -- unlike my usual -- and is only meant to touch on the technology for personalization. But its value lies in simplifying some things: not just the technology but, and I hope that you get this, the difference between data, information, knowledge, and insight. It also describes the use of analytics.


Some housekeeping: I'm leaving the chapter references in the excerpt as they are in the book. This is an exact replica, so there will be some missing things; please don't ding me for them. The value here is in sorting out, at a high level, what technology is needed to drive the insights for personalization at scale. I'm not including the data architecture, which is also in the book. That's too deep for this post.

Oh, yeah, the book is named The Commonwealth of Self-Interest: Customer Engagement, Business Benefit. It's 330 pages of a pretty hefty discussion, in my usual inimitable style, on how to build a customer-engaged company -- from the frameworks to the operational core, to the strategy, to the programs, to the use of technology, to the metrics, and, especially, to defining the culture, which, by the way, differs from a merely customer-centric culture.

In any case, ladies and gents, the excerpt. Feel free to comment; I welcome it.


Personalization at Scale: The Technology

In an era where engagement is a key strategy and where customers expect you to provide them with what they need to do whatever it is they are doing with you, data and insight become incredibly important for providing a personalized experience. But beyond identifying the products, services, tools, and consumable experiences the customer needs to sculpt their own choice of engagement with the company, businesses are now in a position where they must anticipate customer behavior and then develop optimized offers or programs for those individual customers in real time, or close to it. While algorithms never determine those insights on their own, the data, analyzed and presented in the right way, is their basis. How can these insights be used?

The difficulties inherent in identifying an insight increase by magnitudes when two other factors come into play. Scale (the size of your customer base) and speed (the expectation that, more often than not, the action suggested by the insight will be taken live rather than later) add complexity to a situation that was already complex. As customers travel on their thousands and even millions of journeys, how do you find out what they are doing, interpret what you know about their activities, decide how you are going to respond, build the response, and communicate it -- all while the customers are still on those journeys? Equally importantly, how are you going to do this in a meaningful, individual way for thousands, maybe even millions, of people?

This is the dilemma of personalization at scale. Yet, because I know you've memorized every word you've read so far, you don't have to go back to Chapter 4 to be reminded of the benefits of this. Regardless, though, it is what you have to do and the technology is there to do it. Memorize that.

From Data to Information to Knowledge to Insight

Data is useless. No, I'm not kidding. Data is useless - until it is used in context. Think of it this way. How often has someone told you something and your response, if you weren't being nice, was "so what? Why'd you tell me that?" Whoever told you that, of course, thought they were telling you something of importance. But you had no idea why they told you that, right? The person who told you the "thing" had context - they had a reason to tell you. But you had no context - and thus didn't understand what (or why) you were being told.

What you received with that "thing" was a statement without context, so no meaningful action could be taken, and there was no way to truly understand what it meant. That is data. It becomes information when you give it context. It becomes knowledge when you define the information's value to you and its purpose. It becomes insight when you figure out how to use it effectively.

Think of it this way:

Data:

1. 17.5 ounces white flour, plus extra for dusting

2. 2 tsp. salt

3. ¼ ounce fast-action yeast

4. 3 tbsp. olive oil

5. 10.5 fluid ounces water

Information:

This is a recipe for making bread

Knowledge:

Here's how you make the bread

Insight:

Judging from the comments of those who ate the bread made with this recipe, this is a good-tasting bread that can be made even healthier, and taste equally good, with multigrain flour instead of white flour -- though it would then need 1 tbsp. more olive oil.
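To make the progression concrete, here's a toy sketch in Python of the same data-to-insight ladder. Everything in it (the variable names, the feedback, the decision rule) is a made-up illustration, not a real system.

```python
# A toy illustration of data -> information -> knowledge -> insight,
# using the bread example. All names and values are hypothetical.

# Data: raw values with no context.
data = ["17.5 oz white flour", "2 tsp salt", "0.25 oz yeast",
        "3 tbsp olive oil", "10.5 fl oz water"]

# Information: the same values, given context.
information = {"type": "recipe", "makes": "bread", "ingredients": data}

# Knowledge: information plus purpose -- how to actually use it.
knowledge = {**information,
             "steps": ["mix", "knead", "prove", "bake"]}

# Insight: what feedback tells you to do differently next time.
feedback = ["tasty", "tasty", "too plain"]
insight = ("swap in multigrain flour and add 1 tbsp olive oil"
           if feedback.count("too plain") > 0
           else "keep the recipe as is")

print(insight)
```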

Of course, what isn't easy is figuring out how to turn data into useful knowledge and ultimately gain some actionable insights. The technologies are there to parse the data and then run the algorithms that provide you with what you need to see to gain the insights -- as long as you are not overwhelmed by the amounts of data available to you. Scale can be frightening. But the approach to handling big data coming at you at high velocity is simple: take control of it. Follow these steps so that you can gain value from the data you have:

  1. Don't treat it as Big Data
  2. Remember that you want something from it;
  3. Decide what it is you're looking for;
  4. Develop a hypothesis;
  5. Decide on what specific information is going to be needed;
  6. Plan accordingly;
  7. Gather the information - which means find the data, organize it and build the reports that provide it to you in the form you need;
  8. Run the analytics on the data - the analytics can be descriptive, predictive, or prescriptive (more on that shortly);
  9. Use the knowledge you have to produce the insights you need;
  10. Apply those insights (e.g. an optimized offer to a particular customer).
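The steps above can be sketched, very roughly, in code. This is a minimal, hypothetical illustration of the hypothesis-driven approach (steps 3 through 10), not a real big-data pipeline; all the names and events are invented.

```python
# A minimal sketch of the hypothesis-driven steps above. The events and
# the decision rule are stand-ins, not a real analytics pipeline.

# Steps 3-4: decide what you're looking for and form a hypothesis.
hypothesis = "customers who abandon a cart twice are at risk of churning"

# Step 7: gather and organize only the data the hypothesis needs.
events = [
    {"customer": "a", "action": "cart_abandon"},
    {"customer": "a", "action": "cart_abandon"},
    {"customer": "b", "action": "purchase"},
]
abandons = {}
for e in events:
    if e["action"] == "cart_abandon":
        abandons[e["customer"]] = abandons.get(e["customer"], 0) + 1

# Steps 8-9: run a (descriptive) analysis and turn it into knowledge.
at_risk = [c for c, n in abandons.items() if n >= 2]

# Step 10: apply the insight, e.g. an optimized offer to those customers.
offers = {c: "10% retention offer" for c in at_risk}
print(offers)  # {'a': '10% retention offer'}
```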

Good Analysis Means No Therapy

Before we really get into analytics, I'm going to stop here and take a breath. I have to focus. No, not because I'm losing it, though I have been accused of that more than once, but because we're only going to be talking about customer-engagement-related analytics here, not analytics in general. I'm guessing you have a pretty good idea of what analytics are. If not, read the excellent Thomas Davenport article "Analytics 3.0" in the December 2013 issue of Harvard Business Review (https://hbr.org/2013/12/analytics-30). It's worth your time. Then read this chapter, though it's pretty self-explanatory.

Can Algorithms Show Your Love...if that's what you think it is?

Given the large number of data types and sources, and the difficulty of providing clarity, how do we use analytics to find the optimal customer engagement actions to take? What are we analyzing?

If you think about it, the science of customer engagement is the attempt to systematize how humans interact -- to create a methodology that makes the efforts, processes, and results repeatable and reusable so they can be applied to any size of endeavor. To make that happen, you need to recognize its greatest limitation: at best, the effort can reproduce a model of an approximation of how humans interact, and thus give you the basis for insights on how to anticipate future human behavior, either en masse or at an individual level. But that predicted behavior is, at best, a reasonable assumption, no more.

Why only, effectively, a good guess? First, because in real life each human interacts with other humans and institutions differently than every other human. That means, for example, that I interact with you differently than I interact with your brother, he interacts with me differently than you do, and you interact with him differently than you interact with me, even if the interactions are for the same purpose. What? Yeah, I know your brother. Moving on.

To add nuance: while there are an infinite number of possible interactions, each human has a set of constraints that limits how they can respond. They are constrained by their individual bandwidth, which, in much simpler terms, means no one human can respond to everyone they have the opportunity to interact with in a given time frame. So they have to decide who they are going to interact with and, given many considerations, which of the infinite number of possible ways they are going to use. That means, if the interaction is with your business, the customer has a reason they chose to interact with your business and not someone else's. They have a context for how they chose to interact with you. So a single interaction taken out of context might end up with the wrong response from you, because you didn't understand why they interacted that way.

Confused? Let's revisit an example from back in Chapter 6 - but come at it differently.

Remember the example of a purchase attempt made by you on Amazon in a normal frame of mind, contrasted with one made in an irritated state of mind due to a fight with a significant other? If you remember, the item in the cart was temporarily abandoned in the first scenario and permanently abandoned in the second. But in the two-hour window before the purchase was (or wasn't) completed, without any context there was no difference in the actual action taken -- leaving the item in the cart. The context (not irritated versus irritated) determined the final outcome, which was different in each case because the customer's behavior was dictated by their emotional state.

Now let's throw in a twist and add predictive analytics to the mix (along with social listening). Amazon would like to be able to anticipate the outcome of your actions based on past behavior, the behavior of people like you, and so on. If the only data they have is your transactional data, there isn't much they can really figure out, because the propensity to purchase that book is based on things like the similar books you've bought versus the books you've considered and abandoned. But that can't account for your bad day or your emotional state. So, as I said in Chapter 6, their only recourse is to fix the lag, since they have no idea why you were so upset -- and they should fix it even if they did know.

But imagine if you were "vocal" about that bad state of mind on social media, or even more specifically tweeted, "#Amazon lag time is driving me nuts - and so are all of you! #drivemecrazy #arrrggh" If they had that data and had incorporated it into what they were looking at, it might suggest a reason the cart was apparently abandoned -- and that it wasn't just the lag time that caused it. It gives the data some potential context, and thus it might give you a better indication of future behavior. Though decisions do have to be made about whether it is worth the effort to find that out -- or to just fix the lag.
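For illustration, here's a sketch of what incorporating that social data might look like. The keyword-based sentiment check is deliberately crude -- a real system would use a proper sentiment model -- and every name here is hypothetical.

```python
# A hedged sketch of adding social-listening context to a cart-abandon
# event. The keyword "sentiment" check is a crude stand-in for a real
# sentiment model; all names are hypothetical.

tweet = ("#Amazon lag time is driving me nuts - and so are all of you! "
         "#drivemecrazy #arrrggh")
NEGATIVE_WORDS = {"nuts", "crazy", "awful", "arrrggh"}

def crude_sentiment(text):
    words = set(text.lower().replace("!", "").replace("#", "").split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

event = {"customer": "you", "action": "cart_abandon"}
event["context"] = crude_sentiment(tweet)

# Without the context field, every abandon looks the same; with it, the
# "negative" flag suggests a bad day (or the lag), not lost interest.
print(event)
```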

As we established in Chapter 4, human beings are self-interested, so they expect a response that appeals to them as individuals, not as a group -- which means understanding both individual and segmented behavior. Even though the customer demands individual attention, you still have to understand their common behaviors and their similar (not necessarily identical) interests, which allows you to effectively create responses that appeal to a larger group at the level of the individual.

Predictive Analytics and The Best Next Analytics: Prescriptive

Applying analytics allows you to make the best interpretation of the data you have. If you are using predictive methodologies and algorithms, you can make a best guess -- often correctly -- as to how individual customers will behave in a specific situation, e.g. a marketing campaign. You can, using prescriptive analytics, decide what your best next action will be in, for example, a sales opportunity: use this presentation, or make this the offer, or speak to this influencer of that decision maker. Companies like Lattice Engines do that for sales and marketing, Thunderhead for customer journeys, and Pegasystems for marketing, sales, and customer service. As an example, PROS, a Houston, Texas-based technology company, optimizes pricing to produce anything from a quote for industrial equipment to dynamic prices for airline seats. PROS uses advanced algorithms that work in real time to look at demand, history, external conditions, airline industry comparisons, weather, and a whole variety of other factors that go into the price of your airline seat.

To provide optimal engagement, the analytics models have to take into account more than the transactional history. The model has to account for the known behaviors of the customers, the preferences and the tastes of those customers. The models have to account for the demographic and geographic differences that are reflected in the individual behavior.
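As a rough illustration of the prescriptive idea -- pick the best next action from a set of candidates -- here is a minimal sketch. The propensity scores and dollar values are invented; a real model would estimate them from the behavioral, demographic, and transactional data described above.

```python
# A minimal next-best-action sketch in the prescriptive spirit: score
# each candidate action by propensity * value and pick the best.
# The propensities and values are made-up illustrations.

def next_best_action(candidates):
    # Expected value = probability the customer responds * value if they do.
    return max(candidates, key=lambda a: a["propensity"] * a["value"])

candidates = [
    {"action": "discount_offer",  "propensity": 0.30, "value": 50},
    {"action": "demo_invitation", "propensity": 0.10, "value": 400},
    {"action": "newsletter",      "propensity": 0.60, "value": 5},
]

best = next_best_action(candidates)
print(best["action"])  # demo_invitation (0.10 * 400 = 40 beats 15 and 3)
```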

To give you a bit more of the technological picture, I'm going to give you a few examples of the kinds of models that are used for engagement analytics.

The Models

I'm not a data scientist, and I can't pretend to get into the math that goes into building these analytics models, but I can give you the types of models that are appropriate to developing queries around customer engagement. Let's look at three. One note: they are often used in tandem.

1. Clustering analysis - Cluster analysis is an appropriately named model designed to take survey samples and surface groups that organically have similar affinities. In principle, it's a very simple model: similarly answered surveys lead to the creation of a group populated by those who gave the similar answers. But it actually takes a lot of work to build the model, especially when the survey questions are focused on attitude or behavior, so criteria have to be established for what "similarity" is. The results are not the same as segmentation -- it's not segmentation by age, gender, location, or job status. The results of clustering lead to groups like those mentioned below in the Telenor case study: Sure Things, Persuadables, Lost Causes, and Sleeping Dogs. I'll leave it to you to read the case study to find out what they mean, but they are grouped by likely behaviors.

2. Propensity models - This is a commonly used model. You've probably heard of it most frequently as "propensity to buy." All in all, it means that you are developing models to identify customers who are likely to do or not do something: likely to churn, likely to respond to a particular offer, likely to do x if y, z, and ab are presented or occur. Typically, there is a score associated with the likelihood.

3. Collaborative filtering - The most likely way you've seen the results of collaborative filtering is via recommendation engines. Along those lines: "Because of your past transactions, your current web behavior, and the purchasing and viewing behaviors of others who have similar tastes and preferences to you, we recommend this first, this second, and this third." In the newer, narrower sense, collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of collaborative filtering is the same as the concept of a "person like me": given the similar tastes and preferences of person A and person B, the likelihood of their similarity on other things is greater than with a random other person.
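For the curious, here is a toy, from-scratch version of the clustering idea: a k-means-style grouping of survey answers encoded as numbers. It's an illustration of the construct, not production code, and the survey encoding is invented.

```python
# A toy k-means-style clustering of survey answers (each answer encoded
# 1-5 on two attitude questions). Illustrative only; real clustering
# would use a library such as scikit-learn.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Two organically similar groups should surface from these answers.
answers = [(1, 1), (1, 2), (2, 1), (5, 5), (4, 5), (5, 4)]
groups = kmeans(answers, k=2)
print(groups)
```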
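And here is a similarly small sketch of user-based collaborative filtering: find the most similar user by cosine similarity, then recommend items they rated that you haven't seen. The ratings and names are made up.

```python
# A small user-based collaborative filtering sketch. Ratings and user
# names are invented; real systems work on far larger, sparser data.
import math

ratings = {
    "ann": {"book_a": 5, "book_b": 4, "book_c": 1},
    "bob": {"book_a": 5, "book_b": 5},
    "cat": {"book_a": 1, "book_c": 5, "book_d": 4},
}

def cosine(u, v):
    # Similarity over the items both users rated.
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def recommend(user):
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    # Suggest items the nearest neighbor rated that the user hasn't seen.
    return [item for item in ratings[nearest] if item not in ratings[user]]

print(recommend("bob"))  # ['book_c']
```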

I'm not going to dwell on the models any further, but these are useful to see the constructs that drive the analytics.


Well, that's the excerpt. Thoughts? I'd like to get some feedback. This is about as technical as the book gets.

See ya.
