TalkThree’s new Analytics Director, Michael, has had a sobering month. What he had hoped would be his first major contribution to the company has fallen flat. His team built a model intended to address a pressing challenge for TalkThree: a steady stream of departing cellular phone service customers, known as “churners.” The model predicts who is most likely to leave, and though Michael delivered it enthusiastically, it received an unexpectedly lukewarm reception from the customer retention team. The churn solution was a poorly conceived data product that never reached its audience in a way that worked for them.
Michael’s Tale, Continued (Read Part 1)
Michael has spent the last few weeks working with Lanny, the customer retention team leader, to cobble together a business process for using the model’s results. One Tuesday morning, Michael’s phone rings early. It’s Melissa, TalkThree’s CEO. “Good morning, Michael!”
“Hi Melissa. How are you this morning?” Michael replies nervously.
“I’m well. I have a special assignment for you this week. Lanny has enjoyed collaborating with you on the business process for your team’s model. Now, I want you to spend a day sitting with Tiffany Jones from the retention team and observing her workflow. I suspect it will be an eye-opening experience for both of you.”
Michael sighs deeply and asks Melissa if she is sure. How could this be worth an entire day?
Deep down, Michael knows he is simply afraid. Will he be exposed again? How inconsiderate and oblivious will he appear this time?
The next morning, Michael meets Tiffany at her cubicle.
“Hi, I’m Michael. I’m the—”
“Oh yeah, you’re that guy who came and talked to us last month. What’s that thing you do called? ‘Prenatal apparatus’?”
“I think you’re going for predictive analytics.”
“Well anyway, I hear we’re buddies for the day. Lanny dropped off this chair for you.”
“Thanks. Let me know if I get in your way. I see you’ve got some of the leads from our model there.”
“Yeah, where does this crap even come from? How can you know who’s gonna churn with just numbers? You must be a genius.”
“Well, not really. The computer does a lot of the hard work for us.”
“Well, see if you can get the computer to do something about this. I’m going to pick the first number from your list and show you what happens…”
Michael watches in amazement as a woman with a crying baby in the background answers Tiffany’s call and immediately uses the opportunity to cancel her service, despite Tiffany’s offer of a free month of service.
Tiffany sighs. “I knew that would happen, and so did your little computer. But our phone call just helped it happen. You can’t wake the sleeping dog like that, man. That doesn’t make me or the computer look good.”
“I’m sorry, Tiffany. I didn’t know this would happen. I’m going to work on this some more.”
“Yeah, I bet you will. Other people have had this happen too… personally, I think there are some of these people we shouldn’t be calling. But you’re the smart one. See what you and your computer friends come up with.”
The next morning, after poring over what he’d seen at Tiffany’s desk, Michael calls his team into the meeting room. They’ve just discovered a new conference paper about random forest algorithms and are busy discussing it via instant message from within the same room, but they reluctantly break away and join Michael.
“Yesterday I spent the day with Tiffany from the customer retention team. I watched the first customer she called from our list of predicted churners cancel on the spot, even when offered a free month of service. Tiffany even had a good name for this: the sleeping dog effect. I’ve been thinking about our situation with this model, and I think we need to question some of our assumptions about process.”
“Right now, the retention team is applying the same treatment to every customer our model flags as a likely churner. The implications of this hit me as I watched Tiffany work.”
“First, we don’t have a way of deciding whom to treat and whom to leave alone. There are two levels of information here: whether a customer is likely to churn, and whether they are likely to respond to treatment. We are only operating on the first level.”
“Second, we don’t have a way to measure how effective our model is compared to an alternate reality where we’re not applying our model. If we enjoy our jobs, it’s in our interest to be able to share with Melissa how effective our models are and affirm her decision to hire us. I think we made a mistake by not planning for a control group. What do you guys think?”
Leonard, the team’s Super Smash Bros. champion, speaks up. “Yeah, so Radcliffe and Surry wrote the book on this. They call what you’re describing ‘uplift modeling’. Their work critiques applying traditional ‘response modeling’ in contexts where it falls short, like customer persuasion. They recommend a control group, just like you said. They also recommend using something called an incremental gains chart to illustrate your model’s impact. There’s a lot of great material on this out there.”
“And when did this connection occur to you, Leonard?”
“Oh, just now. I was focusing on our modeling algorithms and the analytic base table for pretty much the whole project until this meeting.”
“Great. Well, I’m convinced this is a direction we need to follow. It’ll not only make our predictions more useful, but also give our team firmer justification for the role we play within TalkThree. Thanks, everyone.”
Later that afternoon, Michael smiles at Tiffany in the hallway and thanks her, too.
What Can We Learn
Again, this episode from Michael’s story is fiction, but we can learn from its characters and their decisions.
Here are some points to notice:
- Melissa gave Michael a huge gift by having him work side by side with front-line colleagues and learn about the customers directly.
- Michael’s model was successfully predicting churn. The team’s problem was a ripple effect of their technical success in predicting their target variable.
- Michael and Leonard didn’t discuss, or maybe didn’t yet realize, all the benefits of the uplift modeling approach. It will give them pure, untreated holdout data sets they can use for future modeling. They’ll also be able to monitor their model’s performance and discover if and when it begins to lose its power to predict. (A rough sketch of one common uplift approach follows this list.)
- Michael’s team has begun to learn to provide their internal clients not just with predictions, but with an action plan.
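To make Leonard’s suggestion concrete, here is a minimal sketch of one common way to implement uplift modeling, the so-called two-model approach, using scikit-learn. It assumes a historical campaign dataset with a randomized control group; the column names (`contacted`, `churned`), the feature list, and the choice of `GradientBoostingClassifier` are illustrative assumptions rather than TalkThree’s actual setup, and Radcliffe and Surry describe more refined techniques than this.

```python
# A minimal "two-model" uplift sketch (illustrative only): train one churn
# model on contacted customers and one on a randomized, untreated control
# group, then score customers on the DIFFERENCE in predicted churn risk.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier


def fit_uplift_models(history: pd.DataFrame, feature_cols):
    """history needs a 0/1 'contacted' flag and a 0/1 'churned' outcome."""
    treated = history[history["contacted"] == 1]
    control = history[history["contacted"] == 0]  # randomized, untreated group

    model_treated = GradientBoostingClassifier().fit(
        treated[feature_cols], treated["churned"])
    model_control = GradientBoostingClassifier().fit(
        control[feature_cols], control["churned"])
    return model_treated, model_control


def score_uplift(model_treated, model_control, customers, feature_cols):
    """Positive uplift: a call likely prevents churn (a persuadable).
    Negative uplift: a call likely triggers churn (a sleeping dog)."""
    p_churn_if_called = model_treated.predict_proba(customers[feature_cols])[:, 1]
    p_churn_if_left_alone = model_control.predict_proba(customers[feature_cols])[:, 1]
    return p_churn_if_left_alone - p_churn_if_called


# Usage sketch: hand the retention team only the customers a call is
# expected to help, and leave the likely sleeping dogs alone.
# uplift = score_uplift(model_treated, model_control, customers, feature_cols)
# call_list = customers.loc[uplift > 0]
```

An incremental gains chart then plots the cumulative incremental effect (treated minus control outcomes) as customers are contacted in descending order of predicted uplift and compares it with random targeting, which is exactly the kind of evidence Michael needs to show Melissa what the model actually contributes.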
And here are some questions analytics teams can ask themselves as they seek to help internal or external clients succeed with the power of data science:
- If our model is actually accurate in real life, what will its side effects be?
- How can we convey the power of our work using the language of our clients and stakeholders?
- How will our clients use our model? It’s not enough for them to know an event will likely happen; they need to have a plan so they can act.
- What will happen to our model if conditions change, reducing its effectiveness? How quickly will we know?
Even with technical success, analytics projects hold potential for pitfalls. How we manage and measure that success helps us serve our clients more effectively and convey our value to those who put their trust in the work we do.
Editor’s Note: This story is fiction, but it is based on experience with real clients. Any resemblance to people you know is coincidental.