Predicting Gratefulness: Machine Learning for Human Behavior

Author: Will Goodrum

Date Published: August 4, 2017

Machine learning has many strengths. Predictive models can synthesize information from millions of disparate cases and identify patterns that would otherwise pass undetected. These patterns yield insights inferred from the data that can surpass human judgment. The potential value of predicting human behaviors before they happen is exciting to businesses and government agencies. Imagine having the foresight to know which of your customers are likely to churn, which of your providers have a high likelihood of making fraudulent claims, or which of your patients are grateful enough to donate to your hospital foundation.

For all its strengths, predictive analytics has weaknesses (blind spots, really) that intersect with the reality of our human experience. A machine learning algorithm only knows about the data it has seen. Human behavior may have deeply personal motivations that remain hidden from collected data, and those underlying motivations may be biased or simply unpredictable. For example, a charity may know that a prospect has the potential to make a major contribution, but not know the appropriate time to approach them about it. Is it possible to overcome these shortcomings and make actionable predictions for unpredictable human beings?

The Human Problems of Machine Learning

The output of many predictive models is a probability that an event will occur, i.e., the propensity that an actor will take a given action. Maybe the action is committing fraud, or submitting a medical claim, or making a 3-pointer. The more likely an outcome is, the more confident decision makers feel in selecting a given course of action. That is the power and attraction of machine learning.
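To make that concrete, here is a minimal sketch in Python of a propensity model that turns historical cases into a probability score for new ones. The column names, values, and choice of logistic regression are all hypothetical illustrations, not the modeling approach used in any particular engagement.

```python
# Minimal propensity-model sketch: fit a classifier on historical cases,
# then score new cases with a probability that the outcome will occur.
# All column names and values here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "tenure_months": [3, 48, 12, 60, 24, 6, 36, 18],
    "num_visits":    [0,  5,  1,  7,  2, 0,  4,  1],
    "took_action":   [0,  1,  0,  1,  1, 0,  1,  0],   # 1 = event occurred
})

model = LogisticRegression().fit(
    history[["tenure_months", "num_visits"]], history["took_action"]
)

new_cases = pd.DataFrame({"tenure_months": [9, 40], "num_visits": [1, 6]})

# predict_proba returns P(event) for each case -- the propensity score
# that decision makers can rank and act on.
propensity = model.predict_proba(new_cases)[:, 1]
print(propensity)
```

Ranking cases by that score is what lets an organization focus attention on the most promising (or riskiest) individuals first.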

There are many things about the world that are not explicitly represented in the data, so analytic models may always contain bias. Humans impart this bias to our data and models, whether intentionally or unintentionally. Daniel Kahneman calls this mistaken belief in the completeness of our information “WYSIATI” (What You See Is All There Is). Christensen et al. highlighted a related limitation of algorithmic methods in a recent Harvard Business Review article. They pointed out that although the author is 6 feet 8 inches tall, is 64 years old, and drives a Honda minivan, none of this information explains why he purchased the New York Times; it only correlates with his decision to buy it.

As much as machine learning algorithms can identify the propensity for someone to do something, they say little about causality—why someone acts. Sometimes we need to apply novel and innovative solutions. For example, Elder Research data scientists combined elements of psychological diagnostics with data analysis to predict the likelihood that terror suspects may become radicalized.

Machine learning can also predict the conditions under which, or the likelihood that, an individual will take an action given the right trigger. Recently, Elder Research developed a propensity model for the philanthropic foundation of a major medical center. The client was seeking to increase donations by identifying prospective donors within the recent patient population who have a high propensity to “begin a philanthropic relationship” with the medical center. A key indicator of giving propensity is gratefulness, inspired by a positive health outcome and/or treatment experience.

Whether on the positive or negative side of the spectrum of human emotions, an emotional trigger is almost always masked—it is too personal. How can an algorithm predict the treatment experience that saves a life and produces lasting gratitude? How can we apply predictive analytics in the “unpredictable” realm of human emotion?

Emotional Triggers: A Data Quality Problem?

Think of emotional triggers as special “data quality” problems: shortcomings or biases in the available data that present either an incomplete or skewed picture of reality. Two specific issues emerge with respect to the quality of data on emotional triggers:


The emotional triggers are likely not labeled in the data

As noted in the Christensen article cited above, aggregating third-party data available on customers will not reveal when or why a customer decides to make a purchase, or what nudged them in that direction. We know that they purchased, but we do not know the cause.

The emotional triggers may not occur in the time window of interest

Sometimes the time scales for an outcome of interest (e.g., philanthropic giving) are long and have high variance. For example, a charity wants to predict whether prospects will make donations in the next twelve months. This seems straightforward enough, but what if a person decides to donate based on an event that took place more than twelve months ago? This trigger will not be available in the selected window.
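To picture the windowing issue, here is a small sketch in Python with pandas. The table, dates, and field names are hypothetical; the point is only that a label built over a fixed twelve-month window never sees a trigger that occurred before the window opened.

```python
# Sketch of building a 12-month outcome label around an "as-of" date.
# A trigger (e.g., a memorable treatment experience) that happened before
# the window opens is simply absent from the label and the features.
import pandas as pd

as_of = pd.Timestamp("2017-01-01")
window_end = as_of + pd.DateOffset(months=12)

events = pd.DataFrame({
    "person_id":  [1, 2, 3],
    "trigger_dt": ["2014-06-15", "2017-02-01", "2017-03-20"],  # hypothetical
    "gift_dt":    ["2017-05-01", "2017-04-10", pd.NaT],
})
events[["trigger_dt", "gift_dt"]] = events[["trigger_dt", "gift_dt"]].apply(pd.to_datetime)

# Outcome label: did the person give within the 12-month window?
events["gave_in_window"] = events["gift_dt"].between(as_of, window_end)

# Was the trigger visible inside the same window? For person 1 it was not,
# even though that old trigger is what actually drove the 2017 gift.
events["trigger_in_window"] = events["trigger_dt"].between(as_of, window_end)
print(events[["person_id", "gave_in_window", "trigger_in_window"]])
```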


While we may never predict with certainty (or validate) a specific trigger, there are techniques available that can increase confidence in the predicted outcome. Tried-and-true practices like maintaining strict data governance and collecting data that relates to the problem of interest can make a significant difference. Predicting the trigger may not even be necessary. If a proxy for the trigger exists in the data, then this may be enough to create a useful predictive model. Alternatively, it may suffice to predict whether favorable conditions for a triggered event exist.
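One way to picture the proxy idea is sketched below in Python with pandas. The field names and the particular proxy (a recorded first interaction with the organization) are hypothetical; the sketch simply shows deriving a modeling target from something observable that stands in for the hidden trigger.

```python
# Sketch: the trigger itself (gratitude) is never recorded, so derive the
# modeling target from an observable proxy -- here, whether a first
# recorded interaction with the organization exists. Field names are
# hypothetical.
import pandas as pd

patients = pd.DataFrame({
    "patient_id":           [101, 102, 103, 104],
    "first_interaction_dt": ["2016-09-12", None, "2017-01-30", None],
    "prior_giving_flag":    [1, 0, 0, 1],          # from third-party data
    "length_of_stay_days":  [4, 1, 12, 2],
})

# Proxy target: 1 if an observable "beginning of a relationship" event
# exists, 0 otherwise. This stands in for the hidden emotional trigger.
patients["began_relationship"] = patients["first_interaction_dt"].notna().astype(int)

# The proxy target can now be modeled against features that plausibly
# relate to the unobserved trigger, such as treatment intensity or
# existing affinity signals.
print(patients[["patient_id", "prior_giving_flag",
                "length_of_stay_days", "began_relationship"]])
```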

Case Study: Predicting Gratefulness

Our client faced several subtle issues in using analytics to tackle the problem of predicting gratefulness for receiving medical treatment. Treatment experiences and outcomes are strongly individualized. While many patients see the same doctor and have the same outcomes, those commonalities alone are not sufficient to indicate gratefulness. Also, different medical treatments have vastly different time scales for recovery. This situational dependency requires a human development officer in the loop to determine whether the time is right to approach a prospective donor, regardless of what a model predicts.

Although a patient may feel extraordinarily grateful for the care they received, only a small fraction of patients are capable of making impactful donations to a non-profit. Why not simply predict whether a patient will make a gift? Although this seems like a logical target for modeling, it suffers from both of the emotional-trigger problems highlighted above. Depending on the type of treatment the patient received, months or years may pass before they are able to make a gift, so the outcome may fall outside any reasonable prediction window. And the counterfactual for gratefulness does not exist in the data: we may never know explicitly when, or why, a person decides not to make a gift.

Despite these issues, careful selection of the target variable and solid data governance practices made the project very successful. By aggregating data from alternate sources on charitable giving, and by using information on existing relationships with the health foundation and its affiliated university, our propensity model significantly improved the identification of patients who were likely to begin a relationship with our client. When tested on a population of known donors who had never been flagged with our chosen target in the database, the model prioritized 75% of these donors for review by development officers, compared with a base rate of 5% in the overall population, a 15-fold improvement over the base rate.
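The arithmetic behind that comparison is simple. Here is a short sketch of the lift calculation, with hypothetical counts, under one reading of the comparison (the share of held-out known donors the model flags versus the share of the overall population that would be flagged at the base rate):

```python
# Sketch of the lift arithmetic described above (counts are hypothetical).
known_donors = 400                      # hold-out group of known donors
donors_prioritized_by_model = 300       # the model flags 75% of them
capture_rate = donors_prioritized_by_model / known_donors   # 0.75

base_rate = 0.05    # share flagged in the overall population

lift = capture_rate / base_rate
print(f"capture rate = {capture_rate:.0%}, lift = {lift:.0f}x")   # 75%, 15x
```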

Summary

While the specific emotional trigger that inspires a person to action may remain hidden, machine learning still offers valuable predictions about human behavior, such as patient gratefulness. Following data science best practices, such as maintaining good data governance and collecting data that relates to the problem of interest, can help overcome the obstacles posed by a hidden trigger. Rather than predicting a specific trigger, use machine learning to predict the conditions surrounding a trigger, or an individual's susceptibility to one. The model results then enable domain experts (e.g., development officers) to exercise their judgment on whether it is the right time to act.

It is worth stating that care should be exercised when interpreting any machine learning prediction of human behavior. No matter how high the likelihood, a model result does not predict that someone will do something, just that they might. Whether they will or not is still a matter of personal deliberation, passion, and preference.

Want to learn more?

Elder Research has deep experience predicting human behavior in ways that deliver a high return on our clients' investment in analytics. Contact us for a consultation to explore how we can help.