Balance: Accuracy vs. Interpretability in Regulated Environments

Author: Will Goodrum
Date Published: November 4, 2016

At Elder Research, we are a “whiteboard on every wall” kind of office. Inspiration happens spontaneously, and any conversation, however casual, can easily drift into an involved discussion that uncovers the hidden route to move a project forward. Recently, one of these whiteboard discussions took place between our Principal Data Scientist, Mike Thurber, and our Founder, Dr. John Elder IV. The conversation started innocently enough over lunch, but then stretched over the next several hours. Although the tone never rose above a murmur, this was clearly a vibrant discussion. What was the subject of this whiteboard-fueled philosophizing? Another chapter in the familiar balancing act between accuracy and interpretability in predictive modeling.

The tension between interpretability and accuracy is a fundamental trade-off when using Machine Learning algorithms for prediction. Simpler predictive algorithms, which are more easily interpreted by humans, tend to be less accurate than more advanced methods that are not as easily explained. (Note that by accuracy I do not mean how well a model does in training, but the far more important measure of how well it does in evaluation, generalizing to new cases.) Interpretability tends to be a subjective assessment of a model: it boils down to how easily the model can be explained to a human being. By contrast, model accuracy is any quantitative metric of how well the model predicts the outcome of interest, for example, the proportion of known outcomes that the model predicted correctly.
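To make the training-versus-evaluation distinction concrete, here is a minimal sketch in Python. The use of scikit-learn, a synthetic dataset, and a decision tree are illustrative assumptions of mine, not part of the original discussion; the point is only that a model can look nearly perfect on the data it was trained on and still be judged by how it generalizes to held-out cases.

```python
# Minimal sketch: training accuracy can overstate how well a model
# generalizes; accuracy on held-out (evaluation) cases is what matters.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for real outcomes of interest.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Training accuracy:  ", model.score(X_train, y_train))  # often near 1.0
print("Evaluation accuracy:", model.score(X_test, y_test))    # generalization
```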

For such a quantitative field, data science remains divided over the subjective importance of interpretability in predictive analytics. As more and more companies investigate the use of advanced analytics to drive future growth, this tension becomes increasingly relevant to analytic success. How do we as data science consultants build confidence in our methods if those methods are not well understood by our clients? Further, how do we continue to justify the value proposition of analytics if we must handicap our models to make them interpretable?

Nowhere is this more pertinent than in heavily-regulated industries (e.g., banking, insurance, and healthcare), where understanding the information used by an analytical model is not just a preference but often a matter of legal compliance. The costlier the impact of an error, a breach of privacy, or the possibility of discrimination, the greater the demand for an interpretable (and accurate) solution.

The Case for Accuracy

It is easy to build a case in favor of weighting model accuracy over interpretability. Accuracy can be objectively and easily scored, allowing alternate methods to be compared readily with one another. If a predictive model performs better in terms of the selected score, that should weigh heavily in its favor. When the cost of an error is high, the incentive for higher performance is correspondingly greater. Additionally, even so-called “interpretable” models like decision trees or linear regression can rapidly become non-interpretable. For example, after the first couple of levels of a decision tree, it becomes difficult for humans to parse the reasons for splitting on one variable versus another.

Even passionate advocates for accuracy acknowledge that interpretability has its place in model selection. “Interpretability is good if it kills a model. Interpretability is bad if it confirms a model,” said Dr. Elder in a recent discussion. His rationale is based on hard-won experience: if a human can understand why a model is wrong, that is a good application of interpretability. However, he believes humans are far too prone to confirmation bias[1] based on prior experience to interpret a model’s results objectively. We are desperate for explanation, and can easily make spurious arguments based on the results presented to us from the data (or even with no data at all).

The relative importance of accuracy ratchets up dramatically as the cost of an error increases. For example, IBM’s Watson technology is increasingly being used to support medical diagnoses. Recently, Watson successfully diagnosed[2] a rare form of leukemia that oncologists had misdiagnosed. This is a fantastic example of how accurate and vetted Machine Learning algorithms can augment existing workflows. A disease was found, and a life was saved.

But imagine for a second that Watson had been wrong. A patient would have been set on an expensive and time-consuming course of treatment of little to no benefit (and potentially harmful). Watson, like all other Machine Learning algorithms, only “knows” information that it has been given; it cannot know about cases or alternate hypotheses external to the data on which it was trained. The healthcare environment is one in which accuracy is paramount, but that accuracy is only as good as the data on which the model was trained. When Watson (or a Watson-like system) makes a diagnosis, it is essential that the logic behind the diagnosis is evident as well. The cost of an error is too great to be left to a black-box answer.

In the very different world of financial services, a mortgage loan or insurance claim cannot be denied on the basis of race. If race has somehow been included in a predictive model, this would violate regulations that forbid discriminatory practices. This begins to swing the needle back from accuracy toward interpretability.

Simplify, Then Add Interpretability

How much interpretability is required depends strongly on the consumer of the model results. If the model result is just a score to be passed to an automated process, then very little may be necessary. When a human (e.g., a doctor or nurse) is the end-user of the model result, and that result demands action, more interpretability should be added to explain why action must be taken. Further, in regulated environments, it must always be possible to explain which factors were significant in calculating a given model score, regardless of the end-user. Since model accuracy is often paramount in these same highly-regulated environments, what must be given up to reach the required level of interpretability?

A surprising parallel to the tension between interpretability and accuracy exists in the highly-regulated world of motorsports. The fundamental goal of motorsports is straightforward: be the fastest around the track. Speed requires performance, and that performance can be realized through another trade-off: power versus weight. A car can be made faster by giving it a higher-horsepower engine. It can likewise be made faster by making it lighter. These two things are typically at odds with one another. More powerful engines are typically bigger and heavier, hurting performance through the turns. Reducing weight improves performance everywhere around the track, but is often viewed negatively because it forces a harder compromise (i.e., what must be left out?).

However, the emphasis on weight reduction need not be so loss-driven. Colin Chapman, the founder of Lotus, coined an ingenious inversion of the problem: “Simplify, then add lightness.” Although it may seem strange, this concept of “adding lightness” completely reframes the inherent tension. It becomes less about compromise (“what do we leave out?”) and more about innovation and creativity (“how can we do this?”).

Model interpretability can be similarly reframed. Rather than focusing on the accuracy lost in the name of interpretability in regulated or high-cost environments, what if analytical models were built with an emphasis on “adding interpretability” where needed? Techniques are available[3], such as surrogate models, small ensembles, variable importance measures, and permutation tests, that can augment the interpretability of more sophisticated algorithms. By adding interpretability, these more accurate and advanced algorithms can (when required) be employed to improve model performance on critical decisions. The added-interpretability paradigm also short-circuits the temptation to select models based on interpretability alone. Instead, models can be created and scored as usual, with interpretability added as needed to explain (not vindicate) the final choice.
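As a concrete illustration of “adding interpretability,” here is a minimal sketch in Python of two of the techniques cited above: a shallow surrogate tree fit to a more accurate model’s predictions, and permutation-based variable importance computed on held-out data. The use of scikit-learn, a synthetic dataset, and a gradient-boosted model are illustrative choices of mine, not the original article’s prescription.

```python
# Minimal sketch: keep the accurate, harder-to-interpret model, and add
# interpretability alongside it rather than replacing it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Fit the more accurate, less interpretable model.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", black_box.score(X_test, y_test))

# 2. Surrogate model: a shallow tree trained to mimic the complex model's
#    predictions, giving a human-readable approximation of its logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))

# 3. Permutation importance: how much held-out accuracy drops when each
#    input is shuffled, indicating which factors drive the model's scores.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"x{i}: {importance:.3f}")
```

The surrogate and the importance scores do not replace the more accurate model; they sit alongside it to explain which factors drive a given score, which is the kind of explanation regulated environments demand.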

Keep in mind that an emphasis on adding interpretability is only necessary when the increased performance of a more sophisticated algorithm is absolutely required. That requirement depends heavily on the real cost of the errors made by a simpler algorithm. Sometimes, it is not very much[4,5]. If a simpler algorithm produces results that are sufficiently accurate, with satisfactory interpretability, then the job is done. In general, start simply, then increase sophistication and add interpretability as needed.

Summary

Reframing the trade-off between predictive model interpretability and accuracy as one in which interpretability is added to models makes the tension less about what is sacrificed and more about applying innovation and creativity to meet the often competing goals of high performance and comprehension in highly-regulated environments. Adding interpretability, when required, frees the data scientist to apply the entire arsenal of Machine Learning algorithms at his or her disposal and to explain the scoring after the fact. It also short-circuits the temptation to select models based on interpretability alone. Instead, models vindicate themselves on performance, not on subjective human interpretation. Even if the debate surrounding interpretability and accuracy is never permanently resolved, at least there is something new to talk about in those long whiteboard discussions.

[1] C. Graves, “Why Debunking Myths About Vaccines Hasn’t Convinced Dubious Parents,” Harvard Business Review (Online), February 20, 2015.

[2] J. Fingas, “IBM’s Watson AI saved a woman from leukemia,” Engadget, August 7, 2016.

[3] P. Hall, “Predictive modeling: Striking a balance between interpretability and accuracy,” O’Reilly: On Our Radar Blog, February 11, 2016.

[4] U. Johansson et al., “Trade-off between interpretability and accuracy for predictive in silico modeling,” Future Medicinal Chemistry, April 2011, Vol. 3(6), pp. 647-663. Accessed via PubMed, April 3, 2011.

[5] N. Phillips, “Making fast, good decisions with the FFTrees R package,” nathanieldphillips.com, August 17, 2016.