Building Models Experts Will Trust

Author: Will Goodrum
Date Published: June 23, 2017

There are two problems with humans making decisions from data. We are biased (even experts are prone to inconsistent judgments), and we don’t always understand, or trust, the model. Although decision-makers could benefit from using data as part of their decision making, raw machine learning results may not be meaningful enough to act on. So how can we use data in a way that experts trust, without diluting the machine learning process?

Colin Chapman, the founder of Lotus Cars and its Formula 1 team, had great insight into the performance of his race cars. He knew that adding horsepower made a car faster on the straightaways but didn’t help on the curves. So he reframed the goal with the mantra “simplify and add lightness”: by making his cars lighter, they were faster everywhere on the track. The reframing focused on what is essential to leave in, rather than on what to leave out. Applying that framing device to analytics, there are several approaches that “simplify and add interpretability”. Some examples are:

  • Select, Regress, and Round
  • LIME (Local Interpretable Model-agnostic Explanations; illustrated in the sketch below)
  • Bayesian Rule Lists
  • Optimal Action Extraction
  • Dark Knowledge

Each of these methods increases model transparency; that is, each makes a model’s decisions less opaque and, in some contexts, makes the contribution of each model component clearer.
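As a quick illustration of one of these methods, the sketch below uses the open-source lime Python package to explain a single prediction from a black-box classifier. The data, model, and feature names are invented for illustration; none of this comes from the article.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Toy data and a black-box model (illustrative assumptions only)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 2] > 0).astype(int)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # LIME fits a simple surrogate model in the neighborhood of a single
    # prediction and reports which features pushed that decision up or down.
    explainer = LimeTabularExplainer(
        X,
        feature_names=["f0", "f1", "f2", "f3"],
        class_names=["no", "yes"],
        mode="classification",
    )
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(explanation.as_list())  # e.g., [('f2 > 0.65', 0.24), ...]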

Calculating Decision Rules Using Simplified Regression

Select, Regress, and Round was recently described in the Harvard Business Review. This framework combines the analytical rigor of regression with the familiarity and applicability of decision rules. Through simplification, Select, Regress, and Round enables decision makers to gain much of the benefit of advanced analytics, with the added interpretability of knowing why the model recommends a certain decision.

In Elder Research’s Ten Levels of Analytics framework (by Fast and Elder), greater business value is often achieved by applying higher levels of analytics to the business problem. The Select, Regress, and Round process can be used, for example, to advance a project from Level 4 (Business Rules) to Level 8 (Structure Learning).

The project begins with Select and Regress, proceeding as any modeling effort would: stepwise variable selection, feature engineering, and whatever else it takes to model the target outcome well. Applying cross-validation and target shuffling ensures that the results are real, yielding a high-performing regression model judged on the metrics that matter for the problem at hand. The special sauce comes with Rescale and Round: divide each coefficient by the largest coefficient magnitude, stretch the result onto a fixed range (-10 to 10 in the white paper’s example), and round to the nearest integer. Instead of real-valued coefficients, the model ends up with small integer weights that all lie on the same scale.
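As a concrete illustration of Rescale and Round, here is a minimal Python sketch. The function name and the example coefficients are assumptions made for this post, not the authors’ implementation.

    import numpy as np

    def rescale_and_round(coefficients, scale=10):
        """Map real-valued regression coefficients to integers in [-scale, scale].

        Each coefficient is divided by the largest coefficient magnitude,
        stretched onto the target range, and rounded to the nearest integer.
        """
        coefficients = np.asarray(coefficients, dtype=float)
        largest = np.max(np.abs(coefficients))
        if largest == 0:
            return np.zeros(coefficients.shape, dtype=int)
        return np.rint(scale * coefficients / largest).astype(int)

    # Example: real-valued coefficients from a fitted regression model
    print(rescale_and_round([0.82, -0.15, 0.0, 0.37, -0.61]))
    # -> [10 -2  0  5 -7]

Any coefficient small enough to round to zero drops out of the rubric entirely, so this step prunes features beyond what the regression itself removed.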

You are still relying on lasso regression to yield good coefficients. The lasso’s regularization eliminates some features outright; rescaling and rounding then knocks out additional features whose coefficients round to zero. The key is asking: which features correlate most strongly with the desired outcome? Having identified them using machine learning, build rubrics around them that a human can understand. The benefit of Rescale and Round is that you preserve much of the performance lift while making the process far more transparent. Use it in deployment environments where mental arithmetic is easier than remembering what a raw model score means.
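To see the whole flow end to end, here is a hedged sketch pairing L1-regularized (lasso-style) logistic regression from scikit-learn with the rescale_and_round helper sketched above. The synthetic data, penalty strength C, and feature names are all illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in data: X holds candidate features, y a binary outcome
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

    # Select and Regress: the L1 penalty drives weak coefficients to exactly zero
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
    model.fit(X, y)

    # Rescale and Round: integer point values a person can sum in their head
    weights = rescale_and_round(model.coef_.ravel())
    for i, w in enumerate(weights):
        if w != 0:
            print(f"feature_{i}: {w:+d} points")

The surviving features form the rubric: add up the points for a new case and compare the total against a cutoff chosen on held-out data.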

The Power of the Process

To show the power of this process, the HBR article cites an example in which Jung et al. created a model for judges to use in determining a defendant’s flight risk. The model analyzed over 100,000 judicial pretrial release decisions for a large city. According to Jung, “following our rule would allow judges in this jurisdiction to detain half as many defendants without appreciably increasing the number who fail to appear at court. How is that possible? Unaided judicial decisions are only weakly related to a defendant’s objective level of flight risk. Further, judges apply idiosyncratic standards, with some releasing 90% of defendants and others releasing only 50%. As a result, many high-risk defendants are released and many low-risk defendants are detained. Following our rubric would ensure defendants are treated equally, with only the highest-risk defendants detained, simultaneously improving the efficiency and equity of decisions.” Jung continues, “Decision rules of this sort are fast, in that decisions can be made quickly, without a computer; frugal, in that they require only limited information to reach a decision; and clear, in that they expose the grounds on which decisions are made.”

Unaided by data, the judges examined were more likely to assign a higher flight risk to African American defendants than to Caucasian defendants, even when their backgrounds were similar. So another advantage of using such a model is that its simple rules can remove even unconscious expert bias from practice, provided the experts will implement it.

The Select, Regress, and Round procedure is a useful way to simplify and add interpretability to your models when needed.

Request a consultation to speak with an experienced data analytics consultant.
