Making Data Science Responsible: Regulation? Vigilantism?

Author: Peter Bruce
Date Published: November 29, 2021

Nearly every day there is a news story reminding us that, for all the benefits of machine learning – benefits that sometimes seem magical – there is a dark side as well. One example that received a lot of attention was the COMPAS algorithm, used in the United States to predict defendants’ propensity to commit further crimes. This algorithm provides a very modest improvement in accuracy over the naive model of simply saying that every defendant will commit another crime (“recidivate”), but it performs very differently for African-Americans (they are over-predicted to recidivate) than for Whites (they are under-predicted to recidivate).
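To make concrete how a single overall accuracy figure can mask group-level disparities of this sort, here is a minimal sketch in Python that computes false positive and false negative rates separately by group for a small invented set of predictions. The data and column names are illustrative assumptions only; this is not the COMPAS data or methodology.

```python
import pandas as pd

# Invented predictions for a recidivism-style classifier
# (1 = predicted/actual re-offense, 0 = not). Not real data.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0,   0,   1,   1,   0,   1,   1,   0],
    "predicted": [1,   0,   1,   1,   0,   0,   1,   0],
})

def error_rates(d):
    """False positive rate (over-prediction) and false negative
    rate (under-prediction) for one group's rows."""
    fpr = ((d.predicted == 1) & (d.actual == 0)).sum() / (d.actual == 0).sum()
    fnr = ((d.predicted == 0) & (d.actual == 1)).sum() / (d.actual == 1).sum()
    return pd.Series({"FPR": fpr, "FNR": fnr})

# Overall accuracy is a single, unremarkable number...
print("accuracy:", (df.predicted == df.actual).mean())   # 0.75

# ...but group A is over-predicted (high FPR) while group B is
# under-predicted (high FNR).
print(df.groupby("group")[["actual", "predicted"]].apply(error_rates))
```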

There are many other examples:

  • Social media algorithms designed to promote user engagement have also fostered the rapid spread of toxic content, such as TikTok videos showing ISIS fighters dragging corpses through a town, or child abduction rumors on WhatsApp that led to lynchings.
  • The Optum algorithm used to predict individuals’ need for medical care systematically under-predicted that need for African Americans.
  • The Narxcare algorithm used to predict propensity for opioid addiction led to denial of pain medication in legitimate cases.

A common theme is that, in many cases, the harm is unintentional.  The developers of the COMPAS and Optum algorithms did not set out to discriminate against African-Americans.  Appriss Health, which developed Narxcare, did not intend to foster denial of pain medication to those truly needing it.

There are, of course, other cases where harm is intentional.  How about the use of facial recognition algorithms coupled with entity resolution techniques to monitor and suppress ethnic minorities in Xinjiang province in China?  Most readers would characterize that as intentionally inflicting harm. Then there are “gray areas,” where the owner of an algorithm becomes aware of its harm but, fearing loss of revenue, does nothing.

Leaked Facebook documents recently revealed that the company failed to act on its own research showing that ML-enabled algorithms harmed teenagers (for teen girls with mental health and body-image issues, Instagram exacerbated those issues).

How to solve problems like this?

One Solution: Regulation

Some say the response should be more laws and regulation.  The European Union went down this road when it promulgated the General Data Protection Regulation (GDPR) in 2016, replacing an earlier directive that had become outmoded.  However, while the 82-page GDPR has successfully launched a compliance cottage industry, it isn’t much help with issues of bias and unfairness.  It is focused primarily on disclosure, privacy and consumer control of data, things most internet users seem to care little about: 95% of them simply breeze past the boilerplate Terms of Service that enshrine GDPR and other legal compliance rules, without reading them. Undaunted, the EU is now working on a third regulatory attempt, the proposed Artificial Intelligence Act, which will supposedly address issues of bias, behavioral manipulation, surveillance, and the like.

Perhaps it will have better results than its two predecessors, but technological innovation has a way of simply working its way around fixed legal obstacles, in much the same way that water running downstream will quickly outflank rocks placed in its path.

 

Another Solution:  Machine Learning to the Rescue

A remedy that has been proposed in some cases is to have machine learning police itself.  This is most relevant for social media platforms that recognize the need to curb toxic content.  Facebook has tried to use machine learning algorithms to identify and remove, or at least throttle, hate speech.  Despite optimistic public statements over the last couple of years, leaked internal documents (noted above) suggest the attempt is largely failing.  A Wall Street Journal story based on those documents concluded that Facebook’s machine learning policing efforts ended up removing less than 5% of the hate speech it was going after.

Would more data and more time allow the algorithms to better distinguish hate speech from legitimate speech?  Unfortunately, the target is moving.  Tomorrow’s hate speech and toxic content will not necessarily be like today’s, and algorithms are not likely to have enough data and time to keep current.

A Third Solution:  Protection by Vigilante Justice

At the other end of the scale from regulation lies vigilantism, or, to put it in softer terms, community self-policing.  Algorithmic bias has become a high-profile topic, with articles about the latest outrage appearing almost daily. To avoid reputational harm, companies that have been called out should want to eliminate, or at least mitigate, bias in their algorithms. But reputational pressure has not proved decisive in some prominent cases: Northpointe is still marketing COMPAS and Optum is still selling its healthcare algorithm.  Facebook has taken actions to protect its platform from toxic content, but the leak of internal documents shows that the company is unlikely to sacrifice much revenue to this end.

One group, though, is betting that reputational harm will be a potent motivator: The Algorithmic Justice League (AJL).  The League was founded by Joy Buolamwini, who has written extensively about the struggles that facial recognition algorithms have with darker faces.  Well-placed to generate publicity, the League is on the lookout for cases of potential machine learning bias that might harm corporate reputations.  Conveniently, the League offers the services of its partner, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), to conduct audits of machine learning algorithms and fix those algorithms.  Founded by Cathy O’Neil, author of Weapons of Math Destruction, ORCAA offers its audits for a fee.  The AJL-ORCAA combination has already landed a fruitful engagement with a major consumer goods company.  The engagement seems to be embraced enthusiastically by all parties.  Still, appearances matter.  Profiting by fixing problems that your partner’s publicity machine highlights starts to look a little like a protection racket.

What the Data Scientist Can Do

Notwithstanding what governments, other algorithms and vigilantes can or can’t do, there is a lot the data science practitioner or manager can do to avoid unintentional reputational or ethical difficulties.  Practicing “responsible data science” means following a framework that is really an enhanced version of existing technical “best practices” rubrics, such as CRISP-DM. Grant Fleming and I present such a framework in our book Responsible Data Science; others have presented similar frameworks.

Most such frameworks start with some version of the following set of principles:

  • Non-maleficence (avoiding harm)
  • Fairness and bias
  • Transparency (including the important issue of model interpretability)
  • Accountability
  • Privacy

From a process standpoint, the project goes through phases like the following:

  1. Understand and justify the project, and anticipate ill use.
  2. Assemble what’s needed: agreed goals and requirements, modeling tools, data, and Datasheets, which include data dictionaries and explanations of how the data was gathered, pre-processed, and intended to be used (a minimal sketch of such a Datasheet follows this list).
  3. Prepare the data, checking for any distortion introduced during data wrangling.
  4. Fit and assess models.
  5. Audit the model and incorporate feedback: check model predictions for sensitive or protected groups; review and tweak the model if needed.
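As one way to make the Datasheet idea in the second phase tangible, here is a minimal sketch of recording that information alongside the data. The field names and example values are assumptions for illustration, not a formal standard or the book's prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Datasheet:
    """Minimal record of a dataset's provenance and intended use.
    Field names are illustrative, not a formal schema."""
    name: str
    source: str                       # who gathered the data, and how
    collection_period: str
    preprocessing: list[str]          # wrangling steps applied so far
    intended_use: str
    known_limitations: list[str]      # sampling gaps, sensitive fields, etc.
    data_dictionary: dict[str, str]   # column name -> plain-language meaning

# Hypothetical example for a loan-approval modeling project.
loan_sheet = Datasheet(
    name="loan_applications_2020",
    source="Branch application forms, keyed in by staff",
    collection_period="2018-2020",
    preprocessing=["dropped rows with missing income",
                   "capped income at the 99th percentile"],
    intended_use="Credit-risk modeling only; not for marketing",
    known_limitations=["under-represents applicants without bank accounts"],
    data_dictionary={"income": "self-reported annual income, USD",
                     "approved": "1 if the loan was granted, else 0"},
)
```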

Non-maleficence, for example, sounds unimpeachable at first, but, on reflection, you realize that most algorithmic decisions cause some harm at an individual level.  When a model recommends against a loan for an individual, that individual is harmed.  So “avoiding harm” may need a larger context and/or a more limited definition.

Fairness is also difficult to pin down.  What is fair for one person might be unfair for another. One person might consider it fair for everyone to pay the same absolute amount in taxes.  Another person might think everyone should pay the same share of their income in taxes. Another might think the wealthy should pay a higher share of their income in taxes. Yet another might think that the wealthy should pay all the taxes. There is no universally agreed definition, in this case, of what constitutes fairness.  Likewise, to some, biased decisions are any decisions that result in different outcomes for different groups.  To others, bias might have a more subtle meaning:  for example, a model that is more accurate for some groups than others.
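To see how these competing definitions can point in opposite directions, the short sketch below scores one invented set of loan decisions two ways: comparing approval rates across groups (equal outcomes) and comparing accuracy across groups. The numbers are made up for illustration and chosen so that the model passes the first test but fails the second.

```python
import pandas as pd

# Invented loan decisions for two groups (1 = creditworthy / approved).
df = pd.DataFrame({
    "group":     ["A"] * 5 + ["B"] * 5,
    "actual":    [1, 1, 1, 0, 0,   1, 1, 0, 0, 0],
    "predicted": [1, 1, 1, 0, 0,   1, 0, 1, 1, 0],
})

# Definition 1 ("different outcomes for different groups"):
# compare the share of favorable predictions per group.
approval_rate = df.groupby("group")["predicted"].mean()
print(approval_rate)   # A: 0.6, B: 0.6 -> equal outcomes; no bias by this test

# Definition 2 ("more accurate for some groups than others"):
# compare accuracy per group.
accuracy = (df["actual"] == df["predicted"]).groupby(df["group"]).mean()
print(accuracy)        # A: 1.0, B: 0.4 -> clearly unequal by this test
```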

Summary

The responsible data science (RDS) framework does not, by itself, provide clarity on all issues that require human judgment.  It does provide a checklist of what to consider, and an extension of existing technical “best practices” to cover broader ethical issues.

Explore more ethical issues in data science with Responsible Data Science.