One Solution: Regulation
Some say the response should be more laws and regulation. The European Union went down this road when it promulgated the General Data Protection Regulation (GDPR) in 2016, replacing an older data-protection directive that had become outmoded. However, while the 82-page GDPR has successfully launched a compliance cottage industry, it isn’t much help with issues of bias and unfairness. It is focused primarily on disclosure, privacy, and consumer control of data, things most internet users seem to care little about: 95% of them simply breeze past the boilerplate Terms of Service that enshrine GDPR and other legal compliance rules without reading them. Undaunted, the EU is now working on a third regulatory attempt, the proposed Artificial Intelligence Act, which will supposedly address issues of bias, behavioral manipulation, surveillance, and the like.
Perhaps it will have better results than its two predecessors, but technological innovation has a way of working around fixed legal obstacles, much as water running downstream quickly outflanks rocks placed in its path.
Another Solution: Machine Learning to the Rescue
A remedy that has been proposed in some cases is to have machine learning police itself. This is most relevant for social media platforms that recognize the need to curb toxic content. Facebook has tried to use machine learning algorithms to identify and remove, or at least throttle, hate speech. Despite optimistic public statements over the last couple of years, leaked internal documents (noted above) suggest the attempt is largely failing. A Wall Street Journal story based on those documents concluded that Facebook’s machine learning policing efforts removed less than 5% of the hate speech they targeted.
Would more data and more time allow the algorithms to better distinguish hate speech from legitimate speech? Unfortunately, the target is moving. Tomorrow’s hate speech and toxic content will not necessarily look like today’s: new slang, coded terms, and deliberate misspellings emerge faster than models can be retrained, so the algorithms are not likely to have enough data and time to keep current.
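To make the moving-target problem concrete, here is a deliberately simplified sketch (the terms are hypothetical placeholders, and this is not Facebook’s actual system) of a filter built from yesterday’s toxic vocabulary failing on tomorrow’s coded variants:

```python
# Toy illustration of the moving-target problem (hypothetical, not Facebook's system).
# A filter learned from yesterday's data catches yesterday's phrasing,
# but misses re-spelled or newly coded variants of the same message.

yesterdays_toxic_terms = {"slur_a", "slur_b"}  # placeholder vocabulary from past training data

def flag_as_toxic(post: str) -> bool:
    """Flag a post if it contains any term seen in past training data."""
    words = set(post.lower().split())
    return bool(words & yesterdays_toxic_terms)

print(flag_as_toxic("rant containing slur_a"))        # True: matches yesterday's vocabulary
print(flag_as_toxic("rant containing sl_u_r_a"))      # False: same intent, new spelling
print(flag_as_toxic("rant containing new_codeword"))  # False: tomorrow's coded term
```

By the time the filter’s vocabulary is updated, the coded terms have moved on again, which is the essence of the problem.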
A Third Solution: Protection by Vigilante Justice
At the opposite end of the scale from regulation lies vigilantism, or, to put it in softer terms, community self-policing. Algorithmic bias has become a high-profile topic, with articles about the latest outrage appearing almost daily. In theory, the threat of reputational harm should push companies that have been called out to eliminate, or at least mitigate, bias in their algorithms. In practice, that has not always happened: Northpointe is still marketing COMPAS, and Optum is still selling its healthcare algorithm. Facebook has taken steps to protect its platform from toxic content, but the leak of internal documents shows that the company is unlikely to sacrifice much revenue to this end.
One group, though, is betting that reputational harm will be a potent motivator: the Algorithmic Justice League (AJL). The League was founded by Joy Buolamwini, who has written extensively about the struggles facial recognition algorithms have with darker-skinned faces. Well-placed to generate publicity, the League is on the lookout for cases of potential machine learning bias that might harm corporate reputations. Conveniently, the League offers the services of its partner, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), to audit machine learning algorithms and fix them. Founded by Cathy O’Neil, author of Weapons of Math Destruction, ORCAA offers its audits for a fee. The AJL-ORCAA combination has already landed a fruitful engagement with a major consumer goods company, one that all parties seem to have embraced enthusiastically. Still, appearances matter. Profiting from fixing problems that your partner’s publicity machine highlights starts to look a little like a protection racket.