NEXT: How will we mediate between computer and human generated decisions?
The European Union has adopted a regulation containing what is being called a “right to explanation.”
Neural networks employing deep learning algorithms have become particularly good at reading enormous data sets, identifying patterns in those data, and making accurate predictions and decisions based on those patterns. What is interesting, and a little chilling, is that while computer programmers have created the rules and protocols under which the algorithms operate, those programmers are uncertain as to exactly why the algorithms are so accurate. Indeed, there seems to be an inverse relationship between the predictive accuracy of these machine learning algorithms and our ability to explain or understand why they work.
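A toy sketch makes the point concrete (everything here is illustrative, not taken from any production system): a tiny neural network, trained by ordinary backpropagation, learns the XOR pattern to low error, yet the “knowledge” it acquires is nothing but a table of real-valued weights that offers no human-readable reason for any individual prediction.

```python
import math, random

random.seed(0)

# Toy training set: XOR, a pattern no single linear rule captures.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden layer of 4 units, randomly initialized.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial_loss = loss()
lr = 1.0
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Backpropagation: gradient of squared error through both layers.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(4):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # uses W2 before updating it
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = loss()
print("loss: %.4f -> %.4f" % (initial_loss, final_loss))
# The trained "understanding" is nothing but these numbers:
print("hidden weights:", [[round(w, 2) for w in row] for row in W1])
```

The network’s error drops as it trains, but inspecting the final weight matrices tells you almost nothing about why any one input was classified the way it was; scaled up from four weights to billions, that opacity is the black box the rest of this column is about.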
It is almost as if machine learning algorithms are exhibiting features we have associated with the human mind. By this, I mean that although we learn and understand more and more about its operations, the brain remains a mystery to us. One big question remains: Where does consciousness come from? Neurologists and cognitive scientists, for all they understand about the mechanics of the brain, still have no answer.
Machine learning algorithms are increasingly taking on this kind of black-box quality, which has troubling implications. If those algorithms are making decisions on our behalf, shouldn’t we first have some understanding of how they arrive at those decisions? That is the point of the EU regulation: any human who is the subject of a decision made by an algorithm would have the right to peer into the black box and understand the reasoning behind the algorithm’s decision.
The full name is the General Data Protection Regulation (GDPR). Article 22 of the GDPR restricts “automated individual decision-making,” and the accompanying law-enforcement data protection directive states the principle this way in its Article 11:
Member States shall provide for a decision based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her, to be prohibited unless authorised by Union or Member State law to which the controller is subject and which provides appropriate safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller…Profiling that results in discrimination against natural persons on the basis of special categories of personal data referred to in Article 10 shall be prohibited, in accordance with Union law.
In other words, the regulation would restrict any solely automated decision that “significantly affects” EU citizens. This includes techniques that evaluate a person’s “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” If an algorithm makes a decision, say, not to hire you for a job, then, in my understanding of this regulation, you would be within your rights to demand to understand why the algorithm decided as it did. Often without our knowledge, algorithms are already making all kinds of decisions for us, even trivial ones like an Amazon recommendation: “Based on your preferences, you might like these titles.” But some decisions, like whether to hire you or whether to extend insurance benefits, are of greater consequence. The implication, of course, is that much of this innovation could be gummed up in the courts, assuming the EU follows through and other countries, like the US, adopt similar regulations.
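For the very simplest models, the kind of explanation the regulation envisions is easy to produce. The sketch below is entirely hypothetical (the feature names, weights, and threshold are invented for illustration): with a linear hiring screen, the “reason” for a decision can be reported as each factor’s signed contribution to the score.

```python
# Hypothetical linear hiring screen: score = sum(weight * feature) + bias.
# All feature names and weights are invented for illustration.
WEIGHTS = {
    "years_experience": 0.8,
    "relevant_degree": 1.5,
    "typos_in_resume": -2.0,
}
BIAS = -1.0
THRESHOLD = 0.0

def decide_and_explain(applicant):
    """Return (decision, score, ranked per-feature contributions)."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values()) + BIAS
    decision = "hire" if score >= THRESHOLD else "reject"
    # Sort so the explanation leads with the most influential features.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, ranked

decision, score, ranked = decide_and_explain(
    {"years_experience": 2, "relevant_degree": 1, "typos_in_resume": 1}
)
print(decision, round(score, 2))  # score = 0.8*2 + 1.5*1 - 2.0*1 - 1.0
for name, c in ranked:
    print(f"  {name}: {c:+.1f}")
```

The tension the regulation exposes is precisely that deep networks admit no comparably crisp decomposition: there is no short, faithful list of weighted reasons to hand the data subject.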
As algorithms make more decisions that impact our lives, those decisions might be at odds with our interests. In such a situation, whose interests will win out? If politics is the negotiation of divergent interests, we might be witnessing a new political fault line. How will we mediate between computer- and human-generated decisions? A new “algorithmic politics” may be the result.
Forty years ago, the computer scientist Joseph Weizenbaum wrote Computer Power and Human Reason. His contention was that, as computers were programmed to make more and more decisions, they should never be allowed to supplant human judgment: that there were some decisions computers should not be permitted to make. We are fast approaching the time when Weizenbaum’s warnings need to be heeded.
The next Columbus Futurists monthly forum will be Thursday, January 19, at 6:30 PM in the Panera Bread community room (875 Bethel Rd.). Our topic for the evening will be “Gene editing and the future of CRISPR.”