Beyond Transparency and Accountability: Three Additional Features Algorithm Designers Should Build into Intelligent Platforms
Abstract
In the age of artificial intelligence, innovative businesses are eager to deploy intelligent platforms to detect and recognize patterns, predict customer choices and shape user preferences. Yet such deployment has brought along the widely documented problems of automated systems, including coding errors, corrupt data, algorithmic biases, accountability deficits and dehumanizing tendencies. In response to these problems, policymakers, commentators and consumer advocates have increasingly called on businesses seeking to ride the artificial intelligence wave to build transparency and accountability into algorithmic designs. While acknowledging these calls for action and appreciating the benefits and urgency of building transparency and accountability into algorithmic designs, this article highlights the complications that the growing use of artificial intelligence and intelligent platforms has brought to this area. Commissioned for the 2020 Northeastern University Law Review Symposium entitled "Eyes on Me: Innovation and Technology in Contemporary Times," this article argues that owners of intelligent platforms should pay greater attention to three Is: inclusivity, intervenability and interoperability. The article begins with a brief background on the black box designs that now dominate intelligent platforms. It then explains why the I in AI has greatly complicated ongoing efforts to build transparency and accountability into algorithmic designs. The article further identifies the three additional Is that owners of intelligent platforms should build into these designs: inclusivity, intervenability and interoperability. These built-in design features will achieve win-win outcomes, helping innovative businesses to be both socially responsible and commercially successful.