The New Republic: Turns Out Algorithms Are Racist
An August 2017 report by The New Republic on artificial intelligence (AI) suggests that “artificial intelligence may be just as bigoted as human beings.”
The report digs into machine bias in everything from credit approvals to social justice, and the future doesn’t look too bright…
It turns out that artificial intelligence may be just as bigoted as human beings. Last fall, it was discovered that a complex program used in image recognition software was producing sexist results, associating cleaning and the kitchen with women, for example, and sports with men. The developers were disturbed, but perhaps it shouldn’t have been so surprising. After all, computers and software, even at their most sophisticated, are still in essence input-output systems. AI is “taught” by feeding it enormous amounts of pre-existing data, in this case thousands upon thousands of photos. If it began to associate certain genders with certain activities, it is because it was reproducing the bias inherent in its source material: a world in which the people pictured in kitchens are too often women.
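The mechanism is simple enough to demonstrate in a few lines. Below is a minimal, hypothetical Python sketch (the photo counts are invented for illustration, not drawn from the study): a naive classifier trained on a skewed set of labeled photos simply learns, and then reproduces, the skew.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (activity shown in photo, gender of person pictured).
# The skew toward women in kitchen photos is invented for illustration.
training_photos = (
    [("kitchen", "woman")] * 200 + [("kitchen", "man")] * 100 +
    [("sports", "man")] * 220 + [("sports", "woman")] * 80
)

# "Training": estimate P(gender | activity) by counting, as a naive classifier would.
counts = defaultdict(Counter)
for activity, gender in training_photos:
    counts[activity][gender] += 1

def predict_gender(activity):
    """Return the most probable gender label for a photo of this activity."""
    return counts[activity].most_common(1)[0][0]

for activity in ("kitchen", "sports"):
    total = sum(counts[activity].values())
    probs = {g: n / total for g, n in counts[activity].items()}
    print(activity, probs, "->", predict_gender(activity))

# The model labels every kitchen photo "woman" and every sports photo "man".
# It has learned nothing about kitchens or sports, only the demographic
# skew of its source material.
```

Real image-recognition systems are vastly more complex, but the failure mode is the same: the output distribution mirrors the input distribution.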
The problem is a significant one. In the abstract, the term artificial intelligence can conjure sci-fi visions of fully autonomous robots with personalities, but for the present, AI mostly refers to complex software used to make decisions or carry out tasks, everything from determining credit approvals to predicting shopping habits to driving cars, software that is becoming ever more pervasive in our daily lives. As we outsource and automate things like decision-making, customer service, and physical or mental tasks to software, there are profound ramifications for employment, government, regulation, and social justice, too. We stand at the threshold of determining whether a new era of technology will replicate the injustices of the past, or whether it might instead be used to challenge the inequalities of the present.
Once we understand artificial intelligence as primarily a decision-making system, it becomes easier, and also more surprising, to see the extent to which it has already penetrated our lives, often with a worrying lack of transparency. Amazon, for example, already uses AI in hundreds of ways: to determine consumer preferences, to suggest products to buyers, to organize its warehouses and distribution, and of course in its Alexa voice assistant products like the Amazon Echo. But the company also uses AI to push customers toward higher-priced products that come from preferred partners.

These kinds of examples are becoming more common, and more serious. A ProPublica investigation revealed that justice systems were using AI to predict the chance of reoffending, and that the software incorrectly marked black defendants as more likely to be future criminals. AI is also being used to determine which prison a convict should go to or, as The Atlantic revealed, what visitation rights he or she might have. AI and machine learning were in part responsible for the fake news that may have influenced the 2016 election. AI is also used to determine credit eligibility and offers for other financial products, and often does so in discriminatory ways: the programs may offer you a higher interest rate if you are black or Latino, for instance, than if you are Asian or white.
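To make the ProPublica finding concrete: the disparity it reported was in error rates, not overall accuracy. Among defendants who did not go on to reoffend, black defendants were far more likely to have been flagged as high risk. Below is a minimal Python sketch of that kind of audit, run on a tiny invented dataset (the records and the resulting rates are hypothetical, chosen only to show the calculation).

```python
from collections import defaultdict

# Hypothetical audit records, invented for illustration:
# (group, flagged_high_risk_by_the_tool, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True),  ("black", True,  False),
    ("black", False, False), ("black", True,  False), ("black", False, True),
    ("white", False, False), ("white", True,  True),  ("white", False, False),
    ("white", False, True),  ("white", True,  False), ("white", False, False),
]

# False positive rate per group: of the people who did NOT reoffend,
# what share did the tool still flag as high risk?
flagged = defaultdict(int)
non_reoffenders = defaultdict(int)
for group, high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if high_risk:
            flagged[group] += 1

for group in sorted(non_reoffenders):
    fpr = flagged[group] / non_reoffenders[group]
    print(f"{group}: false positive rate = {fpr:.0%}")

# A gap between the two rates is the kind of disparity ProPublica
# reported: a tool can look similarly "accurate" overall while its
# mistakes fall much more heavily on one group.
```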