AI Programs Exhibit Racial and Gender Biases, Research Reveals

An anonymous reader quotes a report from The Guardian: An artificial intelligence tool that has revolutionized the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the specter of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have come thanks to new machine learning techniques and the availability of vast amounts of online text data on which the algorithms can be trained. However, as machines get closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author of the study, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. The research, published in the journal Science, focuses on a machine learning tool known as "word embedding," which is already transforming the way computers interpret speech and text.
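To give a sense of how word embeddings can encode associations, here is a minimal toy sketch. The vectors below are hand-made for illustration, not learned from real text (real embeddings such as word2vec or GloVe have hundreds of dimensions trained on large corpora); the `association` score is a simplified, single-word version of the kind of similarity-difference test the researchers applied.

```python
import math

# Hand-made toy "embeddings" -- purely illustrative, NOT learned from data.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word, attr_a, attr_b):
    """Positive if `word` sits closer to attr_a than to attr_b in vector space."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# In these toy vectors, "flower" leans toward "pleasant" and "insect"
# toward "unpleasant" -- mirroring the sort of learned association the
# study measured in embeddings trained on real-world text.
print(association("flower", "pleasant", "unpleasant") > 0)  # True
print(association("insect", "pleasant", "unpleasant") < 0)  # True
```

Because embeddings place words near the words they co-occur with, any bias in who or what is mentioned alongside a word in the training text ends up baked into these distances.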


Read more of this story at Slashdot.
