Nowadays, when thinking of Artificial Intelligence (AI), many of us first picture killer robots and the loss of tens of millions of jobs.
While it is still critical to consider how AI will impact the job market and people’s expertise, we also need to think about an even more important and currently active threat: algorithmic bias.
When developing a digital technology, a bias can be introduced into the algorithm that powers an application on your cell phone, for instance. This bias can come from the algorithm's explicit criteria or, in the case of machine learning, from the data on which the algorithm is calibrated or trained. As a result, technology discrimination appears: part of the users, those who are not accurately represented in the algorithm's criteria or training data, end up excluded.
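The training-data route described above can be illustrated with a deliberately simplified, hypothetical sketch: a trivial nearest-centroid "recognizer" is trained only on samples from one group, and users from an unrepresented group are then systematically rejected. The one-dimensional feature values and the group names are synthetic stand-ins, not any real system.

```python
import random

random.seed(0)

# Training data drawn only from group A (synthetic values clustered around 0.2).
train = [random.gauss(0.2, 0.05) for _ in range(100)]
centroid = sum(train) / len(train)

def matches(sample, threshold=0.15):
    """Accept a sample only if it lies close to the learned centroid."""
    return abs(sample - centroid) < threshold

# Group A users resemble the training data; group B users do not
# (their feature values cluster around 0.8 instead).
group_a = [random.gauss(0.2, 0.05) for _ in range(50)]
group_b = [random.gauss(0.8, 0.05) for _ in range(50)]

rate_a = sum(matches(x) for x in group_a) / len(group_a)
rate_b = sum(matches(x) for x in group_b) / len(group_b)
print(f"acceptance rate, group A: {rate_a:.0%}")
print(f"acceptance rate, group B: {rate_b:.0%}")
```

Nothing in the code singles out group B explicitly; the discrimination emerges solely from who was present in the training set, which is exactly why such biases are easy to introduce unnoticed.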
Some well-known past examples of technology discrimination can be cited, such as the first facial recognition algorithms that failed to recognize dark-skinned faces. The designers may not have considered dark skin tones when developing the technology, and/or the data on which the algorithm was trained may have contained only photos of light-skinned faces.
In this article, we explain what technology discrimination is, as well as what a bias is and where it comes from. In addition, we propose some key solutions worth investigating to avoid introducing biases into the technologies we build.