How to deal with bias in AI
Bias is an inevitable feature of life. But social bias can be projected and amplified in dangerous ways by artificial intelligence, whether it is deciding who gets a bank loan or who should be placed under surveillance. After reading an article on bias in artificial intelligence in The New York Times Magazine, I would like to share with you two different views on the subject – Daphne's and Olga's.
You could mean bias in the sense of racial bias or gender bias. For example, you do a search for C.E.O. on Google Images, and up come 50 images of white males and one image of C.E.O. Barbie. That's one aspect of bias.
Another notion of bias, one that is highly relevant to my work, is cases in which an algorithm latches onto something meaningless and could potentially give you very poor results. For example, imagine that you're trying to predict fractures from X-ray images in data from multiple hospitals. If you're not careful, the algorithm will learn to recognize which hospital generated the image. Some X-ray machines produce images with different characteristics than others, and some hospitals have a much larger percentage of fractures than others. So you could actually learn to predict fractures pretty well on the data set you were given simply by recognizing which hospital did the scan, without ever actually looking at the bone. The algorithm appears to be doing something good but is doing it for the wrong reasons. The causes are the same in the sense that the algorithm is latching onto things it shouldn't latch onto in making its prediction.
To recognize and address these situations, you have to make sure that you test the algorithm in a regime similar to how it will be used in the real world. So, if your machine-learning algorithm is trained on data from a given set of hospitals, and you will only use it in that same set of hospitals, then latching onto which hospital did the scan could well be a reasonable approach. It effectively lets the algorithm incorporate prior knowledge about the patient population in different hospitals. The problem arises if you're going to use that algorithm in the context of a hospital that wasn't in your data set to begin with. Then you're asking the algorithm to apply the biases it learned from its training hospitals to a hospital where those biases might be completely wrong.
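The hospital example above can be sketched in a few lines. This is a minimal, hypothetical simulation (synthetic data standing in for real X-rays): one feature encodes a machine artifact that effectively identifies the hospital, and a classifier that exploits it scores well on held-out data from the training hospitals but fails badly at a new hospital with a different fracture rate.

```python
# Hypothetical sketch of shortcut learning: the "artifact" feature encodes
# the hospital, not the bone, and the shortcut collapses out of distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_hospital(hospital_id, n, fracture_rate):
    """Synthetic 'X-ray features': column 0 is a machine artifact that
    effectively encodes the hospital; column 1 is uninformative noise."""
    X = np.column_stack([hospital_id + 0.1 * rng.standard_normal(n),
                         rng.standard_normal(n)])
    y = (rng.random(n) < fracture_rate).astype(int)
    return X, y

# Training hospitals: hospital 1 has a much higher fracture rate.
X0, y0 = make_hospital(0, 1000, fracture_rate=0.2)
X1, y1 = make_hospital(1, 1000, fracture_rate=0.8)
clf = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Held-out rows from the SAME hospitals: the shortcut looks accurate.
Xa, ya = make_hospital(0, 500, fracture_rate=0.2)
Xb, yb = make_hospital(1, 500, fracture_rate=0.8)
acc_same = clf.score(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# A NEW hospital whose artifact value is high but fracture rate is low:
# the learned "high artifact value => fracture" rule is now badly wrong.
X_new, y_new = make_hospital(2, 1000, fracture_rate=0.2)
acc_new = clf.score(X_new, y_new)

print(f"same-hospitals accuracy: {acc_same:.2f}")  # well above chance
print(f"new-hospital accuracy:   {acc_new:.2f}")   # far below chance
```

Evaluating on a hospital that was never in the training set is exactly the "test in the regime where it will be used" check described above; the in-distribution score alone would never reveal the shortcut.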
Overall, there is not nearly as much sophistication out there as there needs to be, given the level of rigor required to apply data science to real-world data, and especially biomedical data.
I believe there are three root causes of bias in artificial intelligence systems. The first one is bias in the data. People are starting to research methods to spot and mitigate bias in data. For categories like race and gender, the solution is to sample better so that you get better representation in the data sets. But you can have a balanced representation and still send very different messages. For example, women programmers are frequently depicted sitting next to a man in front of the computer, or with a man watching over their shoulder.
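The "sample better" idea can be illustrated with a small sketch. This is a hypothetical example (made-up labels, not a real data set) of one common rebalancing technique, oversampling: underrepresented categories are resampled with replacement until every category matches the largest one.

```python
# Hypothetical sketch: rebalance a skewed label distribution by oversampling.
import numpy as np

rng = np.random.default_rng(0)

labels = np.array(["male"] * 900 + ["female"] * 100)   # skewed raw data

# Group example indices by category, then resample each group with
# replacement up to the size of the largest category.
indices_by_label = {lab: np.flatnonzero(labels == lab)
                    for lab in sorted(set(labels))}
target = max(len(idx) for idx in indices_by_label.values())

balanced = np.concatenate([
    rng.choice(idx, size=target, replace=True)
    for idx in indices_by_label.values()
])

counts = {lab: int((labels[balanced] == lab).sum()) for lab in indices_by_label}
print(counts)   # each category now appears equally often
```

As the passage notes, this only equalizes counts: if the underrepresented category is depicted in a biased way within its images, balanced representation alone does not fix the message the data sends.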
I think of bias very broadly. Certainly gender and race and age are the easiest to study, but there are all sorts of angles. Our world is not fair. There’s no balanced representation of the world and so data will always have a lot of some categories and relatively little of others.
Going further, the second root cause of bias is in the algorithms themselves. Algorithms can amplify the bias in the data, so you have to be thoughtful about how you actually build these systems.
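One way to see how an algorithm can amplify bias in the data is a small synthetic experiment (hypothetical attribute and labels, not a real task): if a label agrees with some attribute 70% of the time in the training data, an accuracy-maximizing classifier learns to predict the majority label for each attribute value every time, pushing a 70% correlation in the data toward near-100% in its predictions.

```python
# Hypothetical sketch of bias amplification: a 70% correlation in the data
# becomes a near-deterministic rule in the model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
context = rng.integers(0, 2, n)                  # a binary attribute, e.g. image context
# The label agrees with the attribute 70% of the time in the raw data.
label = np.where(rng.random(n) < 0.7, context, 1 - context)

X = np.column_stack([context, rng.standard_normal(n)])   # attribute + noise feature
clf = LogisticRegression().fit(X, label)
pred = clf.predict(X)

data_rate = (label == context).mean()            # correlation in the data (~0.7)
pred_rate = (pred == context).mean()             # correlation in predictions (near 1.0)
print(f"data correlation: {data_rate:.2f}, predicted correlation: {pred_rate:.2f}")
```

The model is behaving "correctly" by its training objective, which is exactly why building these systems thoughtfully matters: the objective itself can turn a statistical tendency into a rule.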
This brings me to the third cause: human bias. A.I. researchers are primarily male, come from certain racial demographics, grew up in high socioeconomic areas, and are primarily people without disabilities. We're a fairly homogeneous population, so it's a challenge to think broadly about world issues. There are a lot of opportunities to diversify this pool, and as diversity grows, the A.I. systems themselves will become less biased.

Let me give one example illustrating all three sources. The ImageNet data set was curated in 2009 for object recognition, and it contains more than 14 million images. There are several things we are doing with an eye toward rebalancing this data set to better reflect the world at large. So far, we have gone through 2,200 categories to remove those that may be considered offensive. We're working on designing an interface to let the community flag additional categories or images as offensive, allowing everyone to have a voice in this system. We are also working to understand the impact that such changes would have on downstream computer vision models and algorithms.
I don’t think it’s possible to have an unbiased human, so I don’t see how we can build an unbiased A.I. system. But we can certainly do a lot better than we’re doing.
If you are serious about AI, and about ethical AI in particular, think about how you handle the bias issue in your own systems. I'm sure you have faced similar issues or thoughts during your projects; let's share and discuss.