Artificial Intelligence has always been known for its biases, and this time we found out how it shamelessly whitewashes people of color. For instance, if you put a pixelated image of Barack Obama, a black man, into an algorithm that turns it into a high-resolution picture, you will get a white man.
Source: Twitter
This discrimination is not just against Obama. If you run the same software on Congresswoman Alexandria Ocasio-Cortez or actress Lucy Liu, the resulting images are of white women. A Twitter user pointed out the AI bias, saying, “This image speaks volumes about the dangers of bias in AI.”
So what does this indicate about AI?
The program used to generate such images is called PULSE, which uses a technique known as upscaling to process visual data. PULSE builds on StyleGAN, the face-generation model developed by NVIDIA’s researchers that is responsible for the realistic human faces you see on internet sites like ThisPersonDoesNotExist.com.
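Roughly speaking, PULSE does not sharpen the pixelated photo directly. It searches the generator’s latent space for a plausible high-resolution face that, when scaled back down, matches the low-resolution input. The snippet below is only a minimal sketch of that idea: the generator here is a toy stand-in (not StyleGAN), and the names, sizes, and training loop are illustrative assumptions rather than the authors’ implementation.

```python
import torch
import torch.nn.functional as F


class ToyGenerator(torch.nn.Module):
    """Hypothetical stand-in for a pretrained face generator such as StyleGAN."""

    def __init__(self, latent_dim=64, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.fc = torch.nn.Linear(latent_dim, 3 * image_size * image_size)

    def forward(self, z):
        x = torch.sigmoid(self.fc(z))
        return x.view(-1, 3, self.image_size, self.image_size)


def upscale_by_latent_search(low_res, generator, latent_dim=64, steps=200, lr=0.05):
    """Search the generator's latent space for a high-res image whose
    downscaled version matches the pixelated input (the core PULSE idea)."""
    for p in generator.parameters():
        p.requires_grad_(False)  # only the latent code is optimized
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        high_res = generator(z)
        # Downscale the candidate and compare it to the low-res input.
        downscaled = F.interpolate(high_res, size=low_res.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss = F.mse_loss(downscaled, low_res)
        loss.backward()
        opt.step()
    return generator(z).detach()


# Example: a random 16x16 tensor stands in for the pixelated photo.
generator = ToyGenerator()
low_res = torch.rand(1, 3, 16, 16)
reconstruction = upscale_by_latent_search(low_res, generator)
print(reconstruction.shape)  # torch.Size([1, 3, 64, 64])
```

In the real system the search is also kept close to regions of latent space where the generator produces realistic faces, which is why a skew in its training data shows up in the upscaled outputs.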
Source: Twitter
The technology became popular, but critics pointed out that whenever it generates high-resolution pictures from pixelated inputs, the result is almost always a Caucasian face. “It does appear that PULSE is producing white faces much more frequently than faces of people of color,” the algorithm’s creators wrote on Github. “This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of.”
Source: PULSE
This issue is common in machine learning, and it tends to occur when the AI system has been trained mostly on white faces. The recent image of Obama has started a debate among AI academics, researchers, and engineers, especially in light of the recent BLM movement.
The faces were generated using “the same concept and the same StyleGAN model” but different search methods than PULSE’s, said artist Mario Klingemann, who argues it is not right to judge an algorithm from just a few samples. “There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally ‘correct,’” he tells The Verge.
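Klingemann’s point about many faces reducing to the same pixel pattern is easy to verify: downscaling throws information away, so it maps many distinct high-resolution images onto a single low-resolution one. Here is a tiny sketch with made-up pixel values (nothing to do with PULSE’s actual code):

```python
import torch
import torch.nn.functional as F

# Two visibly different 4x4 "high-res" images: a checkerboard of 0.25/0.75
# values and a flat 0.5 image.
checkerboard = torch.tensor([[0.25, 0.75], [0.75, 0.25]]).repeat(2, 2).view(1, 1, 4, 4)
flat = torch.full((1, 1, 4, 4), 0.5)

# 2x2 average pooling (a simple form of downscaling) collapses both to the
# exact same 2x2 "pixelated" image, so neither upscaling is more "correct".
print(torch.equal(F.avg_pool2d(checkerboard, 2), F.avg_pool2d(flat, 2)))  # True
```

Which of the many compatible high-resolution faces an algorithm lands on is therefore decided by the generator and the search method, not by the pixelated input alone.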
Deborah Raji, a researcher in AI accountability, told The Verge that this kind of bias is all too typical in the AI world. “Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems,” says Raji. “People of color are not outliers. We’re not ‘edge cases’ authors can just forget.”
Source: Twitter
So obviously, AI datasets need to do better. One researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm and summed up the argument quite accurately: “In case it needed to be said explicitly - This isn’t a call for ‘diversity’ in datasets or ‘improved accuracy’ in performance - it’s a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place.”