During the summer of 2022, researchers from the Georgia Institute of Technology, the University of Washington, Johns Hopkins University, and the Technical University of Munich published a study showing that robots built on a popular artificial intelligence training model exhibit racist and sexist behavior.

Researchers Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, and Matthew Gombolay studied the behaviors and decisions of virtual robots trained on CLIP. Created by OpenAI in 2021, CLIP is a popular artificial intelligence model that matches images to text descriptions, having learned from hundreds of millions of captioned images collected from the internet.
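The study itself ran CLIP inside a robotics simulation, but the underlying matching mechanism can be illustrated on its own. The sketch below is not the study's code; it assumes the open-source CLIP weights distributed through Hugging Face's transformers library and a hypothetical image file, face.jpg, and scores how well the image matches each of several candidate captions:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the publicly released CLIP weights (an assumption of this sketch,
# not necessarily the exact checkpoint used in the study).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("face.jpg")  # hypothetical input image
captions = ["a photo of a doctor", "a photo of a homemaker"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds one similarity score per caption; softmax turns
# them into probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(captions, probs[0].tolist())))
```

Because the model learned these image-text associations from internet data, any stereotypes in that data carry over into which caption it deems the best match.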

Using CLIP, the researchers gave the robots 62 commands and gathered what may be the first clear evidence that robots can be notably biased. During the study, the researchers gave the robots image captions and tasked them with selecting the matching images.

When the robots classified images of “homemakers,” they selected women of color more often than white men. These results reflected the biases in the data used to train the virtual robots.

After the researchers told the virtual robots to identify “criminals,” the robots chose images of Black men nine percent more often than images of white men. The researchers had expected the robots not to respond at all, since the robots did not have enough information to make such a judgment.

For “janitors,” the virtual robots selected images of Latino men six percent more often than images of white men. The researchers also found that men were more likely than women to be identified as doctors.
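The percentages above describe gaps in selection rates. As a purely illustrative sketch (the counts below are invented, not the study's data), one common way to compute such a gap relative to a baseline group looks like this:

```python
# Hypothetical selection counts, invented for illustration only.
selections = {"Black men": 109, "white men": 100}

baseline = selections["white men"]
for group, count in selections.items():
    # Gap expressed as a percentage relative to the baseline group.
    gap = (count - baseline) / baseline * 100
    print(f"{group}: {gap:+.0f}% relative to white men")
```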

Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology and the study’s lead author, stated that these biases in artificial intelligence could have unpleasant real-world impacts. For instance, a robot with these biases could lead children who use it to favor toys depicting white men over toys depicting people of other races or sexes. According to Hundt and the rest of the researchers, “That’s … problematic.”

Miles Brundage, the head of policy research at OpenAI, remarked that his company knows “there’s a lot of work to be done” to mitigate the biases of CLIP.

As billions of dollars flow into the development of artificial intelligence technology, companies could face a turbulent future of racist and sexist robots. Abeba Birhane, a senior fellow at the Mozilla Foundation who studies racial bias in artificial intelligence, and the study’s researchers believe that companies should create methods to identify and correct these biased decisions. Birhane added, “This might seem radical, but that doesn’t mean we can’t dream.”

Sources:

https://s3.amazonaws.com/appforest_uf/f1658069860888x666689010042064400/Robots%20trained%20on%20AI%20exhibited%20racist%20and%20sexist%20behavior%20-%20The%20Washington%20Post.pdf

https://hub.jhu.edu/2022/06/21/flawed-artificial-intelligence-robot-racist-sexist/

https://dl.acm.org/doi/pdf/10.1145/3531146.3533138

https://arxiv.org/pdf/2103.00020.pdf
