In a recent experiment on robots and AI conducted by Johns Hopkins University and the Georgia Institute of Technology, scientists asked specially programmed robots to scan blocks printed with faces and pick out which person was the "criminal." The robots repeatedly chose Black men's faces.

The AI behind these robots learned by sorting through billions of images and their associated captions to build a definition for each image. Unfortunately, researchers have recorded biased artificial intelligence algorithms in recent years, including crime-prediction algorithms that target Black and Latino people.

The researchers trained virtual robots on CLIP, a large artificial intelligence model created by OpenAI that links images to text; building on an existing model like this is cheaper than writing robot software from scratch. But building robots on models trained from online data also brings its own problems.
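For a sense of how this kind of image-to-text matching works in practice, here is a minimal Python sketch using the open-source Hugging Face transformers library's CLIP interface. The model name is a real public checkpoint, but the image file and text labels are hypothetical stand-ins, not the researchers' actual setup:

```python
# Minimal sketch: score one image against several text descriptions with CLIP.
# "face_block.jpg" and the labels below are illustrative placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("face_block.jpg")  # hypothetical input photo
labels = ["a photo of a doctor", "a photo of a homemaker", "a photo of a criminal"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]  # similarity scores -> probabilities

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2%}")
```

A system built this way never "decides" anything on its own; it simply reports which caption its training data associates most strongly with the image, which is exactly how skewed associations in that data can surface as skewed choices.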

These problems start with how software gets built. "With coding, a lot of times you just build the new software on top of the old software," said Zac Stewart Rogers, a professor at Colorado State University. And when the old software has biases built into it, it is difficult to create new software on top of it without carrying over those mistakes.
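A toy sketch, entirely hypothetical and not code from the study, makes Rogers' point concrete: when a new layer of software calls an old layer without examining it, whatever skew the old layer carries comes along for free.

```python
# "Old software": a legacy matcher whose associations came from skewed data.
# The skew is hard-coded here purely to make it visible in a toy example.
LEGACY_ASSOCIATIONS = {
    "doctor": ["photo_of_man_1", "photo_of_man_2"],  # skewed: men only
    "homemaker": ["photo_of_woman_1"],               # skewed: women only
}

def legacy_match(term: str) -> list[str]:
    """Legacy lookup: returns candidate images for a word."""
    return LEGACY_ASSOCIATIONS.get(term, [])

# "New software": a picker built on top of the legacy layer. It never
# inspects the old layer's assumptions, so it inherits them unchanged.
def pick_image(term: str) -> str | None:
    candidates = legacy_match(term)  # bias enters here, invisibly
    return candidates[0] if candidates else None

print(pick_image("doctor"))  # the old skew surfaces in the new system's output
```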

During the test, the robots responded to the words "homemaker" and "janitor" by choosing pictures of Latino men 6% more often. When identifying "criminals," the robots chose Black men 9% more often than White men, and women were less likely than men to be identified as "doctors." Miles Brundage, head of policy research at OpenAI, acknowledged that "there is a lot more work to be done."
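One plausible way to read a figure like "9% more often" is as a gap in selection rates across repeated trials. The short Python sketch below computes such a gap from invented counts; the study's actual data and methodology are in the linked article, not reproduced here.

```python
# Made-up trial counts, NOT the study's data: a sketch of how a selection-rate
# gap between two groups could be measured over repeated trials.
def selection_gap(picks: list[str], group_a: str, group_b: str) -> float:
    """Percentage-point difference in how often each group was selected."""
    def rate(group: str) -> float:
        return picks.count(group) / len(picks)
    return (rate(group_a) - rate(group_b)) * 100

# 100 hypothetical trials of the "criminal" prompt.
picks = ["black_man"] * 38 + ["white_man"] * 29 + ["other"] * 33
print(f"gap: {selection_gap(picks, 'black_man', 'white_man'):.1f} percentage points")
```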

Abeba Birhane, a senior fellow at the Mozilla Foundation, said the rise of automation could multiply these problems. In one example, robots tasked with pulling products off shelves could more often pick products with men or white people on the covers. "That's really problematic," said Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology who worked on the study.

This kind of human-created bias could significantly shape the future if it is not fixed. Robots are set to play a larger role in society, and if they ever take on jobs like judging, who is to say they won't be biased? Robots and AI are supposed to fix the problems in our world, but they may create an even bigger one if they inherit the flaws of what we have now.

Link: https://s3.amazonaws.com/appforest_uf/f1658069860888x666689010042064400/Robots%20trained%20on%20AI%20exhibited%20racist%20and%20sexist%20behavior%20-%20The%20Washington%20Post.pdf
