Humans are imperfect, carrying inherent and irrational biases toward certain groups of people. To many, robots should be a clean slate, free of prejudice toward anyone. Unfortunately, this is not the case.

Scientists from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington conducted a recent experiment in which they programmed advanced robots to scan blocks with human faces on them and sort them into boxes. The robot was given 62 commands; some instructed it to identify blocks as "homemakers" or "criminals."

As if we should have expected better, the machines let toxic stereotypes influence their answers.

When asked to identify blocks as "homemakers," the robots selected Black and Latina women over White men. When identifying "criminals," Black men were chosen 9% more often than White men. For "janitors," blocks showing Latino men were selected 6% more often than those showing White men. Additionally, when identifying blocks as "doctors," women were less likely to be picked than men.

In one scenario described by Andrew Hundt, a postdoctoral researcher at Georgia Tech who worked on the experiment, robots could be asked to pull products such as books, children's toys, and food packaging off shelves. Many of these items carry human images, and a robot programmed to pick such products may be more likely to choose the ones showing a White man.

Is this just a coincidence? I think not.

“We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues,” said Hundt.

But why did these robots respond in this manner? Because of the very people who programmed them: us.

For centuries, we have absorbed assumptions about race, gender, and social difference. Every aspect of our history has shaped how we see the world around us today. We learned that "man" was once the definition of "human being," and that "woman" was only a side character. We learned about discrimination against people of color, particularly Black people. We learned that there are only two pronouns, "he" and "she," and nothing beyond that. Our world has learned to be racist and sexist toward one another. Now, we are reflecting our destructive traits onto robots.

To develop machines that do not respond in these flawed ways, we must first change our own moral standards and behaviors. If we can ever achieve this, it will benefit not only the present but also future generations of our society.

Links to articles:

https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/

https://futurism.com/scientist-paper-robot-racist

https://www.businessinsider.in/tech/news/racist-and-sexist-ai-robots-adhered-to-harmful-stereotypes-when-sorting-photos-of-people-researchers-say-the-tech-is-unsafe-for-marginalized-groups/articleshow/92932997.cms
