
Instructions:  Conduct research about a recent current event using credible sources. Then, compile what you’ve learned to write your own hard or soft news article. Minimum: 250 words. Feel free to do outside research to support your claims.  Remember to: be objective, include a lead that answers the...

As part of a recent experiment, scientists asked specially programmed robots to scan blocks showing people’s faces, then put the “criminal” block in a box. The robots repeatedly chose a block with the face of an African American man. The virtual robots, programmed with a popular artificial intelligence algorithm, sorted through billions of images and associated captions to respond to that prompt and others like it, and may represent the first evidence that robots can be sexist and racist, according to researchers. Repeatedly, the robots responded to keywords like “homemaker” and “janitor” by choosing blocks with the faces of women and people of color.

The influential study, released a month ago in June 2022, was conducted by researchers at well-known institutions, including Johns Hopkins University and the Georgia Institute of Technology.

The researchers argued that the racist and sexist biases making their way into artificial intelligence systems could transfer into the robots that use those systems to guide their operations.

Companies have been pouring billions of dollars into developing more robots to replace humans in physical tasks such as stocking shelves, delivering goods, or even caring for hospital patients. Experts describe the current moment in robotics as something like a gold rush, heightened by the pandemic and the resulting labor shortage. However, tech ethicists and researchers warn that the rapid adoption of new technology could bring heavy unforeseen consequences as that technology becomes more and more advanced.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor at Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers have documented multiple cases of biased artificial intelligence algorithms in recent years. Those include crime prediction algorithms that unfairly target African American and Latino people for crimes they did not commit, as well as facial recognition systems that struggle to accurately identify people of color.

Link: https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/
