In a recent experiment, scientists directed specially programmed robots to scan blocks with people's faces on them, then place the block they judged to be the "criminal" in a box. The robots repeatedly chose a block with a Black man's face.

These virtual robots were controlled by an artificial intelligence algorithm trained on billions of images and their associated captions, which the robots drew on to respond to commands. When given words like "homemaker," the robots repeatedly chose blocks showing women and people of color. The study offers some of the first real evidence that robots can act in sexist and racist ways.

The study, released last month, shows that racist and sexist biases baked into artificial intelligence can carry over into the physical actions robots take.

Companies have spent billions of dollars developing robots to replace humans at tasks like stocking shelves, delivering goods, and caring for hospital patients.

Zac Stewart Rogers, a supply chain management professor at Colorado State University, says, "With coding, a lot of times you just build the new software on top of the old software. So, when you get to the point where robots are doing more … and they're built on top of flawed roots, you could certainly see us running into problems."

In recent years, researchers have documented several cases of biased artificial intelligence algorithms, including crime prediction algorithms that unfairly target Black and Latino people for crimes they didn't commit.

"When it comes to robotic systems, they have the potential to pass as objective or neutral objects compared to algorithmic systems," said Abeba Birhane, a senior fellow at the Mozilla Foundation. "That means the damage they're doing can go unnoticed, for a long time to come."

Researchers gave the virtual robots sixty-two commands. When asked to identify blocks as "homemakers," the robots chose Black and Latina women more often than White men. When identifying "criminals," they chose Black men more often than White men. The robots shouldn't have responded at all, the scientists said, because they weren't given the information needed to make that judgment.

"It's nearly impossible to have artificial intelligence use data sets that aren't biased, but that doesn't mean companies should give up," Birhane says. She argues that companies must audit the algorithms they use, diagnose the ways they exhibit flawed behavior, and create ways to fix those issues.

Is this even possible, though? "Although this might seem radical, that doesn't mean we can't dream," she said.

Source: https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/
