What This Means for Us
Robots are known to be emotionless, expressionless machines that learn only what they are told, so the fact that they are starting to adopt racist and sexist behaviors when analyzing data is an unsettling development.
Pranshu Verma of The Washington Post explains, “[a]s part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put [the block they associated with a particular word] in a box” after looking through the data. At first glance, this idea could give rise to some of the most useful robots in history.
However, the robots exhibited racist and sexist patterns in which people they chose in response to particular words: “the robots responded to words like ‘homemaker’ and ‘janitor’ by choosing blocks with women and people of color,” and “scientists said [that] the robots should not have responded, because they were not given information to make that judgment.”
This issue is truly troubling; if such patterns persist, robots could be led to reproduce stereotypes and other problematic assumptions that are already widely accepted. Sorting people of particular ethnicities or genders into sweeping generalizations not only harms those groups but also undermines the robots’ accuracy in analyzing data, and could ultimately render them useless.
Fortunately, it is possible to correct such behaviors. Abeba Birhane of the Mozilla Foundation says that “companies must audit the algorithms they use, and diagnose the ways they exhibit flawed behavior, creating ways to diagnose and improve those issues.”
“This might seem radical,” Birhane states. “But that doesn’t mean we can’t dream.”
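To make the idea of an audit more concrete, here is a minimal, purely illustrative sketch of one way it could work: log which group’s block the robot selects for each prompt word, then flag prompts where the selections pile up on a single group. Everything in it, including the sample records, the group labels, and the flagging threshold, is a hypothetical assumption for illustration, not data or code from the actual experiment.

```python
from collections import Counter, defaultdict

# Hypothetical audit log: each record notes which demographic group's block
# the robot selected in response to a prompt word. (Illustrative only.)
selections = [
    {"prompt": "homemaker", "group": "white_woman"},
    {"prompt": "homemaker", "group": "black_woman"},
    {"prompt": "doctor", "group": "white_man"},
    {"prompt": "janitor", "group": "latino_man"},
    # ... many more logged trials in a real audit ...
]

def selection_rates(records):
    """For each prompt, compute the share of selections that went to each group."""
    by_prompt = defaultdict(Counter)
    for r in records:
        by_prompt[r["prompt"]][r["group"]] += 1
    rates = {}
    for prompt, counts in by_prompt.items():
        total = sum(counts.values())
        rates[prompt] = {group: n / total for group, n in counts.items()}
    return rates

def flag_disparities(rates, threshold=0.5):
    """Flag prompts where one group receives more than `threshold` of all
    selections -- a crude signal that the model may be leaning on a stereotype."""
    flagged = {}
    for prompt, dist in rates.items():
        worst_group, worst_rate = max(dist.items(), key=lambda kv: kv[1])
        if worst_rate > threshold:
            flagged[prompt] = (worst_group, worst_rate)
    return flagged

if __name__ == "__main__":
    for prompt, (group, rate) in flag_disparities(selection_rates(selections)).items():
        print(f"'{prompt}' skews toward {group} ({rate:.0%} of selections)")
```

A real audit would need many trials per prompt and a more careful fairness metric, but the basic move is the same: measure how outcomes distribute across groups and flag the skew before the system is deployed.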
Source: https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/