Understanding Potentially Biased Artificial Agents Powered by Supervised Learning: Perspectives from Cognitive Psychology and Cognitive Neuroscience

Despite being machines, many artificial agents make biased decisions, much as humans do. The present article discusses when a machine learning system learns to make biased decisions and how to understand its potentially biased decision-making processes using methods developed by, or inspired by, cognitive psychology and cognitive neuroscience. Specifically, we explain how the inductive nature of supervised machine learning leads to nontransparent decision biases, such as a relative ignorance of minority groups. By treating an artificial agent like a human research participant, we then review how to apply neural and behavioral methods from the cognitive sciences, such as brain ablation and image occlusion, to reveal the decision criteria and tendencies of an artificial agent. Finally, we discuss the social implications of biased artificial agents and encourage cognitive scientists to join the movement of uncovering and correcting machine biases.
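The image-occlusion probe mentioned above can be illustrated with a minimal sketch: slide an occluding patch across an input image, record how much the agent's confidence drops at each location, and read the resulting heat map as the regions the agent relies on. The `toy_model` below is a hypothetical stand-in for a trained classifier (its confidence is simply the mean brightness of the top-left quadrant), chosen only so the example is self-contained; the probing loop itself is model-agnostic.

```python
import numpy as np

def occlusion_map(model, image, patch=4, fill=0.0):
    """Slide a patch-by-patch occluder over the image and record the
    drop in the model's confidence at each position.  Larger drops
    mark regions the model depends on for its decision."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i, j] = base - model(occluded)  # confidence drop
    return heat

# Hypothetical stand-in for a trained agent: confidence equals the
# mean brightness of the top-left 4x4 quadrant, so occluding that
# region should produce the largest drop.
def toy_model(img):
    return img[:4, :4].mean()

img = np.ones((8, 8))
heat = occlusion_map(toy_model, img)
```

Here the probe treats the model purely as a black box, querying it the way an experimenter queries a participant, which is exactly the stance the abstract advocates.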

©2017 by Chinese Journal of Psychology 中華心理學刊.
