Monday, 25 December 2017

The "uncanny valley" effect might be a learned behavior

The "uncanny valley" is the well-known propensity for us to get super creeped out by robots that look nearly, but not completely, human. The hypothesis, for years, is that uncanny imagery unsettles us because the not-quite-there humanoid faces trigger our unconscious instinct to avoid people with horrible, infectious diseases.

But maybe there's more going on here. Kimberly Brink and her colleagues recently conducted an experiment that complicates things in an interesting way. They took 240 children, ages 3 to 18, and showed them videos of three different robots, ranging from a classically uncanny, humanoid-style one to cutesy nonhuman bots along the lines of WALL-E. They probed the kids' reactions -- including, crucially, whether the robots creeped them out.

The result? Kids younger than 9 didn't find the humanoid bot unsettling -- but kids older than 9 did. This suggests that our sense of the uncanny valley is a learned behavior, not something purely innate. Interesting!

More provocative yet is Brink's second finding: what chiefly disturbed the older kids was the idea that the robots had a distinct mind. The kids didn't like that. The younger kids, however, did. As Brink writes:

Children's attributions of mind to the robots affect how children feel. The younger children preferred a robot when they believed the robot could think and make decisions. For them, the more mind the better. This is in contrast to adults and older children for whom the more robots seemed to have minds (and especially minds that could produce and house human-like feelings and thoughts) the creepier that made them. For adults and older children, a machine-like mind is fine, but a robot with a human-like one is out of bounds. Perceived creepiness is related to a perceived mind. This finding also highlights the importance of thinking developmentally.

I wonder if this will have implications for how designers of consumer AI fashion their bots. Up until now, I've assumed that what I principally find disturbing about Alexa-like voicebots is the idea of corporate hardware that perpetually eavesdrops on your conversations. And I'm sure that's part of it! But maybe what's also triggering me is the attempt by Amazon, Google, Microsoft et al. to create the sense of an intentional mind in these everyday AIs.

The paper by Brink and her colleagues -- "Creepiness Creeps In: Uncanny Valley Feelings Are Acquired in Childhood" -- is here, albeit paywalled.