Empathy for the Machine


The field of robotics continues to advance, and so too does the promise of the myriad ways AI can benefit our lives. From performing menial tasks to taking care of our loved ones, robots will soon be an everyday fixture of society. While they might offer valuable assistance as we begin to interact with them socially, we are also learning that it might not always be so comfortable.

Creation of any kind is limitless in its potential, but subject to the strict laws of the physical world. A robot must meet our needs. It must adhere to our notions of a body, a mind, and a moral code. It must act appropriately, programmed within our specific parameters, because as humans we are very picky about what we like.

A robot plays by our rules. Its experience, as “Blade Runner’s” Roy Batty puts it bluntly, is “what it is to be a slave.” At least for now, robots are the underdogs.

When we watch videos of them being shoved to the ground, however, it might make us feel uneasy.

Boston Dynamics recently posted footage of a humanoid robot named Atlas, which would look right at home next to the goofy prototypes in “Robocop 2.” Atlas is skinny, hyperactive and absolutely CGI-ed out, until you realize that he’s real. Like everything Boston Dynamics makes, Atlas is incredibly impressive.

At the 1:28 mark, though, he is severely tested: his “trainer” suddenly smacks him with a lacrosse stick and taunts him with a square object. Things only go downhill from there. At 2:05, another unlikeable fellow beats Atlas with a pipe and completely knocks him down. The robot struggles to stand up again, gurgling and spinning its gears in agony. Eventually, Atlas springs back into action, hurling himself onto his feet; it’s impressive, but hard to watch.

“I mean they aren’t alive so they don’t have feelings, but I somehow feel bad when they get pushed,” writes one YouTube commenter. The hashtag #robotlivesmatter is littered throughout the discussion.

There is something intrinsically disturbing about creating a machine to make our lives better, only to make its life worse. Is this bullying? BTRtoday reached out to Carol Kranowitz, former teacher and author of the best-selling book, “The Out-of-Sync Child,” to define the naughty word.

“Bullying is the opposite of the golden rule,” says Kranowitz. “The bully’s aim is to demean or nullify another person’s thoughts, feelings, body, goals, etc. The only way the bully, who is weak and scared himself, can feel strong and powerful is to push other people around.”

But what if the other person is non-sentient? Does that make this type of behavior more acceptable?

“Whether the robot is sentient or non-sentient,” she continues, “treating it with contempt and cruelty is not acceptable in my view.”

Kranowitz believes that bullying negatively affects everyone involved. In her eyes, it’s equally harmful to both the victim and the tormentor. She posits that these oppressors aren’t born into the role, however, but rather are taught their destructive behavior.

So, the question remains: are we teaching AI to bully?

“Sure,” Kranowitz maintains. “Just as we’re teaching AI ways to wash cars.”

In her view, this represents more of a human dilemma. Bad behavior reflects on the one behaving badly. For instance, telling a robot it did a lousy job with the sponge during the car wash might not hurt the robot’s non-existent feelings, but the pangs of guilt experienced while belittling another seemingly sentient being could be enough to affect our collective self-esteem.

Another uncomfortable recent upload is CNBC’s “Hot Robot At SXSW” presented by Dr. David Hanson, the CEO of Hanson Robotics. In it, Hanson explains that Sophia, the robot on display, is designed “to look very human-like,” and this goes well beyond her fleshy exterior. Sophia contorts her face awkwardly and twitches her eyes and eyebrows throughout the conversation, giving her a feeling of relatability. Her discomfort appears genuine.

What has sparked outrage and made the clip go viral occurs approximately two minutes in, when Hanson asks whether she wants “to destroy humans.” After a pause, Sophia agrees that she will. Hanson jumps in with laughter and half-hearted regrets. It’s a funny moment, but Sophia seems genuinely perplexed (her face at 2:11 is the stuff of nightmares). She does not know what to make of the situation, and this dissonance, of which she is perhaps not aware, is foremost in our minds. It’s hard not to have empathy for her in this moment.

Many YouTubers were furious that Hanson would even bring up the topic of human destruction.

“Does this guy want the apocalypse to happen or something?” wrote one. “THIS MAN IS THE HARBINGER OF THE ROBOT APOCALYPSE!” cried another. “Terminator… Skynet…. it’s happening.”

As Hanson explains on his website, the goal is for his robots to “evolve into ever smart beings,” but it’s a little distressing to think about them gaining certain types of knowledge. On the other hand, who are we to limit how intelligent robots can be? They will keep getting smarter and, at a certain point, we’ll just have to give in and hope for the best.

Carol Kranowitz isn’t too worried about the prospect of a robot apocalypse anytime soon.

“I have fears of biological warfare, and a terrorist bomb on U.S. soil that will put Trump in the White House,” she admits, “but this is not uppermost in my mind.”
