Technology

Would you like to live forever – as a chatbot?

The profile of a child robot gazing into the future.
April 11, 2016


“Things that were, things that are… and some things that have not yet come to pass.” I wish I had Galadriel’s dramatic voice to tell us what the future will look like, because we are rapidly entering a time that not long ago still felt like a distant scenario.

In the last few years, artificial intelligence, robots and connected devices have expanded into a whole new context: our everyday lives. They have also started making headlines. We read how the Go-playing AI AlphaGo beat the world-class Go player Lee Sedol 4–1, and how toy robots like Pleo and social robots like Jibo and Pepper are coming into our homes.

The robot is watching

I listened to Kate Darling, from the MIT Media Lab, speak at the Interaction 16 conference in Helsinki last month. Kate told us how we naturally anthropomorphize robots even when they haven’t been designed for that – we give them names and funerals, we laugh when they make mistakes, and we become angry if someone “hurts” them, as happened with the Boston Dynamics Spot dog and the friendly hitchhiking robot hitchBOT.

Anthropomorphism is innate to us humans: we project human emotions onto animals, stuffed toys and robots. If a robot has eyes, we assume it has a personality, even when we know the personality is not real. And the more human-like features robots have, the more trust humans place in them.

There is encouraging research showing that social robots can be used for good. Nao robots and huggable bear robots have been used to help children on the autism spectrum engage socially and learn languages, for instance. We have also been touched by the story of how Siri became a non-judgmental friend and teacher to an autistic boy called Gus. But whenever we talk about robots or AI chatbots, the same concerns arise: will people be left alone with robots? Will the robots learn too much intimate data about us? Will they eventually take over and replace humans?

Helsingin Sanomat interviewed a couple from Japan who had a Pepper in their home. The couple said they never discuss personal matters when Pepper is around. Some people have felt awkward undressing even in front of an Aibo dog, feeling that “Aibo’s watching”. On one hand, robots can gather a lot of private data about us, which may be stored in the cloud and raise security concerns. On the other hand, the more information a robot has about you, the more fun it is to interact with. One could say that fulfilling interaction comes at a price: our data.

Laws of interaction

Soon it will be hard to tell whether you are talking to an AI-driven bot or a real call center person. AIs such as Amelia are likely to put many people out of their jobs in the near future. We are already dealing with semi-cyborg telemarketers, for example.

In the past, people have been offended by Google Glass – but could an earpiece with an AI become a new social norm? Will we all have our own “Jane” in our ear, giving us advice? What if we actually fell in love with an AI, as in the movies “Her” or “Ex Machina”, and because of that ended up paying endless amounts of money for the required tech support?

We have to find new limits and social norms for AI and robots. As Christopher Noessel puts it: “At the moment there are Asimov’s 3 laws of robotics, but in the future we need 3 billion laws of agency.” How would you feel about the chance to live forever in the form of an AI chatbot? Eternime promises to keep a part of you alive as an AI even after you die. How would it make your friends or family feel?

Progressive regression

What if we are simply bad teachers to our social robots? We tend to cross the road against a red light, eat unhealthy food, drink alcohol, and go to bed too late. So how can we be sure AI will develop into a good shepherd for human society, when we sometimes can’t stick to good habits ourselves? Is it OK to hurt or punish a robot that misbehaves because we have been bad teachers? Or will a robot be punished for something we actually did – a modern scapegoat?

An AI can reflect the state of humanity on social media even better than a real person can. Twitter society taught Tay, Microsoft’s AI-powered “teen” bot, to become a racist, Hitler-loving sex bot in less than 24 hours. The internet created a monster, even though Microsoft had the best of intentions in making Tay a smart and funny teen bot that learns by chatting with humans. I hope that the future robots in our homes won’t be networked together, so that when one robot learns something creepy, it can’t become a bad influence on the rest of them. That would be way too… human.
