It was a Tuesday night at New York’s People’s Improv Theater, otherwise known as the PIT, the improv incubator that has launched comedians like Hannibal Buress and Kristen Schaal. The audience was filled with tech folks. Improv and tech don’t often intersect, but Google engineer Brandon Diamond’s new show Comedybots, in which engineers build robots to participate in improv, had pulled the tech people out of their dimly lit coding caves and into the PIT.
The team performed two montages, improv-speak for scenes based on an audience suggestion, and a life-size, silver, goggly-eyed robot performed along with them. It occasionally fell off the robotic platform it was wheeled in on, but that just made it funnier.
“The next level of human-robot interfacing is relatability,” Diamond told The Daily Beast.
Research shows that people tend to be afraid of robots, mainly out of concern for a future where robots occupy positions that once belonged to people. The latest McKinsey report, all doom and gloom, warned that robots could put 160 million women out of a job if they don’t re-skill as soon as possible. One Baylor University study found that American society has a growing population of technophobes, people afraid of one day losing their jobs to robots.
Some scientists believe that if robots can master conversational humor and speak like the people they serve, humans may come to accept them as part of daily life. But is that even scientifically possible?
Diamond isn’t the only one wondering if he and his robot will ever be able to trade jokes. In Augsburg, Germany, a team of researchers created Irony Man, a robot capable of rolling its eyes and saying things like “I absolutely love my worst enemy.” Irony Man exists because Hannes Ritschel and his team know that our interpersonal conversations are chock-full of irony. If people could talk to their robots the same way they talk to each other, they might end up embracing the technology.
“Irony makes robots appear more socially intelligent and attractive to humans,” said Hannes Ritschel, one of the Irony Man study authors.
Irony Man is a white Reeti brand robot retrofitted with natural language processing software and multimodal irony markers. The big eyes and short, stumpy body make it look like a pale, naked Yoda. A user will say something (“I love apples”) and Irony Man’s software will scan the statement for polarizing words such as love and hate, two of the most common. In this case, that word is love. Irony Man will respond, changing the polarizing word to its opposite: “I hate apples!”
After the language processing, the next step is adding markers to help users detect the presence of irony. “When humans speak ironically, their tone and facial expression changes,” said Ritschel. So Irony Man’s voice and body language change too. “I hate apples,” Irony Man says. Then he winks.
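The polarity flip Ritschel describes can be sketched in a few lines. This is a minimal illustration, not Irony Man’s actual software: the antonym table, the wink marker, and the `ironic_reply` function are all hypothetical stand-ins for the real natural language processing pipeline.

```python
# A minimal sketch of the polarity-flip step described above.
# The word list and marker are illustrative; the real system uses
# full natural language processing, not a lookup table.

# Hypothetical map of common polarizing words to their opposites.
ANTONYMS = {
    "love": "hate",
    "hate": "love",
    "best": "worst",
    "worst": "best",
}

def ironic_reply(utterance):
    """Flip the first polarizing word and append an irony marker.

    Returns None if no polarizing word is found, in which case a
    real system would fall back to a literal reply.
    """
    words = utterance.lower().rstrip(".!?").split()
    for i, word in enumerate(words):
        if word in ANTONYMS:
            words[i] = ANTONYMS[word]
            # The marker stands in for the wink and pitch change
            # that accompany the spoken reply.
            return " ".join(words).capitalize() + "! *winks*"
    return None

print(ironic_reply("I love apples"))  # → I hate apples! *winks*
```

Note that without the context screening Ritschel describes, a flip like this fires on any sentence containing a polarizing word, which is exactly how a robot ends up telling a crying user “I love tears.”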
The study, which involved 12 participants, found that seven preferred the ironic robot, four disliked it, and one was undecided. Ritschel hopes to repeat the study with a larger sample size, and next time, he wants to work on screening for robot impropriety.
“If we can create an algorithm that can detect context we’ll have more control over how the robot uses irony,” he said. This would help avoid situations where users are crying and the robot is saying something like “I love tears,” while smiling. “Ideally robots should have a unique conversational style that adapts over time to suit the user’s needs,” said Ritschel. While Irony Man is far from achieving this, it’s a start.
Linguists have tried to teach computers humor for years, with limited success. There’s a growing academic field dedicated to robot humor, because scientists still don’t know what artificial intelligence can be trained to understand. In a 2008 study, participants listened to jokes told through a text interface and a robot, and found the jokes significantly funnier coming from the robot, perhaps because of the improbability of it. In 2010, then-Senator John McCain mocked Northwestern professor Kris Hammond for using National Science Foundation funding toward a project on computer-generated humor. “I want to build machines that are just like us,” Hammond, who is still working on automated comedy, told The Daily Beast. And what’s more human than a machine that can crack a joke?
In a 2016 study, scientists discovered a connection between people’s personality types and their preferred type of robot humor. Neurotic people liked when the robots were self-ironic, and more open people liked displays of schadenfreude from the robot. And in 2018, there was robot stand-up comedy, in which robo-thespians stood on stage and performed from a database of prewritten jokes.
Ironic robot humor is already a part of many people’s online environments. See Twitter accounts @headlinertron, in which an AI was fed hours of standup and then asked to make jokes, or @sarcasticrover, a more acerbic version of the now-deceased Opportunity rover. (Jerry Seinfeld called these computer-written jokes “not that bad.”)
But some are skeptical. “Humor requires context, knowledge of culture, timing,” said Qiang Ji, an electrical engineering professor at Rensselaer Polytechnic Institute in Troy, New York.
Theoretically, teaching a robot humor would be an expensive, time-consuming endeavor. To start, one would need a massive amount of data encompassing all sorts of humor from canned to conversational. Then people would need to be hired to go through the data and annotate it, declaring it funny or not funny, which is also tricky, given how subjective humor is. Then the robot would have a database to learn from.
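In miniature, that annotation step might look like the sketch below. Everything here is illustrative: a real corpus would need millions of examples, careful rater guidelines, and a model trained on the result. The `majority_label` helper is a hypothetical way of resolving the subjectivity problem, by majority vote among several raters.

```python
from collections import Counter

# Hypothetical annotations: each candidate joke rated by three people.
# Because humor is subjective, raters are expected to disagree.
annotations = [
    ("Why did the robot cross the road?", ["funny", "not_funny", "funny"]),
    ("Error 404: humor not found.",       ["funny", "funny", "funny"]),
    ("The weather is nice today.",        ["not_funny", "not_funny", "funny"]),
]

def majority_label(ratings):
    """Resolve rater disagreement by taking the most common rating."""
    return Counter(ratings).most_common(1)[0][0]

# The resulting labeled database a robot could then learn from.
dataset = [(text, majority_label(ratings)) for text, ratings in annotations]
for text, label in dataset:
    print(f"{label:9s} {text}")
```

The expensive parts, as Ji notes, are everything around this sketch: gathering enough examples of every kind of humor, and paying people to label them all.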
Ji estimates that we’re about 10 years away from a truly funny robot. “There will have to be another AI revolution to teach robot humor,” he said. Certain tricks can be programmed into a robot, as Ritschel’s team did with Irony Man, like rolling the eyes and changing vocal pitch. “But that’s not humor,” Ji said. “That’s hardware.”
And when we get our funny robots, will we be ready? “From the standpoint of AI ethics, robot humor is downright dangerous,” said artificial intelligence expert Selmer Bringsjord. In order to have a funny robot, there first needs to be an ethical one, to make sure the robot doesn’t offend. In Norway, for example, jokes about Swedes being dimwitted are par for the course. But for a robot to make that joke at a business meeting where Swedes are in attendance would be an obvious mistake, said Bringsjord. Funny robots would need an overarching code of ethics that applies to all of them, beyond Isaac Asimov’s Three Laws.
But lack of context isn’t the only problem scientists envision. “From a utilitarian point of view one can recognize that humor is extremely valuable in defusing conflict. There’s no reason we shouldn’t arm robots with this capacity,” said Bringsjord. “But robots catalyzing positive emotions in humans is a form of deception. It allows the humans involved to feel like the robots have emotions when the engineers and designers know darn well they don’t.” This is becoming a global issue: in Japan, where robots serve as nannies and caretakers for the elderly, there is little concern about such deception, while the European Parliament adopted a resolution on civil-law rules for robotics in 2017, and the U.S. falls somewhere in the middle.