Why Robot Humor Mostly Falls Flat

In this image released on Monday, Aug. 6, 2018, Sophia, a humanoid robot developed by Hanson Robotics, will welcome visitors to the new.New Festival in Stuttgart, taking place October 8-10 at the Hanns-Martin-Schleyer-Halle.

Robots are increasingly being developed to think and act like humans. But one common human quality that has been difficult for engineers to recreate in machines is humor.

Most robots are powered by artificial intelligence, or AI, and machine learning technology. Some have performed better than humans in tests designed to measure machine intelligence.

For example, we have reported on experiments involving robots competing against humans in a reading test and in a live debate.

Computer scientists have also hoped to give robots technical skills to help them recognize, process and react to humor. But these attempts have mostly failed. AI experts say that in many cases, attempts to make robots understand humor end up producing funny results – but not in the way they were supposed to.

Dan Zafrir, left, and Noa Ovadia, right, prepare for their debate against the IBM Project Debater in San Francisco, June 18, 2018.

Context is everything

Kiki Hempelmann is a computational language expert who studies humor at Texas A&M University-Commerce. “Artificial intelligence will never get jokes like humans do,” he told the Associated Press. The main problem, Hempelmann says, is that robots completely miss the context of humor. In other words, they do not understand the situation or related ideas that make a joke funny.

Other experts who study the subject agree that context is very important to understanding humor – both for humans and for robots.

Tristan Miller is a computer scientist and linguist at Darmstadt University of Technology in Germany. He also spoke to the AP. In one research project, he studied more than 10,000 puns.

Puns are a kind of joke that uses a word with two meanings. For example, you could say, “Balloons do not like pop music.” The word “pop” can mean popular music, or it can be the sound a balloon makes when it explodes.

HSBC Bank welcomes SoftBank Robotics' humanoid robot, Pepper, to their team at the Fifth Ave branch on Monday, June 25, 2018 in New York. (Mark Von Holden/AP Images for HSBC)

But a robot might not get the joke. Tristan Miller says that is because humor is a kind of creative language that is extremely difficult for computer intelligence to understand.

“It’s because it relies so much on real-world knowledge,” Miller said. This includes background knowledge and common-sense knowledge. “A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from,” he added.


Humor is job security for humans?

Allison Bishop is a computer scientist at New York’s Columbia University. She also performs stand-up comedy. She told the AP a big problem is that machines are trained to look for patterns.

Comedy, on the other hand, relies on things that stay close to a pattern but do not fit completely within it. Humor must also be unpredictable to be funny, Bishop said. This makes it much harder for a machine to recognize and understand what is funny.

Because robots have such great difficulty understanding humor, Bishop says, she feels her job as a comedy performer is secure. It even made her parents happy when her brother decided to become a full-time comedy writer, she added, because it meant he would not be replaced by a machine.

In this April 26, 2018, photo, a robot entertains visitors at the booth of a Chinese automaker during the China Auto 2018 show in Beijing, China. (AP Photo/Ng Han Guan)

The risks of teaching humor to AI systems

Purdue University computer scientist Julia Rayz has spent 15 years trying to get computers to understand humor. The results, she says, have at times been laughable.

In one experiment, she gave the computer two different groups of sentences. Some were jokes; others were not. The computer kept mistaking sentences for jokes when they were not. When Rayz asked the computer why it thought something was a joke, the answer made complete sense technically. But she said the results coming from the computer were neither funny nor memorable.
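The article does not describe Rayz's actual method or data. As an illustration only, the general kind of experiment described above can be sketched as a tiny word-counting classifier: it is trained on sentences labeled as jokes or non-jokes (the example sentences below are invented) and then guesses a label for a new sentence. It matches surface word patterns, so it can label a sentence a joke without any real-world knowledge of why it would be funny.

```python
from collections import Counter
import math

# Illustrative sketch only: a minimal bag-of-words Naive Bayes classifier.
# This is NOT Rayz's actual experiment; the sentences and labels are made up.

def train(labeled_sentences):
    """Count how often each word appears under each label."""
    counts = {"joke": Counter(), "not_joke": Counter()}
    totals = Counter()
    for sentence, label in labeled_sentences:
        for word in sentence.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(sentence, counts, totals):
    """Pick the label whose words best match the sentence (add-one smoothing)."""
    vocab = set(counts["joke"]) | set(counts["not_joke"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in sentence.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab) + 1)
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("balloons do not like pop music", "joke"),
    ("why did the chicken cross the road", "joke"),
    ("the meeting starts at nine tomorrow", "not_joke"),
    ("the balloon factory opens at nine", "not_joke"),
]
counts, totals = train(training)

# The classifier picks "joke" here only because of word overlap with a
# training joke - it has no idea what makes the pun funny.
print(classify("balloons hate pop music", counts, totals))  # prints "joke"
```

Because the model only sees word patterns, a plain sentence that happens to share words with a joke can be misclassified as one, which mirrors the behavior Rayz describes.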

Despite the difficulties, Darmstadt University’s Miller says there are good reasons to keep trying to teach humor to robots. It could make machines more relatable, especially if they can learn to understand sarcasm, he noted. Humans use sarcasm to say one thing but mean another.

But Texas A&M’s Kiki Hempelmann is not sure such attempts are a good idea. “Teaching AI systems humor is dangerous because they may find it where it isn’t, and they may use it where it’s inappropriate,” he said. “Maybe bad AI will start killing people because it thinks it is funny,” he added.

I’m Bryan Lynn.

The Associated Press reported on this story. Bryan Lynn adapted it for VOA Learning English. Kelly Jean Kelly was the editor.

Do you think machines will ever be developed enough to have interpersonal relationships with humans? Write to us in the Comments section, and visit our Facebook page.

_____________________________________________________________

Words in This Story

artificial intelligence n. the ability of a machine to use and analyze data in an attempt to reproduce human behavior

context n. the words that are used with a certain word or phrase and that help to explain its meaning

linguist n. someone who studies human speech

pun n. a joke that uses a word that has two meanings

rely v. to need for support, to depend on

pattern n. a particular way something is done or repeated

sarcasm n. the use of words that mean the opposite of what you really want to say especially in order to insult someone, to show irritation, or to be funny

inappropriate adj. not suitable