Does Software that Explains Itself Really Help?


A 3D-printed logo of Microsoft is seen in front of a displayed LinkedIn logo in this illustration taken June 13, 2016. (REUTERS/Dado Ruvic)

Scientists who make artificial intelligence (AI) systems say they have no problem designing ones that make good predictions for business decisions. But they are finding that the AI may need to explain itself through another algorithm to make such tools effective for the people who use them.

AI is an area of computer science which aims to give machines abilities that seem like human intelligence.

“Explainable AI,” or XAI, is a new field that has received a lot of investment. Small, new companies and large technology companies are competing to make complex software more understandable. Government officials in the United States and European Union also want to make sure machines’ decision-making is fair and understandable.

Experts say that AI technology can sometimes increase unfair opinions about race, gender and culture in society. Some AI scientists think explanations are an important way to deal with that.

Over the last two years, U.S. government agencies, including the Federal Trade Commission, have warned that AI that is not explainable could be investigated. The European Union could also pass the Artificial Intelligence Act next year. That law would require explanations of AI results.

Supporters of explainable AI say it has helped increase the effectiveness of AI’s use in fields like healthcare and sales.

For example, Microsoft’s LinkedIn professional networking service earned 8 percent more money after giving its sales team AI software. The software aims to predict the risk of a person canceling a subscription. But the software also provides an explanation of why it makes a prediction.
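
The details of such systems are not public, but the basic idea can be shown in a few lines of code. The hypothetical Python sketch below scores one customer's risk of canceling and lists how much each feature pushed the score up or down. The feature names, weights and numbers are invented for illustration; they are not LinkedIn's model.

```python
import numpy as np

# Invented features and weights, for illustration only.
FEATURES = ["logins_last_30_days", "support_tickets", "seats_unused"]
WEIGHTS = np.array([-0.08, 0.45, 0.30])
BIAS = -1.2

def predict_with_explanation(x):
    """Return a cancellation-risk score plus the per-feature reasons behind it."""
    contributions = WEIGHTS * x                  # signed effect of each feature
    score = contributions.sum() + BIAS
    risk = 1.0 / (1.0 + np.exp(-score))          # logistic score between 0 and 1
    # Rank features by how strongly they pushed the risk up.
    reasons = sorted(zip(FEATURES, contributions), key=lambda p: -p[1])
    return risk, reasons

risk, reasons = predict_with_explanation(np.array([2.0, 5.0, 4.0]))
print(f"Cancellation risk: {risk:.0%}")          # about 89% for this customer
for name, value in reasons:
    print(f"  {name}: {value:+.2f}")
```

A salesperson reading this output sees not just a high risk score, but that unanswered support tickets and unused seats are the main reasons behind it.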

The system was launched last July, and LinkedIn is expected to describe it on its website.

But critics say explanations of AI predictions are not trustworthy. They say the AI technology to explain the machines’ results is not good enough.

Developers of explainable AI say that each step in the process should be improved. These steps include analyzing predictions, creating explanations, confirming them and making them helpful for users.
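
Those four steps can be sketched for the simplest possible model. In the hypothetical Python example below, the explanation is confirmed by checking that its per-feature contributions add back up to the model's raw score before it is turned into plain language. All names and numbers are invented for illustration.

```python
def analyze(weights, x, bias):
    # Step 1: compute the model's raw prediction score.
    return sum(w * v for w, v in zip(weights, x)) + bias

def explain(weights, x, names):
    # Step 2: attribute the score to individual features.
    return {n: w * v for n, w, v in zip(names, weights, x)}

def confirm(score, explanation, bias, tol=1e-9):
    # Step 3: the contributions should add back up to the raw score.
    return abs(sum(explanation.values()) + bias - score) < tol

def present(explanation):
    # Step 4: turn the attribution into something a user can act on.
    top = max(explanation, key=explanation.get)
    return f"Biggest factor pushing the prediction up: {top}"

names = ["tickets", "unused_seats"]
weights, x, bias = [0.5, 0.3], [4.0, 2.0], -1.0
score = analyze(weights, x, bias)
explanation = explain(weights, x, names)
assert confirm(score, explanation, bias)
print(present(explanation))   # -> Biggest factor pushing the prediction up: tickets
```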

But after two years, LinkedIn said its technology has already created value. It said the proof is the 8 percent increase in money from subscription sales during the current financial year.

Before the AI software, LinkedIn salespeople relied on their own skills and judgment. Now, the AI quickly does the research and analysis for them. The tool, which LinkedIn calls CrystalCandle, identifies actions to take and helps salespeople sell subscriptions and other services.

LinkedIn said the explanation-based service has been extended to more than 5,000 sales employees. These employees work in areas that include finding new workers, advertising, marketing and educational offerings.

"It has helped experienced salespeople by arming them with specific insights,” said Parvez Ahammad. He is LinkedIn's director of machine learning and head of data science applied research.

But some AI experts question whether explanations are needed. They say explanations could even do harm, creating a false sense of security in AI. Researchers say they could also lead to design changes that make the software less useful.

But LinkedIn said an algorithm's strength cannot be understood without understanding its “thinking.”

LinkedIn also said that tools like its CrystalCandle could help AI users in other fields. Doctors could learn why AI predicts that someone is more at risk of a disease. People could be told why AI recommended that they be denied a credit card.

Been Kim is an AI researcher at Google. She hopes that explanations show whether a system presents ideas and values people want to support. She said explanations can create a kind of discussion between machines and humans.

"If we truly want to enable human-machine collaboration, we need that,” Kim said.

I’m Dan Novak.

Dan Novak adapted this story for VOA Learning English based on reporting from Reuters.

_______________________________________________________________

Words in This Story

algorithm – n. a set of steps that are followed in order to solve a mathematical problem or to complete a computer process

subscription – n. an agreement that you make with a company to get a publication or service regularly and that you pay for regularly

gender – n. the state of being male or female

analyze – v. to study something closely and carefully; to learn the nature and relationship of the parts of something by a close and careful examination

insight – n. an understanding of the true nature of something

collaboration – n. the act of working with another person or group in order to gain or do something
