Researchers have found a way to use software to teach a human-like robotic hand new skills. The discovery could one day make it less costly to train robots to do things that are easy for humans to do.
The researchers are with OpenAI, a nonprofit artificial intelligence research group established in 2015. The group receives support from several well-known investors and inventors, including Elon Musk of Tesla Incorporated.
OpenAI researchers announced last week that they had taught a robotic hand to recognize different qualities of a colorful object. The object was a small, six-sided box with different numbers and colors on each side. The researchers wanted the robotic hand to show them a specific side of the box. The machine was able to turn the box around until the desired side was showing.
The act may seem simple. But it demonstrates a major improvement in how the machine learned to do what the researchers were asking of it.
All the learning happened inside a software-based re-creation of the real world, or a simulation. The machine was then able to bring everything it had learned into the physical world.
That jump helps solve what had been a major problem for robotic hands.
People and companies have been able to buy robotic hands for years. But the hands are difficult for engineers to program. Engineers can write specific computer code for each new action. That, however, requires a costly new program each time.
The engineers also can give the robots software that lets them ‘learn’ through physical training. But physical training can take months or even years. There are other issues with it, too. For example, if a robot hand drops something, a human needs to pick it up and put it back. That adds to costs, as well.
OpenAI researchers have sought to divide those years of physical training into shorter pieces. Then, they spread that work among many computers running a software simulation that can complete the training in hours or days, without human help.
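The idea of spreading simulated practice across many computers can be pictured with a short Python sketch. This is only a rough illustration under simple assumptions; the simulate_episode function is a made-up stand-in, not part of OpenAI's actual software.

```python
# A minimal sketch of splitting simulated practice across worker processes.
# The "simulation" here is a random stand-in, not OpenAI's system.
import random
from multiprocessing import Pool

def simulate_episode(seed):
    """Run one simulated practice attempt and return a score."""
    rng = random.Random(seed)
    return rng.random()  # pretend this measures how well the hand did

if __name__ == "__main__":
    # Many practice episodes run at the same time on different workers,
    # so hours of simulated practice can replace months of physical practice.
    with Pool(processes=8) as pool:
        scores = pool.map(simulate_episode, range(1000))
    print("average simulated score:", sum(scores) / len(scores))
```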
Ken Goldberg is a robotics professor at the University of California, Berkeley. He was not involved in the OpenAI research. But he did review the work released in late July. He called it “an important result” in getting closer to the goal of having self-taught machines.
In the real world, unexpected things happen. In many cases, a robot’s physical training or code might not prepare it for how to react to an unexpected situation. But the OpenAI researchers included “noise” in their software simulation. That way, the robot would have to learn how to deal with conditions that might interfere with its task.
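The “noise” the researchers describe can be pictured as randomly changing the conditions of the simulation before each practice run. The sketch below is a simple illustration of that idea; the setting names and value ranges are invented for the example and are not taken from the OpenAI work.

```python
# A minimal sketch of adding "noise" to a simulation by randomizing
# its conditions each practice run. All values are illustrative assumptions.
import random

def randomized_settings(rng):
    """Pick slightly different physical conditions for each practice run."""
    return {
        "friction": rng.uniform(0.7, 1.3),      # how slippery the box is
        "box_mass": rng.uniform(0.03, 0.09),    # kilograms, varied on purpose
        "camera_delay": rng.uniform(0.0, 0.2),  # seconds of sensing lag
    }

rng = random.Random(0)
for episode in range(3):
    settings = randomized_settings(rng)
    # train_one_episode(settings)  # hypothetical training step
    print(episode, settings)
```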
Lilian Weng is a member of the technical staff at OpenAI who worked on the research. She told the Reuters news agency that the group now aims to teach the robotic hand even more complex tasks.
I’m Pete Musto.
Stephen Nellis reported this story for the Reuters news service. Pete Musto adapted it for VOA Learning English. Ashley Thompson was the editor.
We want to hear from you. What other skills do you think these kinds of software simulations will be able to teach to robots? Write to us in the Comments Section or on our Facebook page.
________________________________________________________________
Words in This Story
artificial intelligence – n. an area of computer science that deals with giving machines the ability to seem like they have human intelligence
specific – adj. special or particular
re-creation – n. something that is made to look, feel, or behave like something else especially so that it can be studied or used to train people
code – n. a set of instructions for a computer
review – v. to look at or examine something carefully especially before making a decision or judgment
task(s) – n. a piece of work that has been given to someone