This robot taught itself how to walk using artificial intelligence
A walking robot is a spectacle millions have witnessed, but a robot that learns to walk from scratch is something far fewer people have seen… if any at all. A group of Google researchers has developed a robot that analyzes its surroundings and fully learns how to walk within just a few hours. By that measure, robots learn to walk far faster than humans do.
Writing a program that teaches a robot how to walk is a strenuous process, and it is usually done in one of two ways: the programmer can either hand-code every single baby step, or train the robot in a simulated environment where it learns through trial and error. Both methods are extremely time consuming, so the Google researchers turned to reinforcement learning, which lets the robot examine its surroundings and gather information through repeated trials, with successful attempts rewarded. The researchers set up the trials so that their Minitaur robot could wander around a physical environment and encounter the varying terrains of the test, such as a soft mattress, a doormat and flat ground. In effect, the real world took the place of the usual simulation stage of reinforcement learning.
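To make the idea concrete, here is a minimal sketch of the trial-and-reward loop that reinforcement learning relies on. It is a toy illustration in Python, not the system the Google team built: the one-dimensional environment, the reward function and the value-update rule are all invented for the example, and the real robot's controller is far more complex.

```python
# A minimal sketch of the trial-and-reward loop behind reinforcement learning.
# Everything here (environment, reward, update rule) is illustrative only.
import random

ACTIONS = [-1, 0, 1]   # toy action set: step backward, stay put, step forward
EPSILON = 0.1          # how often the agent explores a random action
ALPHA = 0.5            # learning rate for the value update

def reward(old_pos, new_pos):
    """Reward successful attempts: positive whenever the agent moves forward."""
    return new_pos - old_pos

def run_trial(q_values, steps=20):
    """One trial: act, observe the reward, and update the action values."""
    pos = 0
    for _ in range(steps):
        # Explore occasionally, otherwise pick the action currently believed best.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_values[a])
        new_pos = pos + action
        r = reward(pos, new_pos)
        # Nudge the value of this action toward the reward it just produced.
        q_values[action] += ALPHA * (r - q_values[action])
        pos = new_pos
    return pos

if __name__ == "__main__":
    q = {a: 0.0 for a in ACTIONS}
    for trial in range(50):          # repeated trials gradually refine the behaviour
        run_trial(q)
    print("learned action values:", q)   # 'step forward' ends up valued highest
```

Over repeated trials, the action that earns the most reward (stepping forward) ends up with the highest learned value; the real robot relies on the same feedback loop, only with vastly more complex observations, actions and rewards.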
Deep Reinforcement Learning allowed the robot to teach itself how to walk
Sehoon Ha, the lead author of the research and an assistant professor at the Georgia Institute of Technology, says that it is not easy to build an efficient and accurate simulation for a robot to explore: even if every crack in the asphalt could be modelled, it would be of little help once the robot is set down on an unfamiliar path in the real world. As the paper puts it, "For this reason, we aim to develop a deep [reinforcement learning] system that can learn to walk autonomously in the real world."
One of the major engineering problems encountered during training was the robot's tendency to wander out of the training area and fall over. The researchers addressed it by having the robot learn several maneuvers at once: if it only learned to move forward, it would soon reach the edge of the training area and trip, so it was instead trained to walk forward and backward in turn, giving it room to reset on its own. This modification was a complete success and eliminated the need for manual resetting, as sketched below. Another challenge was to make the robot fully autonomous, capable of learning the complete walking process with zero human interference, but that hasn't been possible yet: a hard-coded command that makes the robot stand up after a fall had to be built in. The team hopes to eventually automate that part as well.
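The direction-flipping trick can be sketched in a few lines. This is a hypothetical illustration in Python: the ToyRobot class and its methods are invented stand-ins for the real hardware and controller, meant only to show how reversing the walking direction at the edge of the training area, together with one scripted stand-up routine, removes the need for a human to reset the robot.

```python
import random

class ToyRobot:
    """A stand-in for the real hardware; all behaviour here is illustrative."""
    def __init__(self):
        self.pos = 0.0
        self.fallen = False

    def walk_step(self, direction):
        self.pos += 0.2 * direction
        self.fallen = random.random() < 0.05   # occasional fall, for illustration

    def scripted_stand_up(self):
        self.fallen = False                    # the hard-coded, non-learned recovery

def train_without_manual_resets(robot, half_length=2.0, episodes=20):
    direction = +1                             # +1: walk forward, -1: walk backward
    for _ in range(episodes):
        if robot.fallen:
            robot.scripted_stand_up()
        # Practise the current direction until the edge of the training area.
        while abs(robot.pos) < half_length and not robot.fallen:
            robot.walk_step(direction)         # a learned control step would go here
        # Flip the task so the next episode walks back toward the centre,
        # keeping the robot inside the area with no human intervention.
        direction = -direction

if __name__ == "__main__":
    train_without_manual_resets(ToyRobot())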
With this new approach to machine learning, the long stretches of time normally spent on coding and simulation can instead go toward experimenting with how the machine interacts with its surroundings, and the work could open the door to more demanding uses of robots in unfamiliar and hostile environments, such as search-and-rescue missions and military deployments.
h/t: Technology Review