Robots Can Learn To Safely Navigate Warehouses

Robots have been working in factories for many years. But given the safety concerns related to the tasks they perform, most operate inside cages or behind safety glass to limit or prevent interaction with humans.

In warehouse operations, where goods are continuously sorted and moved, robots can be neither caged nor stationary. And while large corporations like Amazon have already incorporated robots into their warehouses, they are highly customized and costly systems where robots are designed to work within one facility on predefined grids or well-defined pathways under the guidance of specific, centralized programming that carefully directs their activity.

"For robots to be most useful in a warehouse, they will need to be smart enough to deploy in any facility easily and quickly; able to train themselves to navigate in new dynamic environments; and most importantly, be able to safely work with humans, as well as sizeable fleets of other robots," said Ding Zhao, the principal investigator and assistant professor of mechanical engineering at Carnegie Mellon University.
A team of CMU engineers and computer scientists has applied its expertise in advanced manufacturing, robotics and artificial intelligence to develop the warehouse robots of the future.

The collaboration was formed at the university's Manufacturing Futures Institute (MFI), which funds research with grants from the Richard King Mellon Foundation. The foundation made a lead $20 million grant in 2016 and gave an additional $30 million in 2021 to support advanced manufacturing research and development at MFI.

Zhao and Martial Hebert, the dean of the School of Computer Science and a professor at the Robotics Institute, are leading the warehouse robot project. They have investigated multiple reinforcement learning techniques that have shown measurable improvements over previous methods in simulated motion-planning experiments. The software used in their test robot has also performed well in path-planning experiments at Mill 19, MFI's collaborative workspace.

"Thanks to advances in chips, sensors and AI algorithms, we are at the cusp of revolutionizing manufacturing robots," said Zhao. The team applies its previous work on self-driving cars to the development of warehouse robots that can learn multi-task path planning via safe reinforcement learning, training robots to adapt quickly to new environments and operate safely alongside workers and human-operated vehicles.
The group first developed a method that enables robots to continuously learn to plan routes in large, dynamic environments. The Multi-Agent Path Planning with Evolutionary Reinforcement (MAPPER) learning method allows robots to explore on their own, learning by trial and error much as human babies accumulate experience to handle new situations over time.
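The trial-and-error principle behind such learning can be illustrated with a minimal tabular Q-learning sketch on a toy grid. This is an illustrative stand-in, not the MAPPER algorithm itself; the grid size, rewards and hyperparameters are all invented for the example:

```python
import random

# Toy setup (invented for illustration): a 5x5 grid, start (0, 0), goal (4, 4).
SIZE, START, GOAL = 5, (0, 0), (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
q = {}  # Q-table: (state, action_index) -> estimated value

def step(state, a):
    """Apply an action, clipping at the grid border; -1 per move rewards short paths."""
    nxt = (max(0, min(SIZE - 1, state[0] + ACTIONS[a][0])),
           max(0, min(SIZE - 1, state[1] + ACTIONS[a][1])))
    return nxt, (10.0 if nxt == GOAL else -1.0)

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Learn purely by trial and error: act, observe the outcome, update the estimate."""
    for _ in range(episodes):
        s = START
        while s != GOAL:
            # Mostly exploit the current estimate, occasionally explore at random.
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda x: q.get((s, x), 0.0)))
            nxt, reward = step(s, a)
            best_next = max(q.get((nxt, x), 0.0) for x in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
            s = nxt

def greedy_path():
    """Follow the learned value estimates from start to goal."""
    s, path = START, [START]
    while s != GOAL and len(path) < 50:
        s, _ = step(s, max(range(4), key=lambda x: q.get((s, x), 0.0)))
        path.append(s)
    return path

random.seed(0)
train()
print("moves in learned path:", len(greedy_path()) - 1)
```

No map of the grid is ever given to the agent; the route emerges entirely from repeated attempts, which is the sense in which such robots "train themselves."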

The decentralized method eliminates the need to program the robots from a powerful central command computer. Instead, the robots make independent decisions based on their own local observations. Onboard sensors allow each robot to observe dynamic obstacles within a 10-30-meter range, and with reinforcement learning, robots continually train themselves to handle unknown dynamic obstacles.
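A single decentralized decision of this kind can be sketched as follows. The hand-written greedy rule below is a hypothetical stand-in for a learned policy, and the coordinates, sensor range and clearance values are invented; the point is that the robot acts only on what its own sensors can see:

```python
import math

SENSOR_RANGE = 15.0  # meters; the article cites a 10-30 m observation range

def local_policy(pos, goal, all_obstacles, step=1.0, clearance=1.0):
    """One decentralized decision: greedily step toward the goal using only
    obstacles within the robot's own sensor range."""
    # The robot only 'sees' obstacles inside its sensor range.
    visible = [o for o in all_obstacles if math.dist(pos, o) <= SENSOR_RANGE]
    # Candidate unit moves: 8 compass directions.
    moves = [(pos[0] + step * math.cos(k * math.pi / 4),
              pos[1] + step * math.sin(k * math.pi / 4)) for k in range(8)]
    safe = [m for m in moves
            if all(math.dist(m, o) > clearance for o in visible)]
    if not safe:
        return pos  # boxed in: stay put this tick
    return min(safe, key=lambda m: math.dist(m, goal))

# A robot at the origin heads for (10, 0); a visible obstacle at (2, 0)
# blocks the direct line, while one at (100, 0) is outside sensor range
# and therefore ignored by the decision.
nxt = local_policy((0.0, 0.0), (10.0, 0.0), [(2.0, 0.0), (100.0, 0.0)])
```

Because each such decision needs no global map and no central coordinator, every robot can run the policy on its own onboard computer.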

Such smart robots make it easier and faster for warehouses to deploy large fleets. Because the computation is done with each robot's onboard resources, computational complexity grows only mildly as the number of robots increases, making it easier to add, remove or replace robots.

Energy consumption could also be reduced when robots travel shorter distances, because they independently learn to plan efficient paths. The "decentralized and partially observable" setting also reduces communication and computation energy compared with classical centralized methods.

Another successful study applied constrained model-based reinforcement learning with the Robust Cross-Entropy (RCE) method.
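The core idea of constraint-aware cross-entropy optimization can be sketched as below. This is a simplified illustration, not the actual RCE algorithm (which is model-based and more involved); the ranking rule, toy problem and all parameters are invented for the example:

```python
import random

def cross_entropy_optimize(objective, constraint_cost, mu=0.0, sigma=2.0,
                           n_samples=64, n_elite=8, iters=30):
    """Constraint-aware cross-entropy method (simplified sketch): sample
    candidates, rank feasible ones by objective, refit the sampling
    distribution to the elite samples."""
    for _ in range(iters):
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        # Feasible samples (constraint_cost <= 0) always outrank infeasible
        # ones; among feasible, higher objective wins; among infeasible,
        # smaller violation wins.
        def key(x):
            c = constraint_cost(x)
            return (0, -objective(x)) if c <= 0 else (1, c)
        elites = sorted(samples, key=key)[:n_elite]
        mu = sum(elites) / n_elite
        sigma = max(1e-3, (sum((e - mu) ** 2 for e in elites) / n_elite) ** 0.5)
    return mu

# Toy problem: maximize -(x - 3)^2 subject to x <= 2 (constraint cost x - 2).
# The unconstrained optimum x = 3 violates the constraint, so the method
# should settle near the constrained optimum x = 2.
random.seed(1)
best = cross_entropy_optimize(lambda x: -(x - 3) ** 2, lambda x: x - 2)
```

The key property for safety is that the ranking never trades constraint satisfaction for reward: a candidate that violates the constraint cannot outrank one that respects it.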

Researchers must explicitly consider safety constraints for a learning robot so that it does not sacrifice safety in order to finish tasks. For example, the robot needs to avoid colliding with other robots, damaging goods or interfering with equipment in order to reach its goal.

"Although reinforcement learning methods have achieved great success in virtual applications such as computer games, there are still a number of difficulties in applying them to real-world robotic applications. Among them, safety is paramount," said Zhao.

Creating safety constraints that always account for all conditions goes beyond traditional reinforcement learning into the increasingly important area of safe reinforcement learning, which is essential to deploying such new technologies.
