Researchers at the Massachusetts Institute of Technology (MIT) have combined multiple generative AI models to help robots solve complex object manipulation problems more efficiently.
The research focused on improving how robots plan their movements, especially when dealing with complex tasks like packing objects.
This kind of planning requires weighing several factors at once, such as object shapes, collision avoidance and overall stability.
Traditionally, robots used different methods for each of these factors. For instance, one method might be used to figure out how to pick up an object, while another one was needed to decide where to put it. However, complex tasks need a more versatile approach that considers all these factors at once.
The challenge is that building one single method that can handle everything is tough because there’s not always enough data to teach the robot. The researchers looked at combining different methods, each one solving a specific part of the problem, and making sure they work together smoothly. This way, the robot can plan its actions more efficiently, without getting stuck in complex situations.
The study pointed out that robots face complex continuous constraint satisfaction problems during multistep manipulation tasks, such as packing or table setting.
These problems involve diverse constraints, spanning geometric, physical and qualitative aspects, influenced by object geometry and human-specific requirements.
To address these challenges, MIT researchers introduced Diffusion-CCSP, a machine-learning technique. Diffusion models iteratively refine their output by learning how to generate new data samples resembling those in a training dataset. In practice, these models gradually improve a candidate solution, starting from a random, poor guess and refining it step by step until it satisfies the problem's constraints.
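The idea of refining a random initial guess toward a constraint-satisfying solution can be illustrated with a toy sketch. This is not the authors' Diffusion-CCSP (which learns from training data); it is a minimal, hypothetical stand-in that mimics the refinement loop: squares start at random positions in a container, and annealed noisy gradient steps on a constraint-violation score (overlap plus out-of-bounds) nudge them toward a feasible packing. All function names and parameters here are illustrative assumptions.

```python
import random


def violation(positions, sizes, box):
    """Total constraint violation for square packing:
    pairwise overlap area + amount protruding outside the [0, box]^2 container."""
    v = 0.0
    for i, ((x, y), s) in enumerate(zip(positions, sizes)):
        # containment constraint: penalize any part outside the container
        v += max(0.0, -x) + max(0.0, -y)
        v += max(0.0, x + s - box) + max(0.0, y + s - box)
        for (x2, y2), s2 in zip(positions[i + 1:], sizes[i + 1:]):
            # collision constraint: axis-aligned overlap area between squares
            ox = min(x + s, x2 + s2) - max(x, x2)
            oy = min(y + s, y2 + s2) - max(y, y2)
            if ox > 0 and oy > 0:
                v += ox * oy
    return v


def refine(sizes, box, steps=3000, seed=0):
    """Start from a random (likely infeasible) layout and iteratively
    refine it: noisy gradient steps with a decaying noise schedule,
    loosely analogous to diffusion-style denoising."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, box), rng.uniform(0, box)] for _ in sizes]
    eps, step = 1e-3, 0.05
    for t in range(steps):
        noise = 0.3 * (1 - t / steps)  # large perturbations early, ~0 late
        for i in range(len(pos)):
            for d in (0, 1):
                # finite-difference gradient of the violation score
                pos[i][d] += eps
                up = violation(pos, sizes, box)
                pos[i][d] -= 2 * eps
                down = violation(pos, sizes, box)
                pos[i][d] += eps
                grad = (up - down) / (2 * eps)
                pos[i][d] += -step * grad + rng.gauss(0, noise)
    return pos
```

The decaying noise schedule is the diffusion-flavored element: early perturbations let the layout escape bad configurations, while late steps settle into a feasible arrangement.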
From the perspective of Gartner analyst Pedro Pacheco, the MIT research appears to be less of a breakthrough and more of an evolution in the use of AI to improve robot capabilities, which he says could have an impact across multiple manufacturing sectors.
“It’s an important step forward, but you have to also consider that it goes beyond just teaching a robot to perform these tasks in a research setting,” he says. “Once you go implement this technology into practice for a manufacturer or a supply chain company, it will depend on a number of situations.”
The central factor is whether the objects that need to be packed can be successfully manipulated by the robot.
“It’s not just about the intelligence of the robot, the challenge is if a robot also has the dexterity to be able to do it, which is an area where robots still lag behind humans,” Pacheco says. “It’s still early to tell if this concept will fly or not, because it doesn’t appear that this is something that is being put into production.”
Infosys executive vice president and global head of manufacturing Jasmeet Singh adds that, beyond the deployment of the technology itself, business stakeholders are equally important when it comes to envisaging use cases, defining the business case, developing feasible solutions and implementing AI in robotic use cases, particularly in manufacturing.
“On the talent front, contrary to popular perception, implementing AI isn’t just about deploying AI and machine learning specialists,” he says. “There’s also a need for business process experts and data engineers. As AI becomes ever more powerful, there’s a need for Responsible AI specialists too.”
Finally, he notes that organizational change management experts are needed from an employee engagement and reskilling perspective.
“Given the talent crunch in these areas, manufacturers will need to scale up in-house talent while also engaging with select partners,” he says.
From his perspective, the successful deployment of AI-aided production and manufacturing robots requires a skilled combination of data and people.
“To begin with, the data needs to be captured from the edge – whether it’s a production machine, lab equipment or autonomous mobile robot,” he explains. “It also needs to be captured at multiple levels so that a comprehensive picture can be drawn, and the dots can be connected by AI.”