Grounding LLMs For Robot Task Planning Using Closed-loop State Feedback
We introduce a novel planning algorithm that integrates Large Language Models (LLMs) into robotic task execution, enhancing multi-agent and object-based planning. Inspired by the human brain-body system, our method employs two LLMs in a hierarchical structure, refining plans through closed-loop feedback that adapts to real-time environmental states and error messages. Evaluated in the VirtualHome environment, our approach achieves a 35% improvement in task success rate, with an 85% execution score, approaching the human benchmark of 94%. We further validate its real-world potential using a realistic physics simulator and the Franka Research 3 robot arm.
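To make the hierarchical closed-loop idea concrete, the sketch below shows one possible structure for such a system: a high-level "brain" LLM proposes a plan, a low-level "body" LLM refines individual actions when the environment reports an error, and execution retries until each step succeeds. All function names, the toy environment, and the example actions are hypothetical stand-ins for illustration, not the paper's actual implementation or the VirtualHome API.

```python
# Hypothetical sketch of a two-LLM closed-loop planner (not the paper's code).

def high_level_llm(goal, state):
    """Stand-in for the high-level 'brain' LLM: propose a plan for the goal.

    Here it returns a fixed (deliberately incomplete) action list so that
    the error-feedback path below is exercised.
    """
    return ["walk_to(fridge)", "grab(milk)"]  # note: forgets to open the fridge

def low_level_llm(action, state, error):
    """Stand-in for the low-level 'body' LLM: given the failed action, the
    current environment state, and the error message, propose a corrective
    action. Stubbed with a single hand-written rule for illustration."""
    if "fridge is closed" in error:
        return "open(fridge)"
    return action

def execute(action, state):
    """Toy environment: returns an error message on failure, None on success."""
    if action == "open(fridge)":
        state["fridge_open"] = True
        return None
    if action == "grab(milk)" and not state["fridge_open"]:
        return "error: fridge is closed"
    return None

def closed_loop_plan(goal):
    """Run the plan, feeding state and error messages back into the low-level
    LLM until each high-level step succeeds."""
    state = {"fridge_open": False}
    executed = []
    for action in high_level_llm(goal, state):
        error = execute(action, state)
        while error:
            # Closed-loop feedback: refine using current state + error message.
            fix = low_level_llm(action, state, error)
            if execute(fix, state) is None:
                executed.append(fix)
            error = execute(action, state)  # retry the original step
        executed.append(action)
    return executed

print(closed_loop_plan("fetch the milk"))
```

In this toy run, the corrective step `open(fridge)` is inserted before `grab(milk)` is retried, illustrating how error feedback repairs an initially infeasible plan.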