Neuro-Symbolic Control with Large Language Models for Language-Guided Spatial Tasks

Authors

  • Momina Liaqat Ali Department of Computer Science, Middle Tennessee State University, Murfreesboro, TN, 37130, USA.
  • Muhammad Abid Department of Mechanical and Aerospace Engineering, University of Tennessee, Knoxville, TN, 37996, USA.
  • Muhammad Saqlain School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney, 81 Broadway, Ultimo, NSW, 2007, Australia.
  • Jose M. Merigo School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney, 81 Broadway, Ultimo, NSW, 2007, Australia.

DOI:

https://doi.org/10.54327/set2026/v6.i1.325

Keywords:

Neuro-Symbolic Control, LLMs, Language-Guided Robotics, Closed-Loop Control, Robotics, Deep Learning, Autonomous Systems

Abstract

Although large language models (LLMs) have recently become effective tools for language-conditioned control in embodied systems, instability, slow convergence, and hallucinated actions continue to limit their direct application to continuous control. This work proposes a modular neuro-symbolic control framework that separates high-level semantic reasoning from low-level motion execution: a locally deployed LLM interprets tasks symbolically, while a lightweight neural delta controller performs bounded, incremental actions in continuous space. The proposed method is evaluated in a planar manipulation setting in which spatial relations between objects are specified by language. Extensive experiments across numerous tasks and local language models, including Mistral, Phi, and LLaMA-3.2, compare LLM-only control, neural-only control, and the proposed LLM+Deep Learning (LLM+DL) framework. Compared to LLM-only baselines, the results show that the neuro-symbolic integration consistently improves both success rate and efficiency, achieving average step reductions exceeding 70% and speedups of up to 8.83x while remaining robust to language model quality. By constraining the LLM to symbolic outputs and delegating continuous execution to a neural controller trained on synthetic geometric data, the proposed framework improves interpretability, stability, and generalization without reinforcement learning or costly rollouts. These results demonstrate empirically that neuro-symbolic decomposition offers a scalable and principled way to integrate language understanding with continuous control, supporting the development of dependable and efficient language-guided embodied systems.
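The decomposition described in the abstract can be illustrated with a minimal closed-loop sketch. Everything below is hypothetical: `llm_parse` is a rule-based stand-in for the locally deployed LLM constrained to symbolic output, and `delta_controller` uses a simple clipped geometric rule in place of the paper's trained neural network. Function names, offsets, and tolerances are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def llm_parse(instruction):
    """Hypothetical stand-in for the local LLM (e.g., Mistral, Phi,
    LLaMA-3.2) constrained to emit a symbolic spatial relation."""
    if "left of" in instruction:
        return "left_of"
    if "right of" in instruction:
        return "right_of"
    raise ValueError("unrecognized instruction")

# Symbolic relation -> target offset from the anchor object (assumed values).
OFFSETS = {"left_of": np.array([-0.2, 0.0]),
           "right_of": np.array([0.2, 0.0])}

def delta_controller(pos, target, max_step=0.05):
    """Bounded, incremental action: step toward the target, clipped in
    norm. (Illustrative rule; the paper trains a lightweight neural
    delta controller on synthetic geometric data.)"""
    delta = target - pos
    norm = np.linalg.norm(delta)
    if norm > max_step:
        delta = delta * (max_step / norm)
    return delta

def run_episode(instruction, obj_pos, anchor_pos, tol=1e-3, max_steps=100):
    relation = llm_parse(instruction)          # high-level symbolic reasoning
    target = anchor_pos + OFFSETS[relation]
    for step in range(max_steps):              # low-level closed-loop execution
        if np.linalg.norm(target - obj_pos) < tol:
            return obj_pos, step
        obj_pos = obj_pos + delta_controller(obj_pos, target)
    return obj_pos, max_steps

final, steps = run_episode("place the cube left of the sphere",
                           np.array([0.3, 0.1]), np.array([0.0, 0.0]))
print(final, steps)
```

The key design point the sketch mirrors is that the LLM is invoked once per task to produce a discrete symbolic goal, while the bounded delta controller handles every continuous-space step, so hallucinated or unstable LLM outputs cannot directly produce unbounded motions.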

Published

01.04.2025

Section

Research Article

How to Cite

[1]
M. Liaqat Ali, M. Abid, M. Saqlain, and J. M. Merigo, “Neuro-Symbolic Control with Large Language Models for Language-Guided Spatial Tasks”, Sci. Eng. Technol., vol. 6, no. 1, pp. 52–72, Apr. 2025, doi: 10.54327/set2026/v6.i1.325.
