- iLLM-A*: Hybrid AI to speed up path planning by a factor of 1000
- Scaling limits of existing algorithms
- Incremental learning to generate waypoints
- Empirically based selection of waypoints
The paper “A 1000× Faster LLM-enhanced Algorithm for Path Planning in Large-scale Grid Maps,” published on arXiv by researchers from the National University of Defense Technology in China, presents a new algorithm designed to significantly increase the efficiency of path planning on large-scale grid maps. The iLLM-A* approach combines a language model with an optimized A* algorithm and aims to reduce the search time by a factor of more than 1000 compared to existing methods. Potential fields of application such as autonomous robotics, logistics planning, and AI control in complex simulations or video games could benefit from this.
Path planning in large, grid-based environments poses a significant computational challenge for traditional pathfinding algorithms such as A* and Dijkstra's algorithm. As map size grows, time and memory requirements increase disproportionately, making real-time applications in areas such as robotics or the simulation of complex systems difficult. The researchers now present iLLM-A*, a hybrid approach that addresses these limitations.
Scaling limits of existing algorithms
The study first analyzes the weaknesses of the current state-of-the-art approach, LLM-A*. That method uses a Large Language Model (LLM) to generate a sequence of waypoints, between which the A* algorithm then searches for short local paths. Although this reduces the global search space, the researchers identified three major bottlenecks that limit performance on large maps (defined as N ≥ 200, where N is the edge length of the grid):
- The use of linear lists for the OPEN and CLOSED sets (essentially the lists of candidate and already expanded nodes) in the A* algorithm leads to high time complexity for search and insertion operations.
- The globally managed lists grow sharply with the map size, which drives up memory consumption.
- LLMs are prone to “spatial illusion” and generate stochastic waypoints that can be redundant or deviate from the optimal route, making the subsequent A* search inefficient.

The algorithm iLLM-A* (innovative LLM-enhanced A*) presented by the team addresses these three issues with targeted optimizations (a code sketch follows after this list):

- The CLOSED list has been replaced by a hash-based data structure, which reduces the average complexity of checking whether a node has already been expanded from O(N) to O(1).
- A delayed update strategy for the heuristic values in the OPEN list avoids costly recalculations across the entire list.
- The researchers use two-stage collision detection: a computationally cheap test first checks for potential collisions using axis-aligned bounding boxes (AABB); a precise but more expensive collision check runs only if the bounding boxes overlap.
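The first two optimizations correspond to standard data-structure choices. The following is a minimal Python sketch under those assumptions, not the authors' implementation: the CLOSED list becomes a hash set with O(1) average membership tests, stale OPEN entries are skipped lazily on pop (a simple stand-in for the delayed heuristic-update strategy), and `collides()` illustrates the two-stage AABB test, with the caller-supplied `exact_check` standing in for whatever precise geometric test an application needs.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; grid[y][x] == 1 means blocked.

    CLOSED is a hash set, so the "already expanded?" check averages O(1)
    instead of the O(N) scan a linear list needs. Outdated OPEN entries
    are skipped lazily when popped instead of being rewritten in place.
    """
    def h(p):                                # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = count()                            # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), 0, next(tie), start, None)]  # (f, g, tie, node, parent)
    closed, parents = set(), {}
    while open_heap:
        _, g, _, node, parent = heapq.heappop(open_heap)
        if node in closed:                   # stale duplicate left in the heap
            continue
        closed.add(node)
        parents[node] = parent
        if node == goal:                     # walk parents back to rebuild the path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and not grid[ny][nx] and nxt not in closed):
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, next(tie), nxt, node))
    return None                              # no path exists

def collides(box_a, box_b, exact_check):
    """Two-stage collision test with boxes given as (x0, y0, x1, y1):
    a cheap AABB overlap check first; the precise (more expensive)
    exact_check runs only when the boxes actually overlap."""
    overlap = (box_a[0] <= box_b[2] and box_b[0] <= box_a[2]
               and box_a[1] <= box_b[3] and box_b[1] <= box_a[3])
    return overlap and exact_check(box_a, box_b)
```

On an empty 5×5 grid, `astar([[0] * 5 for _ in range(5)], (0, 0), (4, 4))` returns a 9-node Manhattan path.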
Incremental learning to generate waypoints
To improve the quality of the waypoints generated by the LLM, the researchers have implemented a dynamic learning process. The system uses an expandable few-shot sample database for this purpose. Once the LLM has generated waypoints for a training map and the algorithm has planned a path, this path is evaluated using predefined metrics. These criteria include the deviation from the optimum path length as well as the time and memory consumption compared to a pure A* solution. Only if the planned path fulfills the quality thresholds is the pair of map and generated waypoints added to the database as a valid example. This enables the LLM to iteratively adapt its waypoint generation strategy to various environments.
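The gating step can be pictured roughly as follows. This is an illustrative sketch only: the function name, signature, and thresholds are assumptions, since the paper defines its own metrics and cutoffs.

```python
def maybe_add_example(db, grid_map, waypoints, path_len, optimal_len,
                      time_used, mem_used, astar_time, astar_mem,
                      len_tol=1.05):
    """Append (map, waypoints) to the few-shot database only if the
    planned path meets every quality threshold relative to plain A*.

    len_tol and the comparison baselines are illustrative assumptions,
    not the paper's exact criteria.
    """
    good = (path_len <= len_tol * optimal_len   # near-optimal length
            and time_used <= astar_time         # no slower than plain A*
            and mem_used <= astar_mem)          # no more memory than plain A*
    if good:
        db.append((grid_map, waypoints))        # becomes a future few-shot example
    return good
```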
Prompt template for the creation of waypoints using an LLM (Image: Yan Pan et al.)
Empirically based selection of waypoints
An empirical study by the researchers showed that a large number of waypoints increases the computational effort disproportionately without significantly improving path quality. Based on these results, a simple but effective selection rule was implemented: if the LLM generates more than two waypoints, only the two waypoints closest to the starting point are used for the subsequent A* search. This considerably reduces the number of A* searches and thus the overall effort.
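A minimal sketch of this selection rule, reusing the `astar()` function from the earlier sketch; the helper names are assumptions, and the rule is read here as keeping the two waypoints nearest the start:

```python
def select_waypoints(waypoints, start, k=2):
    """Keep the k waypoints closest to the start; the paper's rule uses k = 2."""
    dist = lambda w: abs(w[0] - start[0]) + abs(w[1] - start[1])
    return sorted(waypoints, key=dist)[:k]

def plan_via_waypoints(grid, start, goal, waypoints):
    """Chain local A* searches start -> w1 -> w2 -> goal and stitch the segments."""
    full = [start]
    for target in select_waypoints(waypoints, start) + [goal]:
        segment = astar(grid, full[-1], target)  # astar() from the earlier sketch
        if segment is None:
            return None                          # some segment is unreachable
        full.extend(segment[1:])                 # skip the duplicated joint node
    return full
```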
According to the researchers, the evaluation on different map sizes (N = 50 to 450) showed a clear superiority of iLLM-A* over the comparison methods (A*, Opt-A*, LLM-A*). iLLM-A* found shortest paths on average around 1000 times faster than LLM-A*; in the extreme case, the speedup reached a factor of almost 2350. Memory consumption dropped by up to 58.6 percent compared to LLM-A*. On a map with N = 450, iLLM-A* required 3.62 MByte, while the optimized A* algorithm (Opt-A*) alone needed 28.67 MByte. Furthermore, the paths generated by iLLM-A* showed a lower average deviation from the optimal length (104.39 percent versus 107.94 percent for LLM-A*), and their standard deviation was significantly lower, which the research team says indicates more stable and predictable results.
If this massive reduction in computing time and memory requirements carries over to pathfinding in complex environments, it would open up a wide range of potential applications. Concrete scenarios range from the dynamic navigation of autonomous robots in logistics to the intelligent control of characters in large video game worlds and fast simulations in digital twins.
(vza)
This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.