Synergizing science and the KAN. Credit: Physical Review X (2025). DOI: 10.1103/4t7t-v19l
AI has been applied successfully in many areas of science, advancing technologies like weather prediction and protein folding. Its impact on curiosity-driven scientific discovery, however, has been limited. That may soon change, thanks to Kolmogorov-Arnold networks (KANs).
A recent study, published in the journal Physical Review X, details how this new kind of neural network architecture might help scientists discover and understand the physical world in a way that other AI can’t.
The black box problem
The authors of the new study describe two different kinds of science: curiosity-driven and application-driven. Both are important and have ultimately led to many technologies and a better understanding of how the universe works. While AI has already proved successful in application-driven science, current AI models often lack the interpretability needed for gaining knowledge in curiosity-driven science, making them what the study authors refer to as "black boxes."
The study authors point to AlphaFold as an example of how black box AI has advanced application-driven science.
"Another example is AlphaFold, which, despite its tremendous success in predicting protein structures, remains in the realm of application-driven science because it does not provide new knowledge at a more fundamental level (e.g., atomic forces). Hypothetically, AlphaFold must have uncovered important unknown physics to achieve its highly accurate predictions. However, this information remains hidden from us, leaving AlphaFold largely a black box."
Kolmogorov-Arnold networks and interpretability in scientific discoveries
KANs may offer a way out of the black box. These neural networks can identify important features, reveal modular structures and discover symbolic formulas in scientific data. The team says KANs decompose higher-dimensional functions into one-dimensional functions, and symbolically regressing those 1D functions makes the network's behavior interpretable.
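The decomposition idea can be illustrated with a toy function. In this hypothetical sketch (not taken from the paper), a 2D function happens to split exactly into one-dimensional pieces composed by addition; a KAN would learn spline approximations of each 1D piece, and symbolic regression on those curves could then recover the closed forms:

```python
import numpy as np

# Illustrative example: f(x, y) = exp(sin(pi*x) + y**2) admits an exact
# Kolmogorov-Arnold-style decomposition into 1D functions:
#   f(x, y) = Phi(phi_1(x) + phi_2(y)),
# with phi_1(x) = sin(pi*x), phi_2(y) = y**2, Phi(z) = exp(z).
# A KAN learns 1D curves like phi_1, phi_2 and Phi from data; because each
# curve is one-dimensional, it can be plotted and symbolically regressed.

phi_1 = lambda x: np.sin(np.pi * x)   # inner 1D function on x
phi_2 = lambda y: y ** 2              # inner 1D function on y
Phi = lambda z: np.exp(z)             # outer 1D function on the sum

def f_composed(x, y):
    """Evaluate f through its 1D decomposition."""
    return Phi(phi_1(x) + phi_2(y))

# The composition of 1D functions reproduces the original 2D function.
rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(1000)
assert np.allclose(f_composed(x, y), np.exp(np.sin(np.pi * x) + y ** 2))
```

Real KANs learn such decompositions from data rather than being handed them, but the interpretability benefit is the same: every learned component is a 1D curve a human can inspect.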
To put this to use, the team developed tools for embedding scientific knowledge into KANs and extracting it back out. They enhanced the networks with multiplication nodes, a variant called MultKANs, and built tools like the "kanpiler," which compiles symbolic formulas into KANs, and a tree converter that visualizes network structure as modular trees. This process lets researchers see not only what a KAN learned but how it learned it, making the underlying science interpretable.
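One motivation for multiplication nodes can be shown with a small sketch (an illustration of the idea, not the paper's implementation). A sum-only network must emulate a product of inputs through compositions of 1D functions, while a multiplication node computes it directly, keeping the learned structure smaller and easier to read off symbolically:

```python
import numpy as np

# A network built only from additions and 1D functions can still represent
# a product, e.g. via the algebraic identity
#   x * y = ((x + y)**2 - (x - y)**2) / 4,
# but doing so costs extra nodes and edges. A MultKAN-style multiplication
# node takes the product of its incoming branches in one step.

def product_via_sums(x, y):
    """Emulate x*y using only sums and 1D squaring functions."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

def mult_node(x, y):
    """A multiplication node: combine incoming branches directly."""
    return x * y

# Both routes compute the same function; the direct node is just simpler.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(100), rng.standard_normal(100)
assert np.allclose(product_via_sums(x, y), mult_node(x, y))
```

The design trade-off is between expressiveness per node and network size: explicit multiplication nodes make multiplicative structure in a formula visible at a glance instead of being spread across several learned curves.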
Testing known physics
The group tested the framework on several well-known physics problems: verifying conservation of energy and momentum, finding the Lagrangians of a simple pendulum and of a relativistic mass, revealing hidden symmetry in a nonrotating black hole, and quantifying the stress–strain relationship of a neo-Hookean solid. In each scenario, the KANs took in relevant data, recovered the correct physical laws, and even achieved "extreme precision."
However, the team notes that the model has limits on scalability. They write, "Although the learnable univariate functions in KANs are more interpretable than weight matrices in multilayer perceptrons (MLPs), scalability remains a challenge.
"As KAN models scale up, even if all spline functions are interpretable individually, it becomes increasingly difficult to manage the combined output of these 1D functions. Consequently, a KAN may only remain interpretable when the network scale is relatively small."
The future of scientific discovery
The authors describe KANs as interpolating between two kinds of "software": "software 1.0," meaning traditional hand-written programs, and "software 2.0," meaning neural networks. KANs, they say, can balance the trade-off between these two paradigms: software 1.0 offers the interpretability that lets users inspect and manipulate a program, while software 2.0 offers learnability.
Despite the current limits on scalability as network size grows, the model represents a potential boon for curiosity-driven scientific discovery. The hope is that this blend of learnability and interpretability will eventually accelerate breakthroughs by helping scientists understand AI-generated insights that would otherwise stay locked in a black box.
With more work, the KAN framework may become applicable to larger-scale and more complex scientific problems, and it will likely extend to disciplines beyond physics.
Written by Krystal Kasal, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.
More information: Ziming Liu et al, Kolmogorov-Arnold Networks Meet Science, Physical Review X (2025). DOI: 10.1103/4t7t-v19l
© 2025 Science X Network
Citation: Kolmogorov-Arnold networks bridge AI and scientific discovery by increasing interpretability (2025, December 22) retrieved 22 December 2025 from https://phys.org/news/2025-12-kolmogorov-arnold-networks-bridge-ai.html