Published on November 15, 2025 4:03 PM GMT
We’ll first define a complexity class related to generative AI:
Let FO be the class of functions over finite strings of symbols that run in time polynomial in their output length. FO is a class of efficient (polynomial) generators.
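To make this concrete, here is a minimal toy sketch in Python. The function `tile_image` is a made-up illustration, not a real generative model; the point is only that its running time is polynomial (here linear) in the output length, while the prompt stays tiny.

```python
# Toy FO-style generator: runtime is linear in the OUTPUT length,
# even though the input prompt is a couple of bytes.

def tile_image(prompt: str, width: int, height: int) -> str:
    """Deterministically expand a short prompt into a width x height grid."""
    row = (prompt * (width // max(len(prompt), 1) + 1))[:width]
    return "\n".join(row for _ in range(height))

big = tile_image("ab", 1000, 1000)  # ~1 MB of output from a 2-byte prompt
```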
This is intended to capture the fact that a short prompt (hundreds of bytes) can generate a huge file (like a megabyte picture, or a multi-megabyte movie). Generative AI itself is not really a formal concept, so it’s pointless to debate whether FO captures it exactly or not.
The problem of deciding whether some data y has been generated by a function f in FO is in the class NP: it can be verified in polynomial time given a witness x (the prompt), by checking that f(x) = y. The proof is straightforward: since f is computable in time polynomial in the length of its output, formally t = O(|output|^c), it can read at most O(t) symbols of its input. Hence the witness x has length polynomial in the length of y.
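Here is a hedged sketch of that verifier. It assumes f is represented as a generator yielding one output symbol per step, and assumes some fixed exponent c; both are modeling choices of the sketch, not part of the definition above.

```python
# NP-style verification: run f on the witness x under a step budget
# polynomial in |y|, and accept iff it reproduces y within budget.

def verify(f_stepper, x: str, y: str, c: int = 2) -> bool:
    budget = len(y) ** c + 1     # assumed FO time bound O(|y|^c)
    out = []
    for sym in f_stepper(x):     # one yielded symbol per step (assumption)
        out.append(sym)
        budget -= 1
        if budget <= 0 or len(out) > len(y):
            return False         # too slow or too long: reject the witness
    return "".join(out) == y
```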
Let’s observe that FO is closed under composition: if both f and g are in FO, then h = f∘g is in FO too. In practice, this corresponds to asking an LLM to generate a prompt for an image model, then using the image to prompt a 3D mesh generator, for instance.
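In code, composition is a one-liner once generators are ordinary callables; the pipeline names in the comment below are hypothetical stand-ins for the three models just mentioned.

```python
# FO is closed under composition: chaining generators is itself a generator.

def compose(f, g):
    return lambda x: f(g(x))

# Hypothetical pipeline: text -> image prompt -> image -> 3D mesh.
# h = compose(mesh_from_image, compose(image_from_prompt, prompt_from_text))
```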
Old school AI used classifiers. Those already have a related complexity class: a set (of finite strings of symbols) is in P if we can decide in polynomial time whether or not an object is a member of the set. In other words, a set in P has an efficient binary classifier. Sets in P are obviously closed under intersection, since we can run two polynomial-time tests in polynomial time. Last but not least, the problem of deciding whether such a set is empty is in Co-NP.
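A minimal sketch of closure under intersection, with two made-up P-sets:

```python
# Two toy polynomial-time classifiers; their conjunction decides the
# intersection, still in polynomial time.

def in_A(s: str) -> bool:
    return len(s) % 2 == 0       # toy P-set: strings of even length

def in_B(s: str) -> bool:
    return "z" not in s          # toy P-set: strings avoiding 'z'

def in_A_and_B(s: str) -> bool:
    return in_A(s) and in_B(s)   # decides the intersection
```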
Now consider the problem of deciding whether two efficient generators f, g (in FO) always produce distinct outputs. This means the intersection of img(f) and img(g) is empty.
Remark that correctly extending a trace of any computation is in FO, and enumerating the traces of halting computations is also in FO. Proof: let a trace be a finite string over the symbols {C, H, E}: Computing, Halted, and Error_invalid_trace. First, the set of valid terminating Turing machine computation traces is C*H+: any number of Computing steps, followed by at least one Halted state. Second, for every Turing machine and initial configuration, there exists an FO function which extends the trace by a single symbol: C if the machine can compute one more step, H if the next state is halting, and E if the trace is malformed, the machine halts too early, or it computes too many steps without halting. So for any computation, deciding whether it halts in finite time reduces to deciding the non-emptiness of the intersection of the images of two FO functions: one that generates all halting traces, and one that autocompletes the trace of the particular computation of interest.
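Here is a toy sketch of the single-symbol trace extender. It models a computation as a Python generator that yields once per step and returns when it halts; this representation, and the `countdown` example machine, are simplifying assumptions of the sketch, not part of the argument above.

```python
# Trace symbols: 'C' = computing, 'H' = halted, 'E' = invalid trace.
# A computation is modeled as a generator yielding once per step.

def countdown(n=3):
    """Toy example computation: take n steps, then halt."""
    while n > 0:
        n -= 1
        yield  # one computation step

def extend_trace(machine_factory, trace: str) -> str:
    """Append the single correct next symbol to `trace`.

    Replays the machine against the prefix, then emits 'C' if it can take
    one more step, 'H' if it has halted (valid traces match C*H+), and 'E'
    if the prefix is inconsistent with the machine.
    """
    m = machine_factory()
    halted = False
    for sym in trace:
        if sym == 'C' and not halted:
            try:
                next(m)             # trace claims a step: replay it
            except StopIteration:
                return trace + 'E'  # machine halted earlier than claimed
        elif sym == 'H' and not halted:
            try:
                next(m)
                return trace + 'E'  # machine still running: 'H' too early
            except StopIteration:
                halted = True       # halt confirmed; later 'H's are fine
        elif sym == 'H' and halted:
            pass                    # C*H+ allows repeated 'H'
        else:
            return trace + 'E'      # 'C' after halt, 'E', or junk symbol
    if halted:
        return trace + 'H'
    try:
        next(m)
        return trace + 'C'          # machine takes one more step
    except StopIteration:
        return trace + 'H'          # machine halts now

# Demo: repeatedly extending the empty trace reconstructs "CCCHHH".
t = ""
for _ in range(6):
    t = extend_trace(countdown, t)
assert t == "CCCHHH"
```

Note that each call does O(|trace|) machine steps, so the extender's running time is linear in its output length, as FO requires.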
As a consequence, the emptiness of the intersection of sets in Img(FO) is undecidable by the reduction above, hence strictly harder than Co-NP. The classes Img(FO) and P are therefore distinct, and since Img(FO) is contained in NP, P ≠ NP.
Discuss