- Taxonomy:Cybernetics
Tangled Hierarchies and Resonant Phase Spaces for Intelligent Systems
Research
Authors
- Josephine Kaleida
Abstract
This paper takes Denizhan’s cybernetic definition of intelligence as a “border activity between the modelled and unmodelled” as its basis to present the case that a system’s relation to a particular kind of phase space, called a resonant phase space, is fundamentally crucial for an intelligent border activity to manifest as a phenomenon. The thrust of the argument comes from Hofstadter’s concepts of “tangled hierarchies” and “strange loops.” The argument follows the theoretical lineages of second-order cybernetics, dynamic systems theory, and Rosennean complexity theory. The corollary of the statement is that intelligence, as Denizhan understands it, is computationally intractable in an ontology of algorithmic systems because of their static phase space.
Keywords:
- intelligent systems
- resonant phase space
- ontological expansion
- second order cybernetics
- tangled hierarchies
Submitted on Sep 11, 2024
Accepted on Jan 24, 2025
Published on Nov 13, 2025
Peer Reviewed
Questions about the conceptual status of intelligence and a general theory of intelligence are pertinent and unavoidable today, especially with the rise of so-called artificial intelligence, or, more accurately, automated optimisation systems, as Yagmur Denizhan (2023) has referred to them. Rodney Brooks (2002), former director of MIT’s Computer Science and Artificial Intelligence Laboratory, commented in an interview that “maybe there’s more to us than computation. Maybe there’s something beyond computation in the sense that we don’t understand and we can’t describe what’s going on inside living systems using computation only” (Beyond Computation section, para. 3). There is merit in such a speculation considering that research groups at MIT have failed to figure out the behaviour of a nematode despite knowing all of its neurology, and that AI approaches such as connectionist models “just abstract too far away from the physical properties of the nervous system” (Chomsky, 1993, p. 86). What kind of information could it be that is missing from the picture? In this article, I present my development of a categorical distinction between intelligent and non-intelligent systems based on the work of Denizhan.
It is evident that there is no generally agreed-upon definition of intelligence which can serve as a conceptual “razor,” and therefore the arguments often speak past one another. In this article, I present a definition of intelligence based on the concept of the ontological expansion of a system. This definition can fulfil the function of a razor, serving to clarify the ambiguities created in its absence. The distinction will be founded on the different kinds of phase spaces an intelligent system operates in, as opposed to a non-intelligent one.
Epistemic Omniscience and Deductive Closure
In the classical notion of Church-Turing computation, which encloses a phase space T, the machine receives an input, undergoes a sequence of state transitions, and yields an output within the same phase space. In this form, the operations in T are feedforward procedures that follow predefined formalizations of given algorithms, called programs. During the computation process, an operation in such a machine disregards everything besides the specified input and the corresponding piece of the program until the initial input is processed (Negarestani, 2018, p. 344). A given operation in T can have many substrate mechanisms that fulfil it, and a mechanistic system which can exhaust all operations in T is often called Turing-complete.
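To make the fixedness of T concrete, the following is a minimal sketch in Python, with an entirely hypothetical transition table, of a Turing-style machine: its states, alphabet, and transition function are fully enumerated in advance, so every run, whatever the input, stays inside this predefined phase space.

```python
# Minimal sketch of a Turing-style machine with a fixed, predefined phase space.
# The states, alphabet, and transition table below are hypothetical illustrations.

# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
DELTA = {
    ("q0", "0"): ("q0", "1", +1),   # flip 0 to 1, move right
    ("q0", "1"): ("q0", "0", +1),   # flip 1 to 0, move right
    ("q0", "_"): ("halt", "_", 0),  # a blank ends the run
}

def run(tape: list[str], state: str = "q0") -> list[str]:
    """Run the machine; every reachable configuration lies inside the
    finite product (states x alphabet x head positions) fixed above."""
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        state, write, move = DELTA[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print(run(list("0110")))  # -> ['1', '0', '0', '1']
```

Whatever program is encoded in the table, the space of configurations the machine can visit is given once and for all by that table; nothing in the run can enlarge it.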
Hence, Negarestani (2018) asks “if this is all that computation is—algorithmic deduction—then what exactly is gained by it?” (p. 344). He dubs this the riddle of epistemic omniscience, according to which the total knowledge of such an agent can be said to be deductively closed. Clearly, the claim that any operation in any system is necessarily equivalent to algorithmic deduction in T, i.e. that T = N, where N is the space of general operations in natural systems, implies that there is no new information at any point in the future which is not deducible at every point in the past, and “while such a verdict is absolutely sound and valid in some classical logic heaven, it has no ground in reality” (Negarestani, 2018, p. 344).
Therefore, I claim that T is not exhaustive in N, i.e. the space of Turing computation is not exhaustive in the space of general computations in natural systems. The irreducibility of the computational complexity of N to T is similar in nature to the fact that there are many more differential equations than functions expressible with elementary expressions, and therefore most differential equations cannot be reduced to an elementary form. As entailments can be drawn about anything one can ask a coherent question about, we have a much larger class N of computations than T, in which the space of questions is limited to those which can be asked in the formal language of a Turing machine, excluding, for instance, questions of meta-formality that necessitate a meta-formalism. Therefore, in this paper, I argue that T ⊂ N.
Defining Intelligence via Ontological Expansion
Denizhan (2023) has presented a cybernetic definition of intelligence as a border activity between the regimes of the modelled and the unmodelled. She defines intelligence as characteristic of an agent that can traverse between a passive or conservative mode “A” of the modelled regime, where all problems are in some sense trivial or reducible to an algorithm (and which, therefore, can also be called an intensional regime), and an active or creative mode “B” of the unmodelled regime, where problems are novel and it is impossible to tell, by pre-defined criteria, what is relevant for the new required model and what is irrelevant noise (which, therefore, can also be called an extensional regime).
The latter confronts the fundamental framing problem: both heuristic and associationist algorithms fail to reliably select the correct frame for a contextual problem (Fodor, 2001) without a predetermined and preprogrammed ontology of the problem. Such problems require context-specific models “founded on the ground of reality which has risen in the course of biological evolution through embodied structures and processes, and keeps rising during the lifetime of the Agent towards increasingly abstract mental models” (Denizhan, 2023, p. 31). Cognitive systems enter this mode when some part of their internal meta-model is destabilized (when the involved models fail to serve their purpose due to accumulating change in the unmodelled parameters or the emergence of an entirely novel situation), creating the need and opportunity for a restructuring of the meta-model.
The restructuring of the model can only happen when information is received in an extensional regime, which means the information is not yet loaded with relevance values. This mode terminates with the discovery of a new resolution and order, leading to “an expansion of the Modelled into previously unmodelled realms of reality” (Denizhan, 2023, p. 33). It should be noted here that it is impossible to completely model all possible parameters involved in environmental processes, which would otherwise trivialize the distinction between the two modes discussed, as has been argued several times (Wolpert, 2007; Rosen, 1985; Longo et al., 2012).
Mode A refers to routine and low-level cognitive processes which follow rather invariant procedures, are driven by sensory inputs, and operate within a predefined ontology that can be described with reasonable accuracy in terms of the information-processing paradigm. However, mode A confines the agent’s (internal and external) activities to a predefined ontology that cannot account for creativity, nor can it explain how a model has emerged in the first place. Mode B, by contrast, is entered precisely when a context-specific model is destabilized, which, if not addressed, can leave the agent debilitated in her actions in the respective context. The distinction between the modes is represented by Denizhan in Figure 1.
Figure 1
“Two modes of Cognitive processes and Transitions between them” (Denizhan, 2023).
Operational mode B therefore provides the opportunity for restructuring the assortment of context-specific models, which Denizhan refers to as the edifice of knowing. This restructuring demands ontological expansion, i.e. “an expansion of the Modelled into previously unmodelled realms of reality” (Denizhan, 2023, p. 33).
Denizhan (2023) takes a non-reductionist approach to distinguishing intelligence on the basis of “the capacity of achieving ontological expansion via internal restructuring of the Edifice of Knowing” (p. 35). This implies that the creative mode B necessitates an active dynamic conception of the internal metamodel called the edifice of knowing “that evolves under the pressure of conflict resolution” (p. 33) and that an authentic simulation of such a creative mode would require some form of partial or complete representation of the dynamics of a destabilised edifice of knowing.
Denizhan (2023) notes that the “typical attempts of simulating ontological openness include random search in a huge (but still predefined) space, or random recombination of predefined building blocks into arbitrarily complex hierarchical organisations” (p. 34). But since the phase space of such attempts is predefined, the models cannot be said to have “expanded” their ontology in a responsive and dynamic sense; their ontology is, therefore, still restricted to the conservative mode A.
In light of these considerations, one can see how the aforementioned definition of intelligence has several advantages, one of them being that it excludes routine-based low-level processes that follow predefined stepwise procedures or specifically, algorithms. An algorithmic procedure, no matter how sophisticated, elaborate or complicated in its operation, is not qualified for Denizhan’s definition of intelligence as it fundamentally operates within the confines of a given ontology or phase space predefined by all the possible permutations of the allowed operations on all the possible inputs in that domain.
The intelligence of animals, for instance, transcends this limitation: they can re-structure and re-organize their model for interacting with a system of any given kind based on previous interactions which may violate the initial model’s expectations. Such interactive creativity is carried out routinely by animals in a given situation. Regardless of what the animal’s original comprehension of the situation (ontology) is, it can give rise to a new ontology based on its experience. Because the definition in terms of ontological expansion includes the intelligence of humans and non-human animals as well as subcategories of intelligence, such as analytical or emotional intelligence, and even the ability to invent procedural algorithms, while excluding technological systems that passively rely on algorithms regardless of their performance, I wager that it deserves to be called a conceptual razor.
It can also be said here that the unmodelled regime is the space of extensional sets, where all information is in its least salient form, as no higher-level patterns have been recognized that could be used to narrow the relevance frame for the problem or goal at hand in a given system. The modelled regime, by contrast, is the space of intensional sets, where the extensional set no longer needs to be specified but only described or defined in terms of its salient features or patterns recognized by an agent, which may be relevant to the problem or goal at hand. The same extensional set can yield different intensional sets in different problem spaces, where a problem space is defined by the teleology of the agent traversing it.
A set such as {(1, 1), (2, 4), (3, 9), …} is an extensional set, whereas {(x, y) | x, y ∈ ℕ & y = x²} is an intensional set, as the latter allows recognition of how all pairs share a common property which can be used to extend the domain and graph the function as required. It is such a correlation between extensional and intensional sets in relation to a system that is here understood as an ontology, which remains fixed in a simple mechanical system such as a coin flip or a transistor, where the mechanical structure corresponds to a static phase space of heads and tails, or one and zero, respectively.
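A minimal Python illustration of the same distinction (the names are only illustrative): the extensional form simply lists the pairs, while the intensional form carries the recognized property and can therefore be extended to new inputs on demand.

```python
# Extensional: the pairs are simply enumerated; nothing beyond the listed
# elements is available to the system.
extensional = [(1, 1), (2, 4), (3, 9)]

# Intensional: the shared property y = x**2 has been recognized, so the
# set can be extended to any domain the agent finds relevant.
def intensional(x: int) -> int:
    return x ** 2

# The intensional description regenerates the extensional listing and more.
print([(x, intensional(x)) for x in range(1, 6)])
# -> [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```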
Denizhan’s notion of intelligence allows intelligence to be understood as a property of a system that can convert an extensional set into an intensional set, or, in other words, produce a teleological relevance frame for the problem. This process of producing a relevance frame is what is usually termed “insight” into a problem, which is not expected from static pre-programmed mechanisms. Here, one can see the necessity of a functionalist approach. Functionalism considers the function of a differential entity (an entity whose presence makes a difference) in the system. It acknowledges that certain outcomes are desirable with respect to their function while others are side-effects and may be unwarranted, undesirable, or even dangerous to the system.
Constraints and Instructional Information
The purpose of a formalization is to reduce an epistemic datum to a proposition which can be derived from a set of fixed axioms by a stepwise algorithmic procedure (Rosen, 1991). However, this requires specification of the phase space and boundary conditions, which is not possible for general systems, including biological systems (Longo et al., 2012). Given that computation is typically identified with the concept of Turing computation, this seems to preclude the possibility of discussing life in computational terms. While it is true that Turing computation is not an adequate formalization to enclose the space of living systems within its domain, this does not preclude a larger class of computation which does so.
Kauffman et al. (2008) introduce the notion of instructional information which they identify with constraint or boundary conditions in a system. This notion of information is fundamentally functional rather than statistical:
The working of a cell is, in part, a complex web of constraints, or boundary conditions, which partially direct or cause the events which happen. Importantly, the propagating organization in the cell is the structural union of constraints as instructional information, the constrained release of energy as work, the use of work in the construction of copies of information, the use of work in the construction of other structures, and the construction of further constraints as instructional information. This instructional information further constrains the further release of energy in diverse specific ways, all of which propagates organization of process that completes a closure of tasks whereby the cell reproduces. (p. 37)
This gives a strong motivation for conceptualizing a notion of computation that is constructed not out of abstract and discrete procedural data manipulation on some material substrate but rather out of the processing of instructional information, identified with the functional constraints of the substrate itself. As Deutsch (2012) explains, under the prevalent essentialist conception of computational proof, “if the laws of physics were such that a physical system existed whose motion could be exploited to establish the truth of a Turing-unprovable proposition, the proposition would still not count as having been genuinely proved” (p. 15). Under this classical framing, proofs are only considered legitimate when they are carried out in an abstract space free of contingencies rather than in an embedded space where the process is actually enacted.
However, the notion of instructional information provides a theoretical foundation for a new conception of computational proof which involves not symbolic manipulation but implications drawn from a physical process, i.e. material implications (Kampis, 1991). Robert Rosen (1991, pp. 270–275) presents the example of the protein folding problem, which, regardless of its intractability in T, is known to be carried out effectively via material implication, a far more efficient computational process.
Resonance and Dynamism in Phase Spaces
The phase space of a system depends on its substrate properties (such as the chemical properties of neurotransmitters or the physical properties of transistors) and its structural configuration (such as the hierarchization and specialization of the nervous system, or the organization of the control unit, the arithmetic logic unit, the central processing unit, etc.), which together enclose a space of possibilities for how the system can behave or what outputs it can produce. As long as the substrate maintains its properties and structure, all the operations in the system have a fixed phase space.
But this is precisely what produces the problem of epistemic omniscience and deductive closure discussed above. As one may notice from the two sets of examples just given, the traditional computer system and the nervous system, only the former has fixed substrate properties and a fixed structural configuration throughout. A nervous system responds to its environment actively and processes it via persistent self-reconfiguration (Kercel, 2004).
This implies that the phase space of a nervous system is open-ended and dynamic with respect to the environmental interactions in which it is embodied and embedded. It is not possible to find a closed system that includes the nervous system as a static phase space system, because the environment (itself a dynamic system) of the nervous system has its own dynamic environs, and so on, indefinitely. This poses a problem for any attempt at mechanization of such a system, namely the “unprestatability” (Longo et al., 2012) of the system. A system with no closed bounds and a dynamic phase space cannot have an explicit representation of itself as a mechanism, as it is perpetually undergoing co-evolution.
One way to render this idea formally transparent is to note that whatever differential equations, with their set of determining parameters, are derived to describe the evolving system, they will require their own meta-differential equations to determine the evolving parameters, meta-meta-differential equations to determine the evolving meta-parameters, and so on. Therefore, one can say that neither the environment nor the system is explicable as a model with a blueprint of its possible operations.
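As a schematic illustration (the function names f, g, h are placeholders, not drawn from the cited sources), the regress can be written as a tower of equations in which the parameters of each level become the state variables of the next:

```latex
\begin{aligned}
\dot{x} &= f(x;\, p) && \text{(the system, with parameters } p\text{)}\\
\dot{p} &= g(p;\, q) && \text{(meta-level: the parameters themselves evolve)}\\
\dot{q} &= h(q;\, r) && \text{(meta-meta-level, and so on without closure)}
\end{aligned}
```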
A living system can be said to have a resonant phase space, as it is in resonance with its environment. Any influence from the environment, provided it is significant enough to set the system off course from its homeostasis or autopoiesis but not so dramatic as to destroy the system’s consistency, will be adjusted for according to higher-order cybernetic principles of control (El-Samad, 2021), anticipation (Rosen, 1985), and ontological expansion (Denizhan, 2023). But how is this resonance achieved?
Tangled Hierarchies and Strange Loops
How is it that a system can update its own ontology or phase space? How can a system have such read-and-write access over its own code? Hofstadter (1999) addressed precisely this problem in his discussion of the notion of feedback loops. The feedback loops that Hofstadter discusses are not the ordinary kind familiar from thermostats or neurotransmitter inhibition, the subject of study of cybernetic control theory, but something he referred to as strange loops, characterized by a tangled hierarchy of semiosis. Heinz von Foerster (2003), independently of Hofstadter’s work during the 1970s, developed the theory of second-order cybernetics to address the same problem in terms of the double closure of information flow. Rosen (1991, 1999) later also developed a version of complexity theory to address this problem in terms of the self-referentiality or impredicativity of the system, where closure of efficient causation is invoked in place of tangled hierarchy, but arrives at the same fundamental insight.
Ordinary feedback loops work on the same semiotic level: in the case of a thermostat, for instance, a deviating decrease in temperature triggers a corresponding increase in heat supply, and an excessive increase in a neurotransmitter’s transmission triggers a corresponding inhibition process. This rectification of error, which is inherent to negative feedback systems, is the principle of error: as error increases, the response to rectify the error also increases.
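A minimal sketch of such a same-level loop, assuming a simple proportional thermostat with illustrative numbers: the error is measured and counteracted each cycle, but the rule mapping error to response is itself never modified by the loop.

```python
# Minimal sketch of an ordinary, same-level negative feedback loop
# (a proportional thermostat). The gain, setpoint, and drift are illustrative.

SETPOINT = 21.0   # desired temperature (degrees C)
GAIN = 0.5        # heat supplied per degree of error

def step(temperature: float, ambient_drift: float = -0.3) -> float:
    """One control cycle: the larger the error, the larger the correction,
    but the correction rule itself is never modified by the loop."""
    error = SETPOINT - temperature
    heat = GAIN * max(error, 0.0)   # respond only to undershoot
    return temperature + heat + ambient_drift

t = 18.0
for _ in range(10):
    t = step(t)
print(round(t, 2))  # settles near the point where heat balances the drift
```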
Strange loops, however, work across semiotic levels, in what Hofstadter refers to as level crossing (Hofstadter, 2007, p. 102): a process affects its own medium or substrate in such a way that the phase space for the process is no longer the same. Consider von Foerster’s (2003) second-order cybernetics, which uses feedback over the control loop itself, such as when an engineer makes sure the thermostat is working properly by testing whether it behaves as intended. Here, the principle of error is applied to itself. Is the principle of error producing error? Or is the system deviating from itself (its intended design)?
In a more bizarre form, one may consider a character in a story by an author being the author of another story in which the author of the first story himself exists, such that both authors are only each other’s creations. Such a scenario is only possible when both of the authors are characters in a meta-story by a third author who has imagined this scenario in the first place. But a very similar situation is found in nature more often than one may expect. The chicken-or-egg problem is not very different in its premise. Hofstadter explains how DNA transcription for protein production works to demonstrate a striking instance of a strange loop.
Since DNA contains all the information for construction of proteins, which are the active agents of the cell, DNA can be viewed as a program written in a higher-level language, which is subsequently translated (or interpreted) into the “machine language” of the cell (proteins). On the other hand, DNA is itself a passive molecule which undergoes manipulation at the hands of various kinds of enzymes; in this sense, a DNA molecule is exactly like a long piece of data, as well. Thirdly, DNA contains the templates off of which the tRNA “flashcards” are rubbed, which means that DNA also contains the definition of its own higher-level language. (Hofstadter, 1999, p. 547)
Hofstadter makes very similar arguments from the standpoint of proteins, ribosomes, and tRNA as well. My argument here is that these strange loops are the foundation of the resonant phase spaces of systems that can dynamically co-evolve alongside their environment, seemingly in response to it, as if they were in a dialogue with one another. Another way to imagine the scenario is to think of a game where the players’ moves dictate not only the outcome of the game (which of them will survive) but also the rules of the game. So, with each step, not only does the likelihood of each player’s survival evolve, but also the phase space of their moves.
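A toy sketch of that scenario (the rule representation and move names are entirely hypothetical): each move is interpreted under the current rules, but a move can also rewrite the rule table, so the space of legal moves in the next round is no longer the one the game began with.

```python
# Toy sketch of a "game" whose moves also rewrite its own rules.
# Rules map each move to the moves it makes legal next; a move ending
# in "+" additionally invents a brand-new move and adds it to the table.

rules = {"a": ["b"], "b": ["a"]}   # initial, fully pre-stated rule set

def play(move: str, history: list[str]) -> list[str]:
    """Interpret a move under the current rules; a '+' move extends the
    rule table, altering the phase space of all future moves."""
    legal = rules[history[-1]] if history else list(rules)
    if move.rstrip("+") not in legal:
        raise ValueError(f"{move} is not legal here")
    if move.endswith("+"):
        base = move.rstrip("+")
        new = base * 2                  # invent a new move name
        rules[base].append(new)         # the base move now enables it
        rules[new] = list(rules)        # the new move can reach every move so far
    history.append(move.rstrip("+"))
    return history

h = play("a", [])
h = play("b+", h)   # 'b' is played and simultaneously enlarges the rules
print(rules)        # the table now contains a move that did not exist at the start
```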
Dialogic Ludics
Hofstadter (1999) does explore this possibility with the example of a version of chess in which moves dictate the rules and rules dictate the moves in a tangled hierarchy. However, for the purposes of this article, I take my cue from Jean-Yves Girard’s ludics, as discussed in this light by Negarestani (2018), with a slight shift in terminology from the previous example: instead of games, there is dialogue; instead of rules, syntax; and instead of moves, semantics.
In ludics, I am interested in the dialogue as a resonant, interactive computational process in which rules (syntax) and meanings (semantics) spontaneously emerge throughout the course of the game or the dialogue. For a given dialogue, the conversation progresses via role switching between interpretation and response functions for speech acts of assertion and questioning. The players in an imaginary dialogue exchange utterances as speech acts in a pragmatic context, upon which themes of further utterances can develop and grow into syntactic and semantic ideas in relation to each other, i.e. syntax derives eigenfunctions from semantics and semantics derives eigenvalues from syntax (the concept of eigenvalue/eigenfunction comes from recursive function theory; von Foerster, 1976). A very similar approach was already hinted at in Brier (1996) as “languaging.”
If, instead of recursive non-ergodic processes, one relied on an ergodic osmosis of information, any such structure of meaning embedded in syntax–semantics correspondences would take many lifetimes of the universe to crystallize. Therefore, in practical terms, only non-ergodic processes can yield ontological expansion of a sufficiently complex system. It is the autopoietic, survivalist nature of “the game” which prefigures contextualization and topicalization in a dialogue, and which selects for survival. This corresponds to Niklas Luhmann’s theory of social-communicative systems where, as Brier (1996) notes, “communication does not transmit meaning but rather requires it as given and as forming a shared background against which informative surprises may be articulated” (p. 239). It should be noted that the partner in the dialogue can be the biosphere itself. Ludics serves as a formal description of Denizhan’s mode B.
The test is the interaction between speech acts and context. The impact of a speech act on the interaction consists in the updating, modification, or erasing of a shared context (Negarestani, 2018, p. 372). In ludics, different interacting strategies are tested not against a pre-established model but against one another. In the process, the rules of logic emerge from the confrontation of strategies, which interact as players in a game rather than as propositions. “Ludics shows that the continuity between syntax and semantics is naturally achieved through an interactive stance toward syntax in its most atomic and naked appearance: the trace of the sign’s occurrence, the locus or place of its inscription” (Negarestani, 2018, p. 366).
The way Negarestani avoids the riddle of epistemic omniscience is by way of embedding his computational model of ludics in an epistemically relevant environment, not just as a background of classical computation. This implies that the environment dynamically and actively interacts with the system and determines its processual trajectory and therefore “it is only in the presence of an environment (another system, machine, or agent) that computation can be understood as an increase (or decrease)—rather than mere preservation—of information” (Negarestani, 2018, p. 345).
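The following is a minimal sketch, assuming a toy representation of the shared context as a set of commitments (the speech-act names are illustrative, not drawn from Girard or Negarestani): each utterance is tested against the context, and its effect is to update or erase entries in it, so that relevance is a product of the exchange rather than a precondition of it.

```python
# Toy sketch of a ludics-style exchange: the "meaning" of an utterance is
# its effect on a shared context, which it may update, modify, or erase.

context: set[str] = set()   # shared background of commitments

def assert_(claim: str) -> None:
    """An assertion adds a commitment to the shared context."""
    context.add(claim)

def challenge(claim: str) -> bool:
    """A challenge tests a commitment; here it is modelled crudely as a
    retraction, erasing the challenged commitment from the context."""
    if claim in context:
        context.discard(claim)   # the commitment is withdrawn under challenge
        return True
    return False

assert_("the path is clear")
assert_("the bridge holds")
challenge("the path is clear")   # this commitment is erased from the shared background
print(context)                   # -> {'the bridge holds'}
```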
Abstraction Buffer
One may note that systems with dynamic phase spaces, whether living and intelligent or not, seem to be the rule rather than the exception. After all, a system which can avoid the influence of its own environment must be so specialized and particular that ignoring environmental forces is feasible. In particular, I argue that if a system is abstract, that is, non-material and conceptual, with only one-sided determinability with respect to its substrate, it necessarily has a static and fixed phase space.
For instance, writing on a piece of paper follows a monodirectional determinability of semiosis: the writing on the paper can divergently determine the interpretation, but the interpretation cannot divergently determine the writing on the paper. Irrespective of the interpretations and their impact on the reader, the ink blots remain where they were, and the “phase space” of the parchment as a writing device, taken in isolation, remains unchanged.
Similarly, a computer, which is an instantiation of a Turing machine, is a static phase space system. One can program any kind of software onto a computer, and the space of possible software is indeed unprestatable; but the device itself remains within its static phase space once the new software is installed, and cannot update that phase space on its own without interacting with external agents. The material body of a computer, like a parchment, is subject to environmental forces, but the abstract Turing machine it manifests, and its phase space, remains entirely unaffected by such forces.
Instances of “self-organizing” or “self-assembling” technologies based on ordinary control loops might be brought up as a counter-argument, but such devices take on that title unjustifiably, as they only “self-organize” within the pre-defined phase space for which they were designed in the first place. They cannot modify their outermost control loop, as they are not in tangled hierarchies. Hence, they are not ontologically independent in the sense that an autopoietic system can be said to be. Their ontological identity is parasitic upon top-down blueprints designed by independent autopoietic systems.
This is significant because it implies that no dialogic ludic loops can be found here. There is no strange loop or tangled hierarchy in these objects, and they cannot sustain any operational independence from their environment as closed systems. The reason is what we may call an unbridged abstraction buffer.
To render this clearly, I must discuss abstraction as a process. To abstract means to take away the concrete context from a contingent phenomenon so as to reduce it to a non-contingent semiotic token of the actual instantiation of the phenomenon. Such a semiotic token retains a selection of qualities or relational correspondences in accordance with criteria of abstraction, selected according to the pragmatic purposes of the model-making edifice. One example is how the Turing machine is an abstraction of a computer, which means that the processes undertaken in a Turing machine can be observed in a computer, but what affects a computer does not necessarily affect a Turing machine.
The abstraction buffer refers precisely to this monodirectionality of determinability observed in such cases as a computer or a writing device like paper. Their abstract counterparts depend on them for their ontology, but not the other way around. It is a strict hierarchy rather than a tangled one. And in such strictly hierarchical systems, one cannot expect the resonance of phase spaces and, hence, the emergence of dynamic intelligence.
The fundamental distinction between a resonant phase space system and a dissonant phase space system can thus be based on the concept of the abstraction buffer. A resonant phase space harbours no such abstraction buffer, whereas for a dissonant phase space such an abstraction buffer is fundamental. This implies that, in a resonant system, any operation in the abstract space which corresponds to the system can have a divergent influence on its substrate, leading to an irreducible self-referentiality in the system.
Circular Causality in Intelligent Systems
The name of the field “cybernetics” comes from the Greek word kybernḗtēs for helmsperson, in the sense of one at the helm steering a ship. In steering a ship, the helmsperson adjusts their steering in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from currents, winds, and tides. This particular example of a cybernetic feedback loop is pregnant with the notion of teleology or “finality.” This finality, as the forward orientation of operationality, is fundamental to all cybernetic systems in some way, be it inherently or extraneously. Notably, Kauffman et al. (2008) remark that “our language is teleological. We believe that autonomous agents constitute the minimal physical system to which teleological language rightly applies” (p. 30). Aristotle distinguished four kinds of causes: the material cause, the efficient cause, the formal cause, and the final cause, the final cause being the only one which succeeds its effect and thereby, in a way, violates the temporal directionality of deterministic or “reactive” causation. This finality becomes the basis for anticipatory systems in Rosen (1985) as a divergence from the Newtonian reactive paradigm.
Von Foerster (2003) points out the relevance of the Aristotelian final cause to cybernetics, going so far as to say that we are all cyberneticians “whenever we justify our actions … with the phrase in English ‘in order to …,’ which in French is much more Aristotelian, ‘à fin de …’” (p. 298). He also points out that, for a process with closed circular causality, “the cause for an effect in the present can be found in the past if one cuts the circle at one spot, and that the cause lies in the future if one does the cutting at the diametrically opposed spot.” By closing the causal chain, he not only puts efficient and final causality in a “dialogue,” but also gets rid of the uncertainty of boundary conditions, as the end conditions themselves constitute the starting conditions. This is where the eigenvalue problem of these eigenfunctions emerges, as “only certain values of those conditions provide a solution for the processes within the circle” (von Foerster, 2003).
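To make the eigenvalue remark concrete, here is a minimal sketch using the iterated cosine as a stand-in for a circularly closed process (the choice of cosine is purely illustrative, not von Foerster’s own example): whatever condition the circle starts from, only a particular value survives being fed back through it.

```python
import math

# Eigenbehaviour as a fixed point of a recursively applied operation:
# feeding the output back as input, the circular process settles on a value
# x* with cos(x*) = x*, independently of the initial condition.

def eigenbehaviour(op, x0: float, n: int = 100) -> float:
    x = x0
    for _ in range(n):
        x = op(x)   # close the loop: the effect becomes the next cause
    return x

print(round(eigenbehaviour(math.cos, 0.1), 6))  # ~0.739085
print(round(eigenbehaviour(math.cos, 2.5), 6))  # ~0.739085: same eigenvalue from a different start
```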
It can be deduced from this that an intelligent system whose teleological goals or aspirations are not pre-processed by an external agent but have an internal locus of origin, in such a way as to constitute its individuated will, requires circular causality in the form of tangled hierarchies which can perpetually modify themselves as context and perspective shift.
Conclusion
In this essay, we have covered a continuous line of argumentation following from the insights provided by various brilliant scholars and polymaths such as Denizhan, Girard, Hofstadter, Rosen, von Foerster, Kauffman, and Longo, in order to critically understand the nature of intelligence as belonging to a particular kind of complexity, for which we may consider a particular class of systems. This class of systems must exhibit the property of a resonant phase space, in that their semiotic expression does not find itself in a static space of possibilities but rather in a dynamic space which updates itself in response to a dialogue with the environment. This condition is referred to as the condition of being in a tangled hierarchy or strange loop of different semiotic levels in a system that inform each other in a closed loop, and hence achieve the level of impredicativity necessary for system-wide self-awareness.
Acknowledgements
I would like to thank the Enacting Cybernetics journal editor Ben Sweeting for their understanding, help, and guidance throughout the process of preparing and publishing the final draft of this article. I would also like to thank Prof. Yagmur Denizhan for her academic contributions which inspired this work, for her support and guidance, and for her gracious permission to use figure(s) from her work in my article. Final thanks to Prajna for her persistence in standing by me with confidence in my every venture undertaken.
Competing Interests
The author has no competing interests to declare.
References
Brier, S. (1996). From second-order cybernetics to cybersemiotics: A semiotic re-entry into the second-order cybernetics of Heinz von Foerster. Systems Research, 13(3), 229–244. https://doi.org/10.1002/(SICI)1099-1735(199609)13:3<229::AID-SRES96>3.0.CO;2-B
Brooks, R. (2002, June 5). Beyond computation: A talk with Rodney Brooks. Edge. http://www.edge.org/3rd_culture/brooks_beyond/beyond_index.html
Chomsky, N. (1993). Language and thought. Moyer Bell.
Denizhan, Y. (2023). Intelligence as a border activity between the modelled and the unmodelled. Angelaki, 28(3), 25–37. https://doi.org/10.1080/0969725X.2023.2216542
Deutsch, D. (2012). Constructor theory. ArXiv. https://doi.org/10.48550/arXiv.1210.7439
El-Samad, H. (2021). Biological feedback control—Respect the loops. Cell Systems, 12(6), 477–487. https://doi.org/10.1016/j.cels.2021.05.004
Fodor, J. A. (2001). The mind doesn’t work that way: The scope and limits of computational psychology. MIT Press. https://doi.org/10.7551/mitpress/4627.001.0001
Hofstadter, D. (1999). Gödel, Escher, Bach: An eternal golden braid. Basic Books.
Hofstadter, D. (2007). I am a strange loop. Basic Books.
Kampis, G. (1991). Self-modifying systems in biology and cognitive science: A new framework for dynamics, information and complexity. Pergamon Press.
Kauffman, S., Logan, R. K., Este, R., Goebel, R., Hobill, D., & Shmulevich, I. (2008). Propagating organization: An enquiry. Biology & Philosophy, 23(1), 27–45. https://doi.org/10.1007/s10539-007-9066-x
Kercel, S. W. (2004). The endogenous brain. Journal of Integrative Neuroscience, 3(1), 61–84. https://doi.org/10.1142/S0219635204000385
Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. ArXiv. https://doi.org/10.1145/2330784.2330946
Negarestani, R. (2018). Intelligence and spirit. Urbanomic; Sequence Press.
Rosen, R. (1985). Anticipatory systems: Philosophical, mathematical and methodological foundations. Pergamon Press.
Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin, and fabrication of life. Columbia University Press.
Rosen, R. (1999). Essays on life itself. Columbia University Press.
von Foerster, H. (1976). Objects: Tokens for (Eigen-)Behaviors. ASC Cybernetics Forum, 8(3–4), 91–96.
von Foerster, H. (2003). Understanding understanding: Essays on cybernetics and cognition. Springer. https://doi.org/10.1007/b97451
Wolpert, D. H. (2007). Physical limits of inference. ArXiv. https://doi.org/10.48550/arXiv.0708.1362