Introduction
Nonmonotonic logic (abbreviated as NML) and its domain, defeasible reasoning, are multifaceted areas. In crafting an Element that serves as both an introduction and an overview, we must adopt a specific perspective to ensure coherence and systematic coverage. It is, however, in the nature of illuminating a scenario with a spotlight that certain aspects emerge prominently, while others recede into shadow. The focus of this Element is on unveiling the core ideas and concepts underlying NML. Rather than exhaustively presenting concrete logics from existing literature, we emphasize three fundamental methods: (i) formal argumentation, (ii) consistent accumulation, and (iii) semantic approaches.
An argumentative approach for understanding human reasoning has been proposed both in a philosophical context by Toulmin’s forceful attack on formal logic (Toulmin, 1958), and more recently in cognitive science by Mercier and Sperber (2011). Pioneers such as Pollock (1991) and Dung (1995) have provided the foundation for a rich family of systems of formal argumentation.
Consistent accumulation methods are based on the idea that an agent facing possibly conflicting and not fully reliable information is well advised to reason on the basis of only a consistent part of that information. The agent could start with certain information and then stepwise add merely plausible information. In this way they stepwise accumulate a consistent foundation to reason with. Accumulation methods cover, for instance, Reiter’s influential default logic (Reiter, 1980) and methods based on maximal consistent sets, such as early logics by Rescher and Manor (1970) and (constrained) input–output logic (Makinson & van der Torre, 2001).
While the previous two methods are largely based on syntactic or proof-theoretic considerations, interpretation plays the essential role in semantic approaches. The core idea is to order interpretations with respect to normality considerations and then to select sufficiently normal ones. These are used to determine the consequences of a reasoning process or to give meaning to nonmonotonic conditionals. The idea surfaces in the history of NML in many places, among others in Batens (1986), Gelfond and Lifschitz (1988), Kraus et al. (1990), McCarthy (1980), and Shoham (1987).
A central aspect of this Element is its unifying perspective (inspired by works such as Bochman (2005) and Makinson (2005)). Defeasible reasoning gives rise to a variety of formal models based on different assumptions and approaches. Comparing these approaches can be difficult. The Element presents several translations between NMLs, illustrating that in many cases the same inferences can be validated in terms of diverse formal methods. These translations offer numerous benefits. They enrich our understanding by offering different perspectives: the same underlying inference mechanism may be considered as a form of (formal) argumentation, as a way of reasoning with interpretations that are ordered with respect to their plausibility, or as a way of accumulating and reasoning with consistent subsets of a possibly inconsistent knowledge base. They demonstrate the robustness of the underlying inference mechanism, since several intuitive methods give rise to the same result. While the different methodological strands of NML have often been developed with little cross-fertilization, it is remarkable that the resulting systems can often be related with relative ease. Finally, the translations may convince the reader that, despite the fact that the field of NML seems a bit of a rag rug at first sight, there is quite some coherence when taking a deeper dive. In particular, by showcasing formal argumentation’s exceptional ability to represent other NMLs, this Element adds further evidence to the fruitfulness of Dung’s program of utilizing formal argumentation as a unifying perspective on defeasible reasoning (Dung, 1995).
The Element is organized in four parts. Part I provides a general introduction to the topic of defeasible reasoning and NML; each of the three core methods is introduced in a nutshell, giving readers with limited time a condensed and self-contained overview of the fundamentals of NML. Parts II to IV deepen the treatment of the respective methods by providing metatheoretic insights and presenting concrete systems from the literature.
While some short metaproofs that contribute to an improved understanding are left in the body of the Element, two technical appendices are provided for others. In particular, results marked with ‘⋆’ are proven in the appendices.
Many important aspects and systems of NML did not get the spotlight and fell victim to the trade-off between systematicity and scope from which an introductory Element of this length necessarily suffers. Nevertheless, with this Element a reader will grow the wings necessary to maneuver in the lands of nonflying birds; that is, they will be well equipped to understand, say, first-order versions of logics that are discussed here on the propositional level, or systems such as autoepistemic logic.
Part I Logics for Defeasible Reasoning
1 Defeasible Reasoning
1.1 What is Defeasible Reasoning?
We certainly want more than we can get by deduction from our evidence. … So real inference, the inference we need for the conduct of life, must be nonmonotonic.
Henry Kyburg (2001)
This Element introduces logical models of defeasible reasoning, so-called NonMonotonic Logics (in short, NMLs). When we reason, we make inferences, that is, we draw conclusions from some given information or basic assumptions. Whenever we reserve the possibility to retract some inferences upon acquiring more information, we reason defeasibly.Footnote 1 Two paradigmatic examples of defeasible inferences are:
| Assumption | Defeasible conclusion | Reason for retraction |
| --- | --- | --- |
| The streets are wet. | It rained. | The streets have been cleaned. |
| Tweety is a bird. | Tweety can fly. | Tweety is a penguin. |
As the examples highlight, we often reason defeasibly if our available information is incomplete: we lack knowledge of what happened before we observed the wet streets, or we lack knowledge of what kind of bird Tweety is. Defeasible inferences often add new information to our assumptions: while being explanatory of the streets being wet, the fact that it rained is not contained in the fact that the streets are wet, and while being able to fly is a typical property of birds, being a bird does not necessitate being able to fly. In this sense defeasible inferences are ampliative.
Logics that may lose conclusions once more information is acquired are called nonmonotonic. The vast majority of logics the reader will typically encounter in logic textbooks are monotonic, with classical logic (in short, CL) being the celebrity. Whenever the given assumptions are true, an inference sanctioned by CL will securely pass the torch from the assumptions to the conclusion, warranting with absolute certainty the truth of the conclusion. Truth is preserved in inferences sanctioned by CL. No matter how much information we add, how many inferences we chain between our premises and our final conclusion, or how often the torch is passed, truth endures: the flames reach their final destination. Thus, inferences are never retracted in CL, and conclusions accumulate the more assumptions we add. This property, called monotonicity, is highly desirable for certain domains of reasoning such as mathematics, a domain where CL reigns.
However, a key motivation behind the development of NML is that out in the wild of commonsense, expert, or scientific reasoning, good inferences need not be truth preservational: we often change our minds and retract inferences when watching a crime show and wondering who the most likely murderer is; medical doctors may change their diagnosis with the arrival of more evidence, and so do scientists, sometimes resulting in scientific revolutions. In less idealized circumstances than those of purely formal sciences (such as mathematics), we usually need to reason with incomplete, sometimes even conflicting information. As a consequence, our inferences allow for exceptions and/or criticism. They are adaptable: learning or inferring more information may cause retraction, previous inferences may get defeated. Outside the ivory tower of mathematics, in the stormy domain of commonsense reasoning, the torch’s fire may get extinguished.
It is therefore not surprising that examples of defeasible reasoning are abundant. In what follows, we will list some paradigmatic examples.
Example 1. We first imagine a scenario at a student party.Footnote 2
1. Peter: I haven’t seen Ruth!
2. Mary: Me neither. If there’s an exam the next day, Ruth studies late in the library.
3. Peter: Yes, that’s it. The logic exam tomorrow!
4. Anne: But today is Sunday. Isn’t the library closed?
5. Peter: True, and indeed there she is! [pointing to Ruth entering the room]
In her reply to Peter’s observation concerning Ruth’s absence (1), Mary states a regularity in form of a conditional (2): If there’s an exam the next day, Ruth studies late in the library. She offers an explanation as to why Ruth is not around. The explanation is hypothetical, since she doesn’t offer any insights as to whether there is an exam. Peter supplements the information that, indeed, (3) there is an exam. Were our students to treat information (2) and (3) in the manner of CL as a material implication, they would be able to apply modus ponens to infer that Ruth is currently studying late in the library.Footnote 3 And, indeed, after utterance (3) it is quite reasonable for Mary and Peter to conclude that
(⋆) Ruth is not at the party since she’s studying late at the library.
Anne’s statement (4) casts doubt on the argument (⋆), since the library might be closed today. This does not undermine the regularity stated by Mary, but it points to a possible exception. Anne’s statement may lead to the retraction of (⋆), which is further confirmed when Peter finally sees Ruth (5): this is defeasible reasoning in action!
Defaults. Statements such as “Birds fly.” allow for exceptions. It is therefore not surprising that one of the most frequent characters in papers on NML is Tweety. While the reader may sensibly infer that Tweety can fly when they are told that Tweety is a bird, they might be skeptical when being informed that Tweety lives at the South Pole, and most definitely will retract the inference as soon as they hear that Tweety is a penguin.Footnote 4 As we have also seen in our example, we often express regularities in the form of conditionals – so-called default rules, or simply defaults – that hold typically, mostly, plausibly, and so on, but not necessarily.
Closed-World Assumption. Often, defeasible reasoning is rooted in the fact that communication practices are based on an economic use of information. When making lists such as menus at restaurants or timetables at railway stations, we typically only state positive information. We interpret (and compile) such lists under the assumption that what is not listed is not the case. For instance, if a meal or connection is not listed, we consider it not to be available. This practice is called the closed-world assumption (Reiter, Reference Reiter1981).
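To make this concrete, here is a minimal Python sketch of the closed-world assumption; the menu and all names are our own illustrative choices, not drawn from the literature.

```python
# A minimal sketch of the closed-world assumption (CWA): whatever is not
# explicitly listed as available is assumed to be unavailable.
# All names here are illustrative.

menu = {"soup", "pasta", "salad"}  # the positively stated information

def cwa_infer(item: str, facts: set[str]) -> str:
    """Under CWA, absence from the stated facts licenses a negative conclusion."""
    if item in facts:
        return f"{item} is available"
    return f"{item} is NOT available (inferred by CWA)"

print(cwa_infer("pasta", menu))  # pasta is available
print(cwa_infer("steak", menu))  # steak is NOT available (inferred by CWA)

# The inference is defeasible: adding "steak" to the menu retracts it.
print(cwa_infer("steak", menu | {"steak"}))  # steak is available
```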
Rules with Explicit Exceptions. Before presenting more examples of defeasible reasoning, let us halt for a moment to address a possible objection. Is CL really inadequate as a model of this kind of reasoning? Can’t we simply express all possible exceptions as additional premises? For instance,
(†) If there’s an exam the next day and the library is open late and Ruth is not ill and on her way didn’t get into a traffic jam and …, then Ruth studies late in the library.
There are several problems with this proposal. The first concerns the open-ended nature of the list of exceptions, which characterizes most rules expressing what typically/usually/plausibly holds. Even in the (rare) cases in which it is – in principle – possible to compile a complete list of exceptions, the resulting conditional will not adequately represent a reasoning scenario in which our agent may not be aware of all possible exceptions. They may merely be aware of the possibility of exceptions and be able, if asked, to list some (such as penguins as nonflying birds). Others may escape them (such as kiwis), but they would readily retract their inference that Tweety flies after learning that Tweety is a kiwi. In other words, the complexities involved in generating explicit lists of exceptions are typically far beyond the capacities of real-life and artificial agents. What is more, in order to apply modus ponens to conditionals such as (†), our reasoner would have to first check whether each possible exception holds. This may be impossible for some exceptions and infeasible for others, and altogether it would put the pace of reasoning needed to cope with real-life situations out of reach.
In contrast to reasoning from fixed sets of axioms in mathematics, commonsense reasoning needs to cope with incomplete (and possibly conflicting) information. In order to get off the ground, it (a) jumps to conclusions based on regularities that allow for exceptions and (b) adapts to potential problems in the form of exceptional circumstances on the fly, by means of the retraction of previous inferences.
Abductive Inferences. Another type of defeasible reasoning concerns cases in which we infer explanations of a given state of affairs (also called abductive inferences). For instance, upon seeing the wet street in front of her apartment, Susie may infer that it rained, since this explains the wetness of the streets. However, when Mike informs her that the streets have been cleaned a few minutes ago, she will retract her inference. We see this kind of inference often in diagnosis and investigative reasoning (think of Sherlock Holmes or a scientist wondering how to interpret the outcome of an experiment). As both the exciting histories of the sciences and the twisted narratives of Sir Arthur Conan Doyle reveal, abductive inference is defeasible.
Inductive Generalizations. In both scientific and everyday reasoning, we frequently rely on inductive generalizations. Having seen only white swans, a child may infer that all swans are white, only to retract the inference during a walk in the park when a black swan crosses their path.
These are some central types of defeasible inference, but far from the only ones. A more exhaustive and systematic overview can be found, for instance, in Walton et al. (2008), where they are informally analyzed in terms of argument schemes.Footnote 5
1.2 Challenges to Models of Defeasible Reasoning
Formal models of defeasible reasoning face various challenges. Let us highlight some.
1.2.1 Human Reasoning and the Richness of Natural Language
As we have seen, defeasible reasoning is prevalent in contexts in which agents are equipped with incomplete and uncertain information. By providing models of defeasible reasoning, NMLs are of interest to both philosophers investigating the rationality underlying human reasoning and computer scientists interested in the understanding and construction of artificially intelligent agents. Human reasoning has a peculiar status in both investigations in that selected instances of it serve as role models of rational and successful artificial reasoning. After all, humans are equipped with a highly sophisticated cognitive system that has evolutionarily adapted to an environment of which it only has incomplete and uncertain information. Therefore, it seems quite reasonable to assume that we can learn a good deal about defeasible reasoning, including the question of what is good defeasible inference, by observing human inference practices.
There are, however, several complications that come with the paradigmatic status of human defeasible reasoning. First, human reasoning is error-prone, which means we have to rely on selected instances of good reasoning. But what are exemplars of good reasoning? In view of this problem, very often nonmonotonic logicians simply rely on their own intuitions. There are good reasons why one should not let expert intuition be the last word on the issue. We may be worried, for instance, about the danger of myside bias (also known as confirmation bias; see Mercier and Sperber (2011)): intuitions may be biased toward satisfying properties of the formal system that is proposed by the respective scholar.
Then, there is the possibility of “déformation professionnelle,” given that the expert’s intuitions have been fostered in the context of a set of paradigmatic examples about penguins with the name Tweety, ex-US presidents (see Examples 2 and 3), and the like.Footnote 6
Another complication is the multifaceted character of defeasible reasoning in human reasoning. First, there is the variety of ways we can express in natural language regularities that allow for exceptions. We have “Birds fly,” “Birds typically fly,” “Birds stereotypically fly,” “Most birds fly,” and so on, none of which are synonymous: for example, while tigers stereotypically live in the wild, most tigers live in captivity. What is more important, the different formulations may give rise to different permissible inferences. Consider the generic “Lions have manes.” While having a mane implies being a male lion, “Lions are males” is not acceptable (Pelletier & Elio, 1997). The inference pattern blocked here is known as right weakening: if A by default implies B, and C follows classically from B, then C follows by default from A as well. It is valid in most NMLs, and it seems adequate for the “typical,” “stereotypical,” and “most” readings of default rules, but not for some generics.Footnote 7 For NMLs this poses the challenge of keeping in mind the intended interpretation of defaults and the differences in the underlying logical properties to which various interpretations give rise.
Despite these problems, it seems clear that “reasoning in the wild” should play a role in the validation and formation of NMLs.Footnote 8 This pushes NML into proximity with psychology. In practice, nonmonotonic logicians try to strike a good balance by obtaining metatheoretically well-behaved formal systems that are to some degree intuitively and descriptively adequate relative to (selected) human reasoning practices.
1.2.2 Conflicts and Consequences
Defeasible arguments frequently conflict. This poses a challenge for normative theories of defeasible reasoning, which must specify the conditions under which inferences remain permissible in such scenarios.
For this discussion, some terminology and notation will be useful. An argument (in our technical sense) is obtained either by stating basic assumptions or by applying inference rules to the conclusions of other arguments. An argument is defeasible if it contains a defeasible rule (such as a default), symbolized by ⇒. Such an argument may also include truth-preservational strict inference rules (such as the ones from CL), symbolized by →. A conflict between two arguments arises if they lead to contradictory conclusions A and ¬A (where ¬ denotes negation).
Let us now take a look at two paradigmatic examples.
Example 2 (Nixon; Reiter and Criscuolo (1981)). One of the most well-known examples in NML is the Nixon Diamond (see Fig. 1):Footnote 9
1. Nixon is a Dove.
Nixon→Dove
2. Nixon is a Quaker.
Nixon→Quaker
3. By default, Doves are Pacifists.
Dove⇒Pacifist
4. By default, Quakers are not Pacifists.
Quaker⇒¬Pacifist
Given the conflict between the arguments Nixon→Dove⇒Pacifist and Nixon→Quaker⇒¬Pacifist, should we conclude that Nixon is (not) a pacifist? It seems an agnostic stance is recommended in this example.
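For readers who like to experiment, the following Python sketch reproduces the agnostic stance via a naive computation of the grounded extension in the sense of Dung-style abstract argumentation; the argument names and the encoding are illustrative assumptions on our part.

```python
# The Nixon Diamond as a tiny abstract argumentation framework: two
# arguments attack each other, and grounded semantics accepts neither.
# Argument names and encoding are illustrative.

arguments = {"A_dove": "Pacifist", "A_quaker": "¬Pacifist"}
attacks = {("A_dove", "A_quaker"), ("A_quaker", "A_dove")}  # mutual conflict

def grounded_extension(args, attacks):
    """Least fixpoint: accept each argument all of whose attackers are defeated."""
    accepted = set()
    while True:
        defeated = {target for (src, target) in attacks if src in accepted}
        new = {a for a in args
               if all(src in defeated for (src, target) in attacks if target == a)}
        if new == accepted:
            return accepted
        accepted = new

ext = grounded_extension(set(arguments), attacks)
print("Accepted arguments:", ext)                             # set()
print("Skeptical conclusions:", {arguments[a] for a in ext})  # set(): agnosticism
```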
Example 3 (Tweety; Doyle and McDermott (1980)). Another well-known example is Tweety the penguin (see Fig. 2) based on the following information:
1. Tweety is a penguin.
Tweety→penguin
2. Penguins are birds.
penguin→bird
3. By default, birds fly.
bird⇒fly
4. By default, penguins don’t fly.
penguin⇒¬fly
We use the example to demonstrate a way to resolve conflicts among defeasible arguments, here between (a) Tweety→penguin→bird⇒fly and (b) Tweety→penguin⇒¬fly. According to the specificity principle, more specific defaults such as penguin⇒¬fly are prioritized over less specific ones, such as bird⇒fly. The reason is that more specific defaults may express exceptions to the more general ones. So, in this example the preferred outcome ¬fly will be obtained, since the less specific defeasible argument (a) should be retracted in favor of (b).
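The following Python sketch illustrates one naive reading of the specificity principle: of two conflicting defaults, the one whose antecedent strictly implies the other’s antecedent wins. The hard-coded taxonomy and all names are illustrative assumptions, not a fixed algorithm from the literature.

```python
# Conflict resolution by specificity: a default with a more specific
# antecedent overrides a conflicting, less specific one. Illustrative only.

strict = {("Tweety", "penguin"), ("penguin", "bird")}  # strict taxonomy

def strictly_implies(x, y, strict):
    """True if a chain of strict rules leads from x to y."""
    frontier, seen = {x}, set()
    while frontier:
        cur = frontier.pop()
        if cur == y:
            return True
        seen.add(cur)
        frontier |= {t for (s, t) in strict if s == cur and t not in seen}
    return False

def resolve(d1, d2, strict):
    """Return the preferred default among two conflicting ones, if any."""
    if strictly_implies(d1[0], d2[0], strict):
        return d1  # d1's antecedent is more specific
    if strictly_implies(d2[0], d1[0], strict):
        return d2
    return None  # incomparable antecedents: remain agnostic (cf. the Nixon Diamond)

print(resolve(("penguin", "¬fly"), ("bird", "fly"), strict))
# ('penguin', '¬fly') -- the more specific default wins: Tweety does not fly
```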
Figure 1 The Nixon Diamond from Example 2. Double arrows symbolize defeasible rules, single arrows strict rules, and wavy arrows conflicts. Black nodes represent unproblematic conclusions, while light nodes represent problematic conclusions. Rectangular nodes represent the starting point of the reasoning process. We use the same symbolism in the following figures.
Figure 2 Tweety and specificity, Example 3.
Figure 2 long description: A single arrow from penguin leads to bird (black node). A double arrow from penguin leads to ¬fly (black node) and from bird leads to fly (light node). A wavy arrow is drawn between fly and ¬fly.
Our examples indicate that, first, conflicts between defeasible arguments can occur, and second, the context may determine whether and, if so, how a conflict can be resolved. We now take a look at two further challenges that come with conflicts in defeasible reasoning.
Figure 3 encodes the following information: A⇒B, A⇒C, A, and ¬B. Should we infer C? Nonmonotonic logics that block this inference have been said to suffer from the drowning problem (Benferhat et al., 1993). Examples like the following seem to suggest that we should accept C.
Figure 3 A drowning scenario.
Example 4. We consider the scenario:
1. Micky is a dog.
Micky→A
2. Dogs normally (have the ability to) tag along with a jogger.
A⇒B
3. Dogs normally (have the ability to) bark.
A⇒C
4. Micky lost a leg and can’t tag along with a jogger.
Micky→¬B
In this example it seems reasonable to infer C, that Micky has the ability to bark, despite the presence of ¬B. In other contexts one may be more cautious when jumping to a conclusion.
Example 5. Take the following scenario.
1. It is night.
A
2. During the night, the light in the living room is usually off.
A⇒B
3. During the night, the heating in the living room is usually off.
A⇒C
4. The light in the living room is on.
¬B
In this scenario it seems less intuitive to infer C, that the heating in the living room is off. The fact that we have in (4) an exception to default (2) may have an explanation in the light of which default (3) is also excepted: for example, the inhabitant forgot to check the living room before going to sleep, she is not at home and left the light and heating on before leaving, she is still in the living room, and so on.
These examples show that concrete reasoning scenarios often contain a variety of relevant factors that influence what real-life reasoners take to be intuitive conclusions. Specific NMLs typically only model a few of these factors and omit others. For instance, although Elio and Pelletier (1994) and Koons (2017) argue that it is useful to track causal and explanatory relations in the context of drowning problems, systematic research in this direction is lacking.
Another class of difficult scenarios has to do with so-called floating conclusions.Footnote 10 These are conclusions that follow from each of two opposing arguments. Formally, such a scenario may be as depicted in Fig. 4.
Figure 4 A scenario with the floating conclusion C.
Example 6. Suppose two generally reliable weather reports:
1. Station 1: The hurricane will hit Louisiana and spare Alabama.
A1⇒B1
2. Station 2: The hurricane will hit Alabama and spare Louisiana.
A2⇒B2
3. If the hurricane hits Louisiana, it hits the South coast.
B1→C
4. If the hurricane hits Alabama, it hits the South coast.
B2→C
The floating conclusion, (5), The storm will probably hit the South coast, may seem acceptable to a cautious reasoner. The rationale is that both reports agree on the upcoming storm and even roughly on where it will hit. The disagreement may be due to different weightings of diverse factors in their respective underlying scientific weather models. But the combined evidence of both stations seems to confirm conclusion (5) rather than disconfirm it. This is not always the case with partially conflicting expert statements, as the next example shows.
Example 7. Assume two expert reviewers, Reviewer 1 and Reviewer 2, evaluating Anne for a professorship. She sent in two manuscripts, A and B.
1. Reviewer 1: Manuscript A is highly original, while manuscript B repeats arguments already known in the literature.
A1⇒B1
2. According to Reviewer 1, one manuscript is highly original.
B1→C
3. Reviewer 2: Manuscript B is highly original, while manuscript A repeats arguments already known in the literature.
A2⇒B2
(We assume the inconsistency of B1 with B2.)
4. According to Reviewer 2, one manuscript is highly original.
B2→C
Should we conclude that one manuscript is highly original, since it follows from both reviewers’ evaluations? It seems a more cautious stance is advisable. The disagreement may well be an indication of the sub-optimality of each of the two reviews. Indeed, a possible explanation of their conflicting assessments could be that (a) Reviewer 1 is aware of an earlier article B′ (by another author than Anne) that already makes the arguments presented in B and which is not known to Reviewer 2, and, vice versa, (b) Reviewer 2 is aware of an earlier article A′ in which similar arguments to those in A are presented. In view of this possibility, it would seem overly optimistic to infer that Anne has a highly original article in her repertoire.
2 Central Concepts
Nonmonotonic logics are designed to answer the question of what the (defeasible) consequences of a given body of information are. This gives rise to the notion of a nonmonotonic consequence relation. In this section we explain this central concept and some of its properties from an abstract perspective (Section 2.2). Nonmonotonic consequences are obtained by means of defeasible inferences, which are themselves obtained by applying inference rules. We discuss two ways of formalizing such rules in Section 2.3. Before doing so, we introduce some basic notation in Section 2.1.
2.1 Notation and Basic Formal Concepts
Let us get more formal. We assume that sentences are expressed in a (formal) language L. We denote the standard connectives in the usual way: ¬ (negation), ∧ (conjunction), ∨ (disjunction), ⊃ (implication), and ≡ (equivalence). We use lowercase letters p, q, s, t, … as propositional atoms, collected in the set Atoms, and uppercase letters A, B, C, D, … as metavariables for sentences such as p, p∧q, or (p∨q)⊃r. We denote the set of sentences underlying L by sentL. In the context of classical propositional logic, and typically in the context of a Tarski logic (see later), this will simply be the closure of the atoms under the standard connectives.Footnote 11 We denote sets of sentences by the uppercase calligraphic letters A, S, and T. Where S is a finite nonempty set of sentences, we write ⋀S and ⋁S for the conjunction resp. the disjunction over the elements of S.Footnote 12
A consequence relation, denoted by ⊢, is a relation between sets of sentences and sentences: S⊢A denotes that A is a ⊢-consequence of the assumption set S. So, the left side of ⊢ encodes the given information resp. the assumptions on which the reasoning process is based, while the right side encodes the consequences which are sanctioned by ⊢ given S.
We will often work in the context of Tarski logics L, whose consequence relations ⊢L are reflexive (S∪{A}⊢LA), transitive (S⊢LA and S∪{A}⊢LB implies S⊢LB), and monotonic (Definition 2.1). We will also assume compactness (if S⊢LA, then there is a finite S′⊆S for which S′⊢LA). The most well-known Tarski logic is, of course, classical logic CL.
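As a concrete reference point, the following Python sketch implements classical propositional entailment by brute-force truth tables (formulas encoded as nested tuples, an encoding chosen purely for illustration). It lets one observe the Tarski properties directly; in particular, adding premises never removes consequences.

```python
# Brute-force truth-table entailment for classical propositional logic.
# Formulas: an atom is a string; compound formulas are tuples such as
# ("neg", f), ("and", f, g), ("or", f, g). Purely illustrative.
from itertools import product

def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "neg":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    if op == "or":
        return holds(f[1], v) or holds(f[2], v)
    raise ValueError(f"unknown connective: {op}")

def entails(S, A):
    """S entails A iff every valuation making all of S true makes A true."""
    voc = sorted(set().union(atoms(A), *(atoms(B) for B in S)))
    for bits in product([True, False], repeat=len(voc)):
        v = dict(zip(voc, bits))
        if all(holds(B, v) for B in S) and not holds(A, v):
            return False
    return True

S = [("or", "p", "q"), ("neg", "p")]
print(entails(S, "q"))          # True: disjunctive syllogism
print(entails(S + ["r"], "q"))  # True: monotonicity -- extra premises are harmless
```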
2.2 An Abstract View on Nonmonotonic Consequence
The following definition introduces one of our key concepts: nonmonotonic consequence relations.
Definition 2.1. A consequence relation ⊢ is monotonic iff (“if and only if”) for all sets of sentences S and T and every sentence A it holds that S∪T⊢A if S⊢A. It is nonmonotonic iff it is not monotonic.
We use |∼ as a placeholder for nonmonotonic consequence relations. Our definition expresses that for a nonmonotonic consequence relation |∼ there are sets of sentences S and T and a sentence A for which S|∼A while S∪T̸|∼A (i.e., A is not a |∼-consequence of S∪T).
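To see Definition 2.1 in action, here is a toy nonmonotonic consequence relation in Python, based on the closed-world assumption from Section 1.1: every atom of a fixed vocabulary that is not stated in S is concluded to be false. The vocabulary and all names are illustrative.

```python
# A toy nonmonotonic consequence relation: close the stated atomic facts
# under the closed-world assumption. Illustrative only.

VOCAB = {"p", "q", "r"}

def cwa_consequences(S: set[str]) -> set[str]:
    """S |~ every stated atom plus the negation of every unstated atom."""
    return S | {f"¬{a}" for a in VOCAB - S}

print(cwa_consequences({"q"}))       # {'q', '¬p', '¬r'}
print(cwa_consequences({"q", "p"}))  # {'q', 'p', '¬r'} -- '¬p' has been retracted

# Monotonicity fails: {"q"} |~ ¬p, yet {"q", "p"} does not |~ ¬p.
```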
In the following we will introduce some properties that are often discussed as desiderata for nonmonotonic consequence relations.Footnote 13 A positive account of what kind of logical behavior to expect from these relations is particularly important given that ‘nonmonotonicity’ only expresses a negative property. This immediately raises the question of whether there are restricted forms of monotonicity that one would expect to hold even in the context of defeasible reasoning. One proposal is:
Cautious Monotonicity (CM). S∪{B}|∼A, if S|∼A and S|∼B.Footnote 14
Whereas nonmonotonicity expresses that adding new information to one’s assumptions may lead to the retraction resp. the defeat of previously inferred conclusions, CM states that some type of information is safe to add: namely, adding a previously inferred conclusion does not lead to the loss of conclusions.
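For the toy closed-world relation sketched earlier (here extended so that stated facts may be arbitrary literals), CM can even be verified by brute force over the whole vocabulary; this is only an illustrative check on a finite toy model, not a general proof method.

```python
# Brute-force check of Cautious Monotonicity (CM) for a toy CWA relation
# whose facts are literals (atoms or negated atoms). Illustrative only.
from itertools import combinations

VOCAB = {"p", "q", "r"}

def consequences(facts: frozenset) -> frozenset:
    """Facts plus the negation of every atom not mentioned in the facts."""
    mentioned = {lit.lstrip("¬") for lit in facts}
    return facts | {f"¬{a}" for a in VOCAB - mentioned}

def check_cm() -> bool:
    """For every set of stated atoms S and all A, B in Cn(S): A stays in Cn(S ∪ {B})."""
    for n in range(len(VOCAB) + 1):
        for S in map(frozenset, combinations(sorted(VOCAB), n)):
            cn = consequences(S)
            if any(A not in consequences(S | {B}) for A in cn for B in cn):
                return False
    return True

print(check_cm())  # True: this toy relation satisfies CM
```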
We sketch the underlying rationale. Suppose S|∼A and S|∼B. In view of S|∼A, the defeasible consequence A of S is sanctioned. So, S does not contain defeating information