Reflective equilibrium has been proposed as a methodology for logical
theorising and, indeed, as a procedure for justifying our logical
knowledge at least since Goodman’s “new riddle of induction.”
In recent years, interest in it has resurged, particularly in the wake of
the advances of the anti-exceptionalist programme in logic. The
general background for this paper will be given by a modest form of
anti-exceptionalism, compatible with logical immanentism—the view that
logic is immanent in language (see e.g. Brandom 2000)—which claims
that the epistemology of logics is fallibilist (see
e.g. Peregrin and Svoboda 2013, 2016, 2017; Read 2000).
In this paper, I will argue against the thesis that reflective
equilibrium is a viable methodology for logical theorising. This
negative thesis does not deny that the phenomenology of logical
inquiry could be described, at least in part, in accordance with the
pattern provided by reflective equilibrium (hereafter often abbreviated
as “RE”). This I gladly grant and duly deplore, for I believe
that, ultimately, it is the plausibility of this way of describing
logical inquiry that is at the core of the misguided tenet that
RE is a meaningful methodology for logic. Instead, my claim is
that the processes normally associated with logical investigations are
too complex, too abstract, and too “theoretical” to be in any
substantive sense guided by RE. I will present my
arguments against reflective equilibrium via three case studies of
currently debated issues among logicians. These vignettes will, I hope,
drive home the following three points:
The first is that logical theorising is systematically biased in
favour of theoretical considerations and so RE is, qua
methodology, too weak.
The second is that RE underdetermines both the
identification of the specific problems one encounters in “the formation
of logics,” i.e. problematisation, and the problem-solving process
itself.
The third and final point I wish to make is that RE
systematically favours weaker logics.
Reflective Equilibrium
So what is reflective equilibrium? In its most exalted sense, it is
the ultimate justification procedure open to some of our beliefs,
including our logical beliefs. In a more modest sense, it is a
methodology in processes like formalisation, theorification, modelling,
etc. These two senses of RE are connected and it takes but a
small (up and ahead) step from the latter to the former. Both are
evident in a celebrated remark of Goodman’s, worth reproducing here
in extenso:
Principles of deductive inference are justified by their conformity
with accepted deductive practice. Their validity depends upon accordance
with the particular deductive inferences we actually make and sanction.
If a rule yields inacceptable inferences, we drop it as invalid.
Justification of general rules thus derives from judgments rejecting or
accepting particular deductive inferences.
This looks flagrantly circular. I have said that deductive inferences
are justified by their conformity to valid general rules, and that
general rules are justified by their conformity to valid inferences. But
this circle is a virtuous one. The point is that rules and particular
inferences alike are justified by being brought into agreement with each
other. A rule is amended if it yields an inference we are unwilling
to accept; an inference is rejected if it violates a rule we are
unwilling to amend. […] [I]n the agreement achieved lies the only
justification needed for either. (1955, 63–64)
Much of what I have to say will target RE qua
methodology. This is because I take it that whatever problems beset it
in this capacity also affect its status as a state that justifies a body
of beliefs: RE is supposed to generate an eponymous doxastic
state in which one’s logical beliefs are justified. But if the process
does not warrant the cogency of its outcomes, then what value can there
be to either? A state of RE may be seen as one in which no further
development of one’s theories is possible because there are no more
apparent problems to resolve. Yet the same situation
could ensue from a lack of curiosity, a deficit of
imagination, or low epistemic standards. This kind of epistemic
“tranquillity” is a non-specific symptom. Insofar as it has any value,
this is due to the inherent virtues of the process that leads to it.
So what is this methodology? Goodman’s original description refers
only to inferences, principles of inference and the relation between
them. But we may well suppose that articulating this relation involves a
few more ingredients. So, expanding a bit on the original schematic
proposal, we can easily get a prima facie plausible story that
goes along the following lines: One starts with a body of inchoate,
perhaps practical or intuitive, knowledge of a certain domain—for
instance, that associated with the dispositions to infer manifested in
the daily ratiocinative practice, or even that obtained by a modicum of
reflection on the practice. That is, one starts with the knowledge
expressed in pre- or quasi-theoretical claims like “this argument is
valid,” “that doesn’t follow,” or perhaps even “valid arguments are
truth-preserving,” etc. Call this “1-knowledge.”
This body of pre-theoretical knowledge is apt for further
regimentation, precisification and expansion—by fine-tuning the
conceptual apparatus behind it, by discovering novel, perhaps more
abstract or more general, relations between its objects, by forming new
hypotheses, proving general statements, etc. Thus, one moves from the
knowledge that a particular item is an argument to a general account of
what arguments are, from the belief that valid arguments preserve truth
to beliefs like “valid deductive arguments preserve designated value on
Tarskian models,” etc. Call (all) this “2-knowledge.”
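To fix ideas, a belief of the latter sort, spelled out in full, would take something like the familiar model-theoretic shape (with \(D\) the set of designated values and \(v_{\mathcal{M}}\) the valuation induced by a Tarskian model \(\mathcal{M}\)):
\[
X \vDash A \quad\text{iff}\quad \text{for every model } \mathcal{M}: \text{ if } v_{\mathcal{M}}(B)\in D \text{ for every } B\in X, \text{ then } v_{\mathcal{M}}(A)\in D.
\]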
The development and refinement of 2-knowledge—or, in one
word, theorification—proceeds and is kept in check by balancing
it against 1-knowledge. Theoretical pronouncements are measured
against the pre-theoretical knowledge that inspired them in the first
place. For instance, a rather bad putative definition of
argument as “speech in which, out of two given things, a third
follows” is suitably modified upon realising that many (things that are
usually called) arguments have more or less than two premises (given
things) and may well derive a conclusion (third thing) that is, in fact,
identical to (one of) the premise(s).
At the same time, 1-knowledge is, at least potentially,
modifiable in light of 2-knowledge. For instance, it may be
that 1-knowledge does not provide for a distinction between
inductive and deductive arguments (though maybe it could), whereas
2-knowledge does. This theoretical distinction may inform
1-knowledge and we may see hosts of savvy informal reasoners
resorting to it in everyday contexts. Or it may be that
pre-theoretically we are disposed to infer in accordance with a certain
form of argument but, in virtue of general principles of validity
developed as part of 2-knowledge, we come to see that this is
not the case (cf. infra, the discussion of the \(\omega\)-rule for an illustration of this
case.)
Our logical theories and, with them, logical knowledge, are obtained
and justified as a result of this trade-off between pre-theoretical and
theoretical beliefs.
Formalisation and the Formation
of Logics
Goodmanian reflective equilibrium seems to presuppose a
non-conventionalist view of logic. At any rate, it is easier to grasp
the problems of RE if we assume, without loss of generality,
such a view. Recall Carnap’s famous principle of tolerance:
In logic there are no morals. Everyone is at liberty to build his own
logic, i.e. his own form of language, as he wishes. All that is required
of him is that, if he wishes to discuss it, he must state his methods
clearly, and give syntactical rules instead of philosophical arguments.
(1937, sec.
17)
For Carnap, the standard for the success of logics is not the extent
to which they “correspond” to natural language, the medium of human
reasoning, but rather their usefulness relative to the purposes for
which they were designed.
Not so for the view that will provide the background for the present
discussion. On it, the relation between natural language and the logical
formalism must go beyond the latter’s usefulness in analysing the
former. For specificity’s sake, let our underlying view of logic be that
it is obtained via a process of formalisation, understood as “a
kind of extraction […] of logical form” out of natural language (Peregrin and
Svoboda 2016, 4)—see also Peregrin and
Svoboda (2013, 2017).
The image suggested by RE is readily seen to fit some
scenarios of “formalisation” which are marked by but two parameters:
An informal argument like (arg): “Socrates is mortal because all
men are mortal.”
A target logical system (e.g. first-order logic) or perhaps
merely a target logical syntax (e.g. Fregean syntax, by which I mean the
sort of syntax that explicitly features sentential operators and
construes atomic declarative sentences as having function-argument form,
as opposed to, say, subject-predicate form).
Suppose now that we go about formalising (arg) in the Fregean syntax—our
target (tar). We already know its syncategoremata: expressions like
“all,” “some,” the (grammatical) conjunctions “and,” “or,” “if … then,”
etc. We also know, by and large, how to deal with them in (tar). All in
all, we could arrive at the following schematic rendering of (arg):
\[\begin{prooftree}
\AXC{$\forall x Mx$}
\UIC{$Ms$}
\end{prooftree}\]
of which we make sense via a key that says that “\(M\)” stands for mortal, “\(x\)” is a variable ranging over the
extension of “man,” and “\(s\)” an
individual constant, standing for Socrates.
It’s no achievement to see that this is a suboptimal—indeed, plainly
wrong—formalisation of (arg). For one thing, “All men are mortal” was
rendered formally rather dumbly. For instance, man and
mortal were placed in distinct grammatical categories. Not only
is this unpleasantly non-uniform, but it also obscures the predicate
status of man. We would do better to render this premise as
“\(\forall x (Wx\to Mx)\),” with “\(W\)” standing for man and \(x\) ranging over a (generic) class of
objects. (Note that this is already a good step away from the “surface”
grammar of English.) So we get an improved rendering of (arg), namely:
\[\begin{prooftree}
\AXC{$\forall x (Wx\to Mx)$}
\UIC{$Ms$}
\end{prooftree}\]
the validity of which we then check in (tar). Obviously, it is not valid.
Does this mean that the conclusion of (arg) does not follow logically
from the premise? Well, yes, it does mean that; still, we wouldn’t want
to say that “Socrates is mortal” may be false when “All men are mortal”
is true. In this sense, we would not want to revise our commitment to
(arg). We figure out that we need another premise, “Socrates is a man,”
in order to validate both (arg) and its formalisation.
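With the new premise formalised as “\(Ws\),” we finally arrive at a rendering that (tar) certifies as valid:
\[\begin{prooftree}
\AXC{$\forall x (Wx\to Mx)$}
\AXC{$Ws$}
\BIC{$Ms$}
\end{prooftree}\]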
And so on and so forth: I am not particularly bent on boring the
reader with logical trivia. The salient point is that all this happens
within the confines of a more or less precise target formalism. At this
level of formalisation, it is quite plausible to see our
endeavours as governed by RE.
The formation of logics, to appropriate a term used by Peregrin and
Svoboda (2016, 2017), is, as it were, the next level of
formalisation-qua-extraction. One obtains a logic by making explicit
(cf. Brandom
1994) and bringing together into a coherent ensemble the
principles governing informal reasoning. No matter how generous our
notion of formalisation is, this is no mere formalisation, as a
few examples will show.
Consider first the case of a working mathematician who believes, in
the first instance, that the \(\omega\)-rule:
\[\begin{prooftree}
\AXC{$P(0)$}
\AXC{$P(1)$}
\AXC{$\dots$}
\AXC{$P(n)$}
\AXC{$\dots$}
\QuinaryInfC{$\forall x (x\in{\mathbb N}\to Px)$}
\end{prooftree}\]
is logically valid.
Subsequently, and in light of various 2-knowledge
beliefs—inference rules are finitary, logic is topic-neutral, “natural
number” does not express a logical property, logicism fails because of
Russell’s paradox, etc.—she changes her mind and decides not only that
the \(\omega\)-rule is not part of
logic, but also that its syntactic structure, and in particular its
infinite number of premises, make it not an inference rule at all.
Take now Peano’s axiom of induction. Its natural formulation involves
quantification over properties: \[\forall P (
P(0) \land \forall n (P(n) \to P(n+1)) \to \forall n P(n) )\] For
various (theoretical) reasons, this kind of formalisation was thought
best to be avoided and first-order logic, in which the quantifiers range
only over individuals, became the norm (for more on this, see Eklund 1996).
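In the first-order setting, the single second-order axiom gives way to an axiom schema, with one instance for each first-order arithmetical formula \(\varphi(x)\):
\[
(\varphi(0) \land \forall n\,(\varphi(n)\to\varphi(n+1))) \to \forall n\,\varphi(n)
\]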
The demise of second-order formalisms has little to do with what goes on
in natural language, where (apparent) quantification over properties is
certainly present. It was and, to the extent that the controversy is
alive, it still is a matter of deploying heady theoretical
considerations. Languages may carry logics inside
them, but it is still up to the logicians to decide what to bring to the
surface and how.
A third example will also illustrate the fact that, in many cases,
the practice is not at all coherent and it cannot light our way in a
simple fashion. Take the following rules governing a truth predicate
\(T\):
\[\begin{prooftree}
\AXC{$A$}\RightLabel{$T$-I}
\UIC{$T\langle A\rangle$}
\end{prooftree}\]\[\begin{prooftree}
\AXC{$T\langle A\rangle$}\RightLabel{$T$-E}
\UIC{$A$}
\end{prooftree}\]
They seem innocuous enough. But add some equally innocuous reasoning
principles and pick the sentence named by \(\langle A\rangle\) so that it is “This
sentence is false” and all hell breaks loose, i.e. any sentence follows
from any sentence. Deciding how to handle these issues
significantly exceeds what can be reasonably characterised as a process
of formalisation.
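To sketch the familiar route to disaster just gestured at (reading “false” as “not true,” and helping ourselves only to reductio and ex falso):
\[
\begin{aligned}
1.\;& T\langle A\rangle && \text{assumption, for reductio}\\
2.\;& A, \text{ i.e. } \neg T\langle A\rangle && \text{from 1 by } T\text{-E; contradicts 1}\\
3.\;& \neg T\langle A\rangle, \text{ i.e. } A && \text{reductio, discharging 1}\\
4.\;& T\langle A\rangle && \text{from 3 by } T\text{-I; contradicts 3}\\
5.\;& B, \text{ for arbitrary } B && \text{ex falso}
\end{aligned}
\]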
Thus, in practice the formation of logics is a rough-going
process of theorification responsible to the pre-formal practice,
informed by it and, allegedly at least, placed under its control to a
certain extent. The process goes beyond simple formalisation and is not
at all unproblematic.
RE is meant to guide us on the righteous path of smoothing
out these asperities and forming a justified logic, by defusing
whatever tensions may arise between 1- and
2-knowledge. Can it really do this? I think not and in the next
three sections, I will explore three cases of current logical debates,
consideration of which will explain why I am sceptical about the
promises of RE.
Case Study no. 1: Multiple
Conclusions
Orthodox logical theorising (Dummett 1991;
Steinberger 2011) teaches that an argument has one or more
premises and only one conclusion. In this it is faithful to the
practice, insofar as it appears that natural language arguments have but
one conclusion. At the same time, inferences of the form:
\[\begin{prooftree}
\AXC{$\neg\neg A$}\RightLabel{DNE}
\UIC{$A$}
\end{prooftree}\]
are generally accepted in the daily ratiocinative practice. That is,
one tends to accept inferences by double negation elimination
(DNE).
As it turns out, these pre-theoretical commitments stand in an uneasy
tension, albeit one that needs a rather sophisticated background theory
to surface fully. This background theory is a version of logical
inferentialism, better known as proof-theoretic semantics (Prawitz
1965, 1974; Schröder-Heister 2018; Francez 2015), whose roots can
be traced back to Gentzen (1935). Proof-theoretic
semantics theorists hold that the meaning of the logical operators is
determined by the primitive rules of inference that govern how sentences
in which they feature as principal operators are, respectively,
introduced and eliminated from proofs. These two kinds of rules for an
operator must match; to put it in jargon: they must be in
harmony (Dummett 1991). If harmony does
not obtain, then the operator is illegitimate and so is the inferential
behaviour it sanctions. Moreover, the test for the “match” between the
introduction and elimination rules is syntactic in nature. There must be
a syntactically assessable property the obtaining of which witnesses the
harmonious character of the pairing.
DNE is obviously an elimination rule for negation. The corresponding
introduction rule is the (intuitionistic) reductio ad absurdum:
\[\begin{prooftree}
\AXC{$[A]_{j}$}
\noLine
\UIC{$\vdots$}
\noLine
\UIC{$\bot$}\RightLabel{iRAA, $j$}
\UIC{$\neg A$}
\end{prooftree}\]
It turns out that these two rules cannot be harmonised if
arguments (and the formal proofs representing them) are
single-conclusion. A familiar, if bitterly contested, account of harmony
has it that a set of introductions and eliminations for a logical
constant is harmonious only if its addition to a proof system is
conservative (Dummett 1991).
That is, to the extent that the addition generates new valid arguments,
then these must involve the novel vocabulary. Famously, Peirce’s law
\[((A\to B)\to A)\to A\] despite
containing only one logical operator, the conditional, is not provable
in intuitionistic logic. A fortiori, it is not provable using
only the rules for the conditional. However, once one adds DNE to
intuitionistic logic—thus ensuring that negation behaves
classically—there is a proof of it. (I leave the construction of the
proof as an exercise for the reader.) It follows from this that
classical negation is not harmonious. The strongest correct rules for
negation are those of intuitionistic logic.
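For the record, here is one way of discharging the exercise, using the standard intuitionistic rules plus DNE (bracketed assumptions are discharged at the step bearing the matching index):
\[\begin{prooftree}
\AxiomC{[$\neg A$]$_2$}
\AxiomC{[$A$]$_1$}\RightLabel{$\neg$E}
\BinaryInfC{$\bot$}\RightLabel{EFQ}
\UnaryInfC{$B$}\RightLabel{$\to$I, 1}
\UnaryInfC{$A\to B$}
\AxiomC{[$(A\to B)\to A$]$_3$}\RightLabel{$\to$E}
\BinaryInfC{$A$}
\AxiomC{[$\neg A$]$_2$}\RightLabel{$\neg$E}
\BinaryInfC{$\bot$}\RightLabel{$\neg$I, 2}
\UnaryInfC{$\neg\neg A$}\RightLabel{DNE}
\UnaryInfC{$A$}\RightLabel{$\to$I, 3}
\UnaryInfC{$((A\to B)\to A)\to A$}
\end{prooftree}\]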
But this holds water only if arguments and the formal proofs
representing them are single-conclusion. Only in this case does
classical negation yield a nonconservative extension of intuitionistic
logic. If multiple conclusions are allowed, classical negation is
conservative and hence harmonious. In such systems there are proofs of
Peirce’s law in the implicational fragment alone:
\[\begin{prooftree}
\AxiomC{[$A$]$_1$}\RightLabel{Weakening}
\UnaryInfC{$A,B$}\RightLabel{$\to$I, 1}
\UnaryInfC{$A, A\to B$}
\AxiomC{[$(A\to B)\to A$]$_2$}\RightLabel{$\to$E}
\BinaryInfC{$A,A$}\RightLabel{C}
\UnaryInfC{$A$}\RightLabel{$\to$I, 2}
\UnaryInfC{$((A\to B)\to A)\to A$}
\end{prooftree}\]
Now let us find our way out of this, guided by RE. Assume
that our background theory, i.e. the commitment to inferentialism and
the account of harmony as conservativeness, is sacrosanct.
The first thing to notice is that the tension we ought to resolve is
not between the pre-formal practice and our theoretical commitments.
Rather, it is a tension within the practice—albeit one that comes to the
fore only against the background of a commitment to a proof-theoretic
account of the meaning of the logical vocabulary.
It seems that in order to even be able to “reflect equilibristically” on
the matter, one must antecedently form some reasonably justified
theoretical beliefs about validity, the structure of proofs, etc. In
other words, one needs (some theory in order) to generate a
tension between 1-knowledge and 2-knowledge.
On the flip side, this picture suggests that revisions that bring the
practice into accord with the theory—against the background of its more
abstract pronouncements—are somehow inescapable. Alas, it seems to me
that it also leads to the demise of RE as a
significant methodological constraint in logical theorising: If
we agree that any theory will mutilate in some way some aspects of the
practice to which we would otherwise wish to remain faithful, then it
follows that any and all resolutions of conflicts must, ultimately, do
violence to the practice or, which amounts to the same thing, to
1-knowledge. Note that the assumption made is not at all
surprising, given that theorification presupposes a great deal of
systematisation. In the particular scenario at hand and, consequently,
in all scenarios relevantly analogous to it, it is indeed unavoidable,
since the practice itself is less than coherent.
The moral of the story is that logical facts, as discernible
in the vernacular ratiocinative practice, are fragile.
They are bound to succumb to the pressures exerted by needs peculiar to
theorification or to its perceived benefits. Resolving conflicts is not
so much a matter of finding some equilibrium between the practice and
the theory, as it is a matter of finding a convenient excuse to
obliterate the inconvenient aspects of the practice.
This may appear to blatantly contradict another problem raised with
respect to RE by Woods (2019). Woods, following Wright (1986),
accuses the procedure of suffering irremediably from the problem of “too
many degrees of freedom.” That is, it leaves open too many areas for
revision, mainly with respect to what I have termed here the “background
theory.” In particular, even the beliefs that brought about the conflict
may be subject to revisions. I believe that the contradiction is merely
apparent. I’ve blocked that possibility and kept the background theory
unchangeable precisely in order to avoid the degrees of freedom problem
because I believe that Woods’ diagnosis is correct in the
absence of that assumption. Now we see that even with it RE
fares less than stellarly.
One may argue that this does not go against RE, which does
not require that the resolution of the conflicts be balanced, or “just,”
etc. All that RE requires is that we resolve the tensions
between the practice and the theory, even if, as I have claimed, this
will systematically ensue in the theory gaining the upper hand. But then
it seems that RE, as a methodological requirement, amounts to
little more than the injunction to pay some attention to the
domain one is theorising about. This, of course, is a piece of eminently
reasonable advice. It is also about as useful in guiding our
investigations of that domain as the prophecies of the oracle of Delphi
would be in planning one’s future.
This, then, is the first complaint that I have against the thesis
that RE is a meaningful guide to the formation of logics: that
“real” equilibrium matters little for it, and that the process of
achieving what we may call “internal” equilibrium is heavily rigged in
favour of theoretical considerations.
Case Study no. 2: Which Logic is
This?
I have already mentioned classical logic. Despite its many merits,
few logicians expect classical logic to perform well in the presence of
paradox-generating vocabulary like vague predicates or transparent
truth. But are they right in thinking this?
Contrary to these common beliefs, an impressive case has been put
forward by Cobreros et al.
(2012, 2013) for the claim that classical logic can handle
the aforementioned troublesome vocabulary without degenerating into a
trivial consequence relation (see also Ripley 2012,
2013). To be sure, this is classical logic in a particular and
rather special guise—special enough to give it a name of its own: “\(ST\),” pronounced “strict-tolerant.” Let us
see how classical logic and \(ST\)
handle the paradoxes and in what sense the latter is classical.
Our starting point is Gentzen’s sequent calculus for classical logic,
\(LK\) (1935). Recall that this contains the
Cut rule:
\[\begin{prooftree}
\AXC{$X:Y,A$}
\AXC{$A,X:Y$}
\BIC{$X:Y$}
\end{prooftree}\]
Now if one were to add e.g. the \(T\)-rules from above to \(LK\), then the system would become trivial:
any conclusion would follow from any premises. To see this, let \(\lambda\) be a sentence such that \(\lambda \equiv_{df} \neg T\langle \lambda
\rangle\). Thus \(\lambda\) is
the (strengthened) Liar: “This sentence is not true.”
Then we can derive the empty sequent:
\[\begin{prooftree}
\AxiomC{}\RightLabel{Id}
\UnaryInfC{$T\langle \lambda\rangle : T\langle
\lambda\rangle$}\RightLabel{$\neg$-L, $\neg$-R}
\UnaryInfC{$\neg T\langle \lambda\rangle : \neg T\langle
\lambda\rangle$}\RightLabel{df}
\UnaryInfC{$\lambda : \lambda$}\RightLabel{$T$-L}
\UnaryInfC{$T\langle \lambda \rangle : \lambda$}\RightLabel{$\neg$-R}
\UnaryInfC{$ : \neg T\langle \lambda \rangle, \lambda$}\RightLabel{df,
Contraction}
\UnaryInfC{$ : \lambda$}
\AxiomC{}\RightLabel{Id}
\UnaryInfC{$T\langle \lambda\rangle : T\langle
\lambda\rangle$}\RightLabel{$\neg$-L, $\neg$-R}
\UnaryInfC{$\neg T\langle \lambda\rangle : \neg T\langle
\lambda\rangle$}\RightLabel{df}
\UnaryInfC{$\lambda : \lambda$}\RightLabel{$T$-R}
\UnaryInfC{$\lambda : T\langle \lambda\rangle$}\RightLabel{$\neg$-L}
\UnaryInfC{$ \neg T\langle \lambda \rangle, \lambda : $}\RightLabel{df,
Contraction}
\UnaryInfC{$\lambda : $}\RightLabel{Cut}
\BinaryInfC{$ : $}
\end{prooftree}\]
from which in turn \(A:B\) follows
for any \(A,B\) via Weakening.
Gentzen
(1935) proved that Cut is eliminable from \(LK\) in the sense that any derivable \(LK\)-sequent is derivable without using
Cut; hence \(LK\) and its cut-less
variant, \(LK^{-}\), are equivalent in
that they derive the same sequents. Since in the above proof Cut is
essential for deriving the troublesome empty sequent, we have two proof
systems that, although equivalent in the absence of the truth predicate,
behave differently when extended with the rules governing it.
\(LK^{-}\) can be used to formalise
\(ST\),
which has the same valid sequents as classical logic but allows for
non-trivial and conservative extensions with the sort of vocabulary that
generates troubles classically. Semantically, its consequence relation
can be characterised by the strong Kleene valuations (Kleene 1952), given
below for conjunction, disjunction and negation, where \(A\) follows from
some premises (bundled in the set) \(X\) iff, whenever each of the
statements in \(X\) has the value \(1\), the conclusion \(A\) has a value
in \(\{1, \tfrac{1}{2}\}\):
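\[
\begin{array}{c|ccc}
\land & 1 & \tfrac{1}{2} & 0\\
\hline
1 & 1 & \tfrac{1}{2} & 0\\
\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} & 0\\
0 & 0 & 0 & 0
\end{array}
\qquad
\begin{array}{c|ccc}
\lor & 1 & \tfrac{1}{2} & 0\\
\hline
1 & 1 & 1 & 1\\
\tfrac{1}{2} & 1 & \tfrac{1}{2} & \tfrac{1}{2}\\
0 & 1 & \tfrac{1}{2} & 0
\end{array}
\qquad
\begin{array}{c|c}
 & \neg\\
\hline
1 & 0\\
\tfrac{1}{2} & \tfrac{1}{2}\\
0 & 1
\end{array}
\]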
This brings about a wealth of questions of paramount importance for
logical theorising:
Is \(ST\) truly the same logic
as classical logic or are they different logics? And, if the latter, in
what might their difference consist?
Is transitivity, as encapsulated by Cut, an essential property of
a logic or is it something that we can dispense with?
And, for that matter, just what (kind of) properties are Cut and
similar sequent-to-sequent structures?
One thing that seems plain in light of the above discussion is that,
if in deciding what logic we are dealing with we keep track only of
provable sequents (over the usual language of classical logic), then
there is no way to spot the difference between \(ST\) and classical logic. Is there any
(good) reason to so identify logics?
Indeed there is. Sequents are usually construed as
inferences or claims that the formula(e) on the right-hand side
of the symbol “:” follow from the formula(e) on the left-hand
side of that same symbol. Thus \(ST\)
and classical logic have the same logically valid inferences.
But is this enough when it comes to unequivocally determining the
identity of the logic expressed by a formal proof system?
The case of \(ST\) seems to suggest
otherwise. One place where the difference between classical logic and
\(ST\) comes to the fore is in the
sequent-to-sequent rules they validate. \(ST\) loses Cut and many other classically
valid sequent-to-sequent inferences or metainferences
as they have become known in the literature (Barrio, Rosenblatt and
Tajer 2015; Barrio, Pailos and Szmuc 2021). Indeed, it has been
proved (Barrio, Rosenblatt and
Tajer 2015; Dicher and Paoli 2019) that while the valid sequents
of \(ST\) determine classical logic,
its valid metainferences determine the logic of paradox, \(LP\) (cf. Priest 1979).
The \(ST\)-theorists are well aware
of, and unperturbed by, this fact. For them, these metainferences, or rather
the rules they generate, are mere “closure principles” which a
consequence relation may or may not obey (cf. Cobreros et al. 2013).
Alas, whether or not this is the correct way to look at Cut and other
metainferences is a disputed matter. It certainly isn’t the only one.
For instance, Dicher and Paoli (2021) have
argued that a logic is actually an equivalence class determined in a
suitable way by those metainferences that are valid in the following
sense: any valuation that satisfies the premise sequents also satisfies
the conclusion sequents. From this perspective,
\(ST\) is not classical logic, but
rather \(LP\).
So much for \(ST\) and its
properties; now let us return to RE. Suppose that at the end of
a careful process of formalising various natural language arguments we
end up with the class of classically valid sequents as a codification of
the class of valid inferences. Have we thereby also settled the matter
of whether we have formalised classical or strict-tolerant logic? I
believe that we have not and that we have formed our logic
while somehow failing to form an accurate idea of which logic it is. For
that, we need to answer a few more questions: What are we to make of the
loss of Cut and other metainferences in \(ST\)? Or of the fact that \(ST\), unlike classical logic, appears to be
somehow ambiguous between two different consequence relations, the
classical one and that of \(LP\)? These
are central, albeit very abstract, problems in logical theorising and
certainly salient issues in the formation of logics.
Is there any hope that RE can meaningfully guide us when we
set about settling them? At first blush, one may expect that it ought
to: after all, the debate is ultimately a debate over the role and
status of Cut. The scenario, boiling down to deciding whether a
particular (and rather special) metainference rule is valid, seems to fit
quite well in the Goodmanian framework. But this deceptively simple
question quickly spirals out of control, becoming an arcane matter about
obscure properties of logical systems and even about how these systems
codify consequence relations. It is not just a case of revising, say,
our concept of consequence so as to allow non-transitive relations to
count as such.
The sort of questions raised by \(ST\) and its designation as “classical”
cannot be answered by following the imperative of reaching an
equilibrium between (intuitively acceptable) inferences one is not
willing to give up and one’s views about which rules of inference ought
to be accepted. Even the framing of the problem exceeds the resources
available within the RE model.
As with problematisation, so with problem-solving.
Reaching a state of RE underdetermines the issues at hand. To see this,
assume for the sake of the argument that the problem can be meaningfully
framed as a typical Goodmanian problem (and also bracket the many
details at play in the debate around \(ST\)).
What is apparent is that something has to go, either the principle of
inference codified by Cut or the vocabulary that makes it possible to
express Liars, together with its associated inferential resources. Whatever “firm” anchor point the
pre-formal practice might provide us with, such as, for instance, the almost
universal acceptance of transitivity as a property of consequence
relations, rather quickly loses its appeal. This inference principle
generates inferences we are unwilling to accept, if we let it
interact with other, equally intuitive, principles such as the \(T\)-rules. Plainly, RE cannot tell
us which way to proceed and what to sacrifice—at least because all the
inference principles at play have a good pre-theoretical hold on us.
This is not incompatible with it being possible to defend one or
another solution. But those solutions and their defences must, of
necessity, rely on something more than doing justice to the pre-formal
intuitions. Moreover, their virtue simply cannot be that they have
balanced our pre-theoretical commitments with our pre-theoretical
practice, for this virtue could be boasted by many rival solutions.
Case Study no. 3: Paraconsistent
Christology and \(FDE\)
Very recently, JC Beall (2019) took to investigating the
so-called fundamental problem of christology (cf. Pawl 2016) in light
of his favourite logic, \(FDE\) or
first-degree entailment. Briefly, the problem is that Patristic
theology consecrates the dual nature, divine and human, of Christ. Being
divine, Christ is immutable; being human, he is mutable. As a god,
Christ is omnipotent; as a human, his powers are limited, etc. Christ,
in other words, is possessed of inconsistent attributes. Of him, it is
true both that “Christ is \(P\)” and
that “Christ is not \(P\),” for a good
number of essential predicates \(P\).
Because contradictions are bad in that they do not further the objective
of achieving rational knowledge of the object that “embodies” them, this
is a problem for christology.
Beall argues that the best solution to this problem is also the
simplest: bite the bullet and accept that Christ is a contradictory
object. That, however, is not really a bad thing. In particular, he
argues, it does not entail that rational theological inquiry about
Christ is impossible. Contradictions may be true of Christ, but they are
not as bad as traditional (Aristotelian, classical, etc.)
logicians took them to be. They can be handled by appropriate logics.
Thus Beall argues that the proper logic for analytic Christology is the
paraconsistent \(FDE\) (Anderson and Belnap
1975; Belnap 1977).
In its most common guise, \(FDE\) is
a four-valued, truth-functional, and structural logic that recognises,
as Beall puts it, a space of logical possibilities that allows a
statement to be true (= 1), false (= 0), both true
and false (= \(b\), a “glut”), and
neither true nor false (= \(n\), a “gap”). The following matrices show
how these mappings can be extended to valuations:
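\[
\begin{array}{c|cccc}
\land & 1 & b & n & 0\\
\hline
1 & 1 & b & n & 0\\
b & b & b & 0 & 0\\
n & n & 0 & n & 0\\
0 & 0 & 0 & 0 & 0
\end{array}
\qquad
\begin{array}{c|cccc}
\lor & 1 & b & n & 0\\
\hline
1 & 1 & 1 & 1 & 1\\
b & 1 & b & 1 & b\\
n & 1 & 1 & n & n\\
0 & 1 & b & n & 0
\end{array}
\qquad
\begin{array}{c|c}
 & \neg\\
\hline
1 & 0\\
b & b\\
n & n\\
0 & 1
\end{array}
\]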
Both 1 and \(b\) are designated
values and a conclusion \(A\) follows
from some premises \(X\) if and only
if, whenever the premises are at least true, the conclusion too is at
least true.
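In symbols, with \(v\) ranging over the four-valued valuations just described:
\[
X \vDash_{FDE} A \quad\text{iff}\quad \text{for every } v: \text{ if } v(B)\in\{1,b\} \text{ for every } B\in X, \text{ then } v(A)\in\{1,b\}.
\]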
Theological and para-theological considerations aside, I agree with
Beall, at least in the following sense: One’s best hope of achieving a
state of RE between the orthodox patristic determinations of
Christ and one’s logical beliefs is to endorse a paraconsistent logic.
Ceteris paribus, \(FDE\) will
do just marvellously.
But now suppose that one wished to reject \(FDE\) on account of its being too weak: it does
not recognise as valid a great many inferences that we have a
“natural” propensity to accept. By the lights of
RE-theorists, this should count against it. But could such
criticism be levelled against \(FDE\)
on the basis of RE considerations? Alas, it is difficult to see
how this could be done. The \(FDE\)
theorist has a very quick way out of this difficulty. All she need
point out is that the incriminated inference is not logically
valid (after all, it is not \(FDE\)-valid), although it may be valid
within some restricted domain of inquiry, maybe because the predicates
of that domain have some special properties. By \(FDE\) lights, those inferences need not be
rejected simpliciter though they are rejectable as a matter of
logic. While indeed \(FDE\) is very
weak, it can peacefully co-exist with various strictly speaking
non-logical strengthenings of it.
So far, this has nothing to do with Christology, paraconsistent or
otherwise. But suppose that a \(FDE\)
theorist’s main reasons to uphold this logic have to do with it
cohering with her theological beliefs, in particular with her belief
that Christ is an inconsistent object.
One trying to dislodge \(FDE\) as an
(all-purpose) logic would be in quite a pickle. It seems clear that one
could not move the \(FDE\) theorist to
change her view. Indeed, why would she do so? Not only would this
require that she give up a state of RE, but it would require
her to do so despite having a very handy way of retaining it,
i.e. denying the logicality of the \(FDE\)-invalid inferences while admitting
that they are valid within some restricted domain (or perhaps analytically, etc.). At the
limit, such a logician may even claim that \(FDE\) is too weak for every other
domain but Christology. This is by no means an irrational claim, despite
the seeming exoticism of the preoccupation with the divine nature in
this age. And it would certainly help her
continue being in the state towards which our theorising must strive,
that of RE.
There is nothing wrong with this, either in the present case or in any
other particular case. The problem is that this is a pervasive
trend: Setting a state of RE as the ultimate justification for
our logical beliefs will tend to render weak logics immune to criticism.
Quite simply, it seems very unlikely that an \(FDE\)-opponent of the kind described will
ever be in as good a state of (reflective) equilibrium as an \(FDE\)-champion. The \(FDE\) theorist can be in equilibrium with
respect to their mathematical, logical, theological and in particular
Chalcedonian, and whatnot beliefs. And, presumably, a trivialist who
believes that there are no logically valid arguments can do
even better.
This is a pathological condition to the extent that it means that
weaker logics will systematically have a better chance of being
justified by RE, simply because RE is easier to obtain
for such a logic. Worse, given the role and purpose of RE,
there is little incentive to aim for stronger logics.
One may reply that this is not so: A weaker logic means
sacrificing—as far as logic is concerned—some inferences which we are
generally willing to accept. But both the practice and other logical
considerations may press exactly for their acceptance qua
logically valid. That is true. But to the extent that these
considerations are forced upon us by the practice, then, as we have
already seen, they are easily brushed aside. The tendency to accept a
given inference says nothing as to whether the inference is logically
valid, restrictedly logically valid, analytically valid and so on. It is
something that needs to be integrated and explained within a bigger
theoretical picture. (So we arrive again at our old conclusion that
(seemingly) logical facts are fragile.) If, on the other hand, the
aforementioned considerations are of a theoretical nature, then the
justification process itself does not appear to be one whose stake is
the successful or coherent integration of pre-theoretical beliefs with
theoretical ones. Rather, it appears to be a game of making the best
case for one’s theoretical conviction. There can be no doubt that doing
justice to the “facts” will be part of this process; it is just
implausible that it will be the dominant part.
Epilogue
These, then, are the main problems with RE as a guide to
logical theorising: First, theoretical considerations appear to always
be able to undercut whatever tendencies may exist in the pre-formal
practice. This means that, understood as a methodology, RE is
too weak because one of the “reflecting” surfaces itself is too weak.
Second, I have argued that this methodology underdetermines both the
identification of the specific problems one may encounter in “the
formation of logics,” i.e. problematisation, and the problem-solving
process itself. Finally, RE systematically favours weaker
logics. The weaker a logic is, the easier it will be to bring its
prescriptions into harmony with other beliefs we may hold.
Part of the drama of reflective equilibrium is that it appears to fit
parts of the (empirical) process of theorification, in particular,
formalisation. There is little reason to doubt that the process of
theorification starts by working on some raw materials—real inferences,
made by real people in the real world. It also seems to me that it is
correct to say that the processing of these data is both kept in check
by the data and informs them in its turn. This much is inescapable
insofar as we take logic to be an applied theory, i.e. our theory of
correct reasoning (Priest 2006, ch.8).
That, however, does not make RE a plausible methodological
constraint on, and even less so an appropriate account of the
justification of, theorification—not when the chips are down. So, while
the Goodmanian image with which we have started is tempting enough,
turning it into a successful recipe for logical theorising turns out to
be a hopeless job.
At the fringe, reflective equilibrium becomes what the Senate and the
consulate were in imperial Rome. One pays lip service to them. One uses
them for ritual purposes. Every now and then one looks to them for
(very) rough guidance to avoid too extravagant errors. And that’s about
it. The real power lies with the praetorians: the highly disciplined,
highly skilled, and utterly unscrupulous theoretical considerations.
Postscript
Despite having reached the end of the story, the paper must go on,
because an anonymous referee asked the most important question, one
that I did not wish to answer here: “What are the viable
alternatives?”
I stand by my decision not to answer this question here, because I
cannot do it justice within the space of this paper. Still, a few words,
gesturing towards my favoured answer, may be useful.
Let this be my starting point: I have framed reflective equilibrium
as a method embodying a fallibilist epistemology of logic. My criticism
of RE did not concern the suggestion that logical inquiry is
fallible, that we can be wrong in our identification of the “laws of
logic,” etc. Nor did I challenge the claim that (parts of) the processes
of logical theorisation and theorification can be described as
proceeding according to a successive series of revisions of the “theory”
in light of the “data” and conversely. What I have challenged is the
claim that this can be turned into a substantive methodological
requirement that would ensue in a justified logical theory. To that extent, I do not wish to
endorse fully an apriorist epistemology of logic.
These are the standard (or at least traditional) options in the
epistemology of logic. I incline towards a different viewpoint. Thus the
answer to the question “What is the best methodology for logical
inquiry?” requires a preliminary answer to a deeper question, about how
we should think about logic. As for the answer to this last question,
Allo (2017, 546)
puts it best:
[I]t makes sense to think of logic as a kind of cognitive technology:
a tool or set of tools used to reason more efficiently. The proposal to
see logic as conceptual technology extends the scope of this picture,
and emphasises that all the core notions that logical systems give a
formal account of (like validity, consistency, possibility, and perhaps
even meaning) should be understood as artefacts that shape deductive
reasoning practices rather than as neutral descriptions or codifications
of pre-existing inferential practices.
So the referee’s question “What are the viable alternatives?” has a
simple but hardly informative answer: Whatever methodology best serves
the imperative of developing the best cognitive technology that logic
can be. What that actually means is a matter for further thinking.