Two recent arguments draw startling and puzzling conclusions about relations and 2nd-order logic (2OL). The first argument concludes that 2nd-order quantifiers can’t be interpreted as ranging over relations. This conclusion is puzzling because it calls into question the traditional understanding of 2OL as a formalism for quantifying over relations. The second argument, which concludes that unwelcome consequences arise if relations and relatedness are analyzed rather than taken as primitive, utilizes premises that imply that 2OL faces the very same consequences. This is puzzling because relations and predication are taken as primitive in 2OL, and so the latter should be immune to the problems raised for the analysis of relations. I consider these two arguments in light of a precise theory of relations. In particular, I show that object theory (Zalta 1983, 1988), which is an extension of 2OL, provides systematic existence and identity conditions for relations, properties, and states of affairs that forestall the two arguments.
1 Setting Up the Problems
I take relations to be a fundamental kind of entity, and in this paper I investigate some of the principles needed to characterize them. Recently, philosophers have raised puzzling questions about converse and non-symmetric relations and about the states of affairs in which they play a role (Williamson 1985; Dorr 2004). In addressing these and other questions, some philosophers and philosophical logicians have attempted to analyze relations and the manner in which they relate. Such analyses, which sometimes appeal to other fundamental notions, raise questions of their own, such as whether or not there are positions (argument places, slots, or thematic roles) in a relation (Fine 2000; Gilmore 2013; Dixon 2018; and Orilia 2014, 2019); what it is for the relata to bear or stand in a relation; and whether there is an order of application or a manner of completion that connects relations and their relata.
In this paper, however, I take the notions of relation and relation application (i.e., predication) to be so fundamental that they can’t be further analyzed and so must instead be axiomatized. This starting point is analogous to that of the mathematics of set theory—the notions of set and set membership are considered so fundamental that the best we can do is axiomatize them. As with set theory, an axiomatic theory of relations has to state, at the very least, conditions under which the entities being axiomatized exist and conditions under which they are identical. In what follows, I’ll reprise just such a theory. It was first proposed in 1983 and was couched in a relatively simple extension of second-order logic (“2OL”). The resulting system gives us the framework we need to address the most important questions that have been raised about relations, including some of the questions that arise when relations are analyzed.
My defense of relations is focused on two recent arguments that draw rather puzzling conclusions for relations considered as primitive, axiomatized entities. The first argument appears in a recent paper by MacBride (2022, 1), where he concludes, by way of a dilemma, that “we cannot interpret second-order quantifiers as ranging over relations.” MacBride is not claiming that relations don’t exist or that some other (e.g., ontologically more neutral) interpretation of 2nd-order quantifiers is to be preferred, but rather that 2nd-order quantifiers can’t be interpreted unproblematically as ranging over relations.1 This conclusion is startling because it calls into question the traditional understanding of 2OL as a formalism for quantifying over relations. Philosophers and logicians since Russell have supposed that relational statements of natural language of the form “\(a\) loves \(b\),” “\(a\) gives \(b\) to \(c\),” etc., can be uniformly rendered in the predicate calculus as statements of the form \(Ra_{1}\ldots a_{n}\), where \(Ra_{1}\ldots a_{n}\) expresses the claim that \(a_{1},\ldots,a_{n}\) exemplify (or stand in or instantiate) \(R\). For example, in his description of 2OL, Väänänen (2019, sec. 2) notes that “[t]he intuitive meaning of \(X(t_{1},\ldots,t_{n})\) is that the elements \(t_{1},\ldots,t_{n}\) are in the relation \(X\) or are predicated by \(X\).” So it is puzzling to be informed that when we existentially generalize on the statement “\(Ra_{1}\ldots a_{n}\)” to derive the claim “\(\exists F(Fa_{1}\ldots a_{n})\),” we can’t regard this latter claim as quantifying over relations.
The second argument and puzzling conclusion appear in MacBride (2014). On the one hand, MacBride argues that relations, predication (relation application), and relatedness should be taken as primitive (2014, 1, 2, 15), on the grounds that any analysis leads to unwelcome consequences. On the other hand, the unwelcome consequences he describes for the analysis of relations are already present in 2OL with identity (2OL\(^{=}\)), where relations and predication are primitive. He endorses the primitive nature of relatedness when he writes:
I will argue that the capacity of a non-symmetric relation \(R\) to apply to the objects \(a\) and \(b\) it relates so that \(aRb\) rather than \(bRa\) must be taken as ultimate and irreducible. […] It’s a familiar thought that we cannot account for the fact that one thing bears a relation \(R\) to another by appealing to a further relation relating \(R\) to them—that way Bradley’s regress beckons. To avoid the regress we must recognize that a relation is not related to the things it relates, however language may mislead us to think otherwise. We simply have to accept as primitive, in the sense that it cannot be further explained, the fact that one thing bears a relation to another [citations omitted]. But it is not only the fact that one thing bears a (non-symmetric) relation \(R\) to another that needs to be recognized as ultimate and irreducible. How \(R\) applies—whether the \(aRb\) way or the \(bRa\) way—needs to be taken as primitive too. (MacBride 2014, 2, italics in original)
While this seems correct, the argument that MacBride gives for this conclusion ensnares 2OL\(^{=}\), where relatedness is primitive. His argument revolves around the following claim (Russell 1903, sec. 218–219):2
(1) Every (binary) non-symmetric relation \(R\) has a converse \(R^{*}\) that is distinct from \(R\).
MacBride argues that any analysis of relations and relation application that endorses (1) gives rise to “unwelcome consequences,” namely (a) a multiplicity of converse relations3 and (b) “the profusion of states that arise from the application of these relations” (2014, 4). Consequence (a) is puzzling because 2OL\(^{=}\), in which relations, predication, and relatedness are primitive, has a formal representation of (1) as a theorem. So it seems we face a multiplicity of relations no matter whether we endorse (1) by way of an analysis or by way of 2OL\(^{=}\). As part of our investigation, we’ll also examine consequence (b) and MacBride’s conclusion that there is no good analysis of the identity and distinctness of states of affairs. He says:
What vexes the understanding is […] an analysis of the fundamental fact that \(aRb \neq bRa\) for non-symmetric \(R\). […] Anyone who wishes to give an analysis of the fact that \(aRb \neq bRa\) faces a dilemma. […] Since neither […] [of the] analyses are satisfactory, this recommends our taking the fact that \(aRb \neq bRa\) to be primitive. (MacBride 2014, 8, italics in original)
[The full quote is provided later in the paper.] When we examine this (second) dilemma, we’ll see that there is an analysis that is immune to the dilemma and that MacBride doesn’t consider. One can unproblematically analyze the identity of states of affairs within a theory on which the fact that a state of affairs obtains is primitive.
My plan is as follows. In section 2, I lay out the first puzzling argument and conclusion, i.e., the dilemma used to establish that the 2nd-order quantifiers don’t range over relations. The argument begins by suggesting that if they do, then pairs of converse predicates either refer to the same relation or they don’t. Each disjunct leads to a horn of the dilemma. I then spend the remainder of section 2 showing that the first disjunct fails, so that we need not worry about the first horn. In section 3, I examine the argument that leads from the second disjunct to the second horn and narrow our focus to an issue on which the conclusion rests, namely, a question about the identity of certain states of affairs. In section 4, I examine the second puzzling argument and conclusion from MacBride’s (2014) paper and connect the argument there with the issue on which we focused in section 3. Then in section 5, I review a theory of relations and states of affairs that MacBride doesn’t consider but which has consequences for the issues we’ve developed. In section 6 and section 7, I use the theory in section 5 to develop two alternative analyses of the issue (about the identity of states of affairs) on which both of MacBride’s puzzling conclusions rest. I show that these answers undermine the main lines of argument that MacBride uses to establish his conclusions.
From this overview, it should be clear that in sections 2–4, we’ll extend 2OL in known ways that systematize the language that MacBride uses in his arguments. However, starting in section 5, I’ll appeal to the theory of abstract objects developed in Zalta (1983, 1988, 1993), which I henceforth refer to as “object theory” (“OT”).4 OT extends 2nd-order logic in a way that allows us to state unproblematic identity conditions for relations and states of affairs. So my goal throughout will be to show that 2OL has been deployed and extended to formulate a theory of relations, predication, and states of affairs that forestalls the puzzling conclusions.
Before we begin, however, it is important to review some terminology and notation. “2OL” refers only to the formal, axiomatic system of second-order logic under an objectual interpretation (i.e., where the quantifiers range over domains of entities). My arguments don’t require that we interpret 2OL in terms of full models (where the domain of properties has to be as large as the full power set of the domain of individuals); instead, general models (where the domain of properties is only as large as some proper subset of the power set of the domain of individuals) suffice. The only requirement is that the models validate the axioms of 2OL. In what follows, I’ll represent a binary atomic predication as “\(Rab\)” instead of “\(aRb\),” except when we’re discussing identity, in which case I’ll use “\(a=b\)” (i.e., infix notation). As noted earlier, the atomic formulas of 2OL have the form “\(F^{n}x_{1}\ldots x_{n}\)” and can be read as “\(x_{1}\), \(\ldots\), and \(x_{n}\) exemplify (or instantiate) \(F^{n}\),” and we’ll often drop the superscript on \(F\) indicating arity since this can be inferred.
No explicit notion of order is required here; we only require that “\(Rab\)” and “\(Rba\)” say different things; to say \(a\) and \(b\) exemplify \(R\) is not to say \(b\) and \(a\) exemplify \(R\); to say \(x,y\), and \(z\) exemplify \(F\) is not to say \(x,z\), and \(y\) exemplify \(F\); and so on (more about this later). In these examples, the predicate can be replaced by any nominalized relation term of the right arity. Finally, I’ll use \(F,G,H,\ldots\) as 2nd-order variables and reserve Greek letters for metavariables. So when MacBride talks about the 2nd-order quantified sentence “\(\exists \Phi(a\Phi b)\),” I’ll represent this sentence as “\(\exists F(Fab)\).”
In the next few sections, we shall extend 2OL in various ways, in part to systematize the language that MacBride uses in his arguments. We’ll start with 2OL\(^{=}\), in which identity claims of the form “\(F^{n} = G^{n}\)” (for any \(n\)) are primitive.5 We’ll also treat states of affairs as \(0\)-ary relations, and instead of using \(F^{0},G^{0},\ldots\) as \(0\)-ary relation variables, we’ll use \(p,q,\ldots\,\). So identity claims such as “\(p=q\),” asserting the identity of states of affairs, are well-formed. Moreover, we’ll also make use of \(n\)-ary \(\lambda\)-expressions (\(n\geq 0\)), interpreted relationally; these are complex terms that denote relations and states of affairs.6 And we’ll let formulas be complex terms that denote states of affairs, so that when MacBride uses expressions like “\(aRb = bRa\)” and “\(aRb \neq bRa\)” (2014, 8), we can represent this talk precisely as identity and non-identity claims about the states of affairs denoted by the formulas flanking the identity symbol.7 When we extend 2OL to OT in section 5, we’ll add a new, primitive mode of predication and a primitive modal operator. Using OT, we’ll define the primitive claims of the form “\(F^{n} = G^{n}\)” (for \(n\geq 1\)) and “\(p = q\)”; thus, we’ll provide identity conditions for relations and states of affairs. I’ll then be in a position to argue that OT thereby offers an analysis of “\(aRb = bRa\)” or “\(aRb \neq bRa\)” without facing any dilemmas.
It is also important to spend some time explaining how we plan to use the technical term predicate. First, we shall almost always be discussing the predicates of 2OL that serve to represent the predicates of natural language sentences. But the predicates of 2OL are not the same kind of expression as the predicates of natural language. When speaking of natural language sentences, it is traditional to distinguish the “subject” of a sentence from the “predicate.” For example, in the sentence “John is happy,” “John” is the subject and “is happy” is the predicate; and in the sentence “John loves Mary,” “John” is the subject and “loves Mary” is the predicate. In the case of the latter sentence, one could also say that “loves” is the predicate, while “John” and “Mary” are the subjects (though “Mary” is often called the direct object). Thus, natural language predicates are not usually thought of as names or as nominalized expressions, for there is a sense in which these predicates are incomplete expressions.
But in what follows, we will be representing natural language predicates in terms of formal expressions that denote relations, and we’ll be calling those formal expressions “predicates.” Before I give the definition, however, let me mention that we shall not adopt the definition of predicate that MacBride introduces in the following passage (citing Dummett 1981, 38–39), in which he gives examples in terms of the expressions in a formal language:
[W]hat is a second-order predicate? A first-order predicate (say of the form “\(F\xi\)”) results from the extraction of one or more names (“\(a\)”) from a closed sentence (“\(Fa\)”) in which it occurs and inserting a variable in the resulting gap. A second-order predicate (say, of the form “\(\exists x\Phi x\)”) results from the extraction of a first-order predicate (“\(F\xi\)”) from a closed sentence (“\(\exists xFx\)”) and inserting a variable into the resulting gap. (MacBride 2022, 2–3)
In a footnote to this passage, MacBride makes it clear that open formulas, such as “\(Lax\),” “\(\neg Rxa\),” and “\(Px \to Qy\)” (in which \(x\) and \(y\) are the only variables), qualify as predicates. But in what follows, I shall distinguish between open formulas and predicates.
I shall use the term “predicate” to refer to a relation term \(\Pi\) (i.e., a relation constant, a relation variable, or a \(\lambda\)-expression) that can occur in an atomic predication. In classical logic, in which atomic predications take the form \(\Pi\kappa_{1}\ldots\kappa_{n}\), the expression \(\Pi\) is a predicate. So where “\(L\)” might be used to represent the loves relation, I’ll distinguish between the predicate “\(L\)” and the open formula “\(Lax\).” The open formula is not a predicate and doesn’t name a property (i.e., unary relation); we can’t directly infer “\(\exists F(Fx)\)” or “\(\exists F(Fa)\)” from “\(Lax\).” The open formula “\(Lax\)” does have truth conditions and, given an assignment to the variable \(x\), denotes a state of affairs. By contrast, when we add \(\lambda\)-expressions a bit later, we regard the complex unary relation term “\([\lambda x\:Lax]\)” as a predicate. We can combine it with “\(b\)” to form the atomic predication “\([\lambda x\:Lax]b\)” (“\(b\) exemplifies being an \(x\) such that \(a\) and \(x\) exemplify the loves relation,” or more simply, “\(b\) exemplifies being loved by \(a\)”).8 And “\([\lambda xy\:\neg Lxy]\)” is a predicate because we can form the atomic statement “\([\lambda xy\:\neg Lxy]ab\).”
Thus, the predicates of 2OL and 2OL\(^{=}\) denote properties and relations. Variables such as \(F\), \(G\), etc. are also predicates since the expressions “\(Fa\),” “\(Gxy\),” etc. are well-formed atomic formulas; the variables \(F\), \(G\), etc. denote properties and relations relative to an assignment to the variables. To consider a more complex example, let “\(E\)” denote being even and “\(P\)” denote being prime. Then, when we replace the constant “\(2\)” with “\(x\)” in the complex closed sentence “\(E2\mathbin{\&}P2\)” (“\(2\) exemplifies being even and \(2\) exemplifies being prime”), we obtain “\(Ex\mathbin{\&}Px\).” This latter expression isn’t a predicate—it can’t be predicated of anything since it is a conjunction of two statements. Relative to any variable assignment, “\(Ex\mathbin{\&}Px\)” has truth conditions and denotes a (complex) state of affairs. Semantically, one can define a sense in which an individual in the domain can satisfy this open formula (namely, Tarski’s sense), but this is not to say that the open formula can be predicated of that individual or predicated of the individual term “\(a\).” By contrast, the complex unary relation term “\([\lambda x\:Ex\mathbin{\&}Px]\)” can be combined with an individual constant to form a predication; that is, we can form the predication “\([\lambda x\:Ex\mathbin{\&}Px]2\),” which predicates the property denoted by the \(\lambda\)-expression of an individual. And in 2OL and 2OL\(^{=}\), we can infer “\(\exists F(F2)\)” from “\([\lambda x\:Ex\mathbin{\&}Px]2\).” So whereas we call “\([\lambda x\:Ex\mathbin{\&}Px]\)” a predicate, we won’t call “\(Ex\mathbin{\&}Px\)” a predicate.
Similarly, we shall not say that the open formulas “\(Fab\)” and “\(Fa\mathbin{\&}Qb\)” (where “\(F\)” is a free variable and the other letters are constants) are 2nd-order predicates. These are open formulas that denote states of affairs relative to an assignment to the free variable \(F\). As such, these expressions are \(0\)-ary relation terms, i.e., terms that denote states of affairs (relative to any variable assignment). By contrast, the higher-order \(\lambda\)-expressions “\([\lambda F\:Fab]\)” and “\([\lambda F\:Fa\mathbin{\&}Qb]\)” are predicates of 3rd-order logic (3OL); these are expressions constructed from the open formulas “\(Fab\)” and “\(Fa\mathbin{\&}Qb\).” The expressions “\([\lambda F\:Fab]\)” and “\([\lambda F\:Fa\mathbin{\&}Qb]\)” are part of the language of 3OL because they denote properties of relations. These predicates can be used to form predications in 3OL such as “\([\lambda F\:Fab]R\),” i.e., \(R\) exemplifies the property of being a relation \(F\) such that \(a\) and \(b\) exemplify \(F\). We’ll make use of these higher-order predicates later, at the point in the discussion when they become relevant.9
2 The First Horn
We can now outline and investigate MacBride’s argument about the interpretation of the 2nd-order quantifiers. It proceeds under the reasonable assumption that 2nd-order quantification is a straightforward generalization of 1st-order quantification (MacBride 2022, 2). So let’s suppose that the 1st- and 2nd-order quantifiers range over (mutually exclusive) domains and that the axioms and inference rules of the 2nd-order quantifiers mirror those of the 1st-order quantifiers. MacBride’s argument, to the conclusion that we cannot interpret 2nd-order quantifiers as ranging over relations, goes by way of a dilemma. Let’s call this the Dilemma for Converses. He presents the dilemma as follows (MacBride 2022, 1–2):
Dilemma for Converses
Either pairs of mutually converse predicates, such as “\(\xi\) is on top of \(\zeta\)” and “\(\xi\) is underneath \(\zeta\),” refer to the same underlying relation or they refer to distinct converse relations. If they refer to the same relation, then we lack the supply of the higher-order predicates required to interpret second-order quantifiers as ranging over a domain of relations. […] If, by contrast, mutually converse predicates refer to distinct converse relations, then whilst we can at least make abstract sense of the higher-order predicates required to interpret quantifiers as ranging over a domain of relations, the implausible consequences for the content of lower-order constructions render this interpretation of higher-order quantifiers a deeply implausible semantic hypothesis.
We need not state the full argument for each horn of the dilemma now because it can be shown that, given the reasonable assumption that non-symmetric relations exist, the condition leading to the first horn of the Dilemma for Converses doesn’t hold in 2OL\(^{=}\). We spend the remainder of section 2 showing this, i.e., that mutually converse predicates do not refer to the same relation.
Since MacBride’s argument in the Dilemma for Converses involves claims about converse relations, let us define:
- \(G\) is a converse of \(F\) if and only if, for any objects \(x\) and \(y\), \(x\) and \(y\) exemplify \(G\) iff \(y\) and \(x\) exemplify \(F\), i.e.,
(2) \(\textit{ConverseOf\thinspace}(G,F) \equiv_{\mathit{df}} \forall x\forall y(Gxy \equiv Fyx)\)
In addition, the argument in the Dilemma for Converses concerns the identity and distinctness of converses and so involves statements of the form “\(R=S\)” and “\(R\neq S\).” Thus, to see that the condition leading to the first horn of the Dilemma is false, i.e., to see that it is not the case that mutually converse predicates refer to the same underlying relation, we only need to show that there are converses \(F\) and \(G\) that aren’t identical:
(3) \(\exists F\exists G(\textit{ConverseOf\thinspace}(G,F)\mathbin{\&}G \neq F)\)
Any predicates that witness this claim will show that not all predicates for converses denote the same underlying relation.
Though (3) is not a theorem of 2OL\(^{=}\), it is implied by a theorem of 2OL\(^{=}\) under the assumption that there are non-symmetric relations. To see how, let us first define:
- \(F\) is non-symmetric if and only if it is not the case that for any objects \(x\) and \(y\), if \(x\) and \(y\) exemplify \(F\), then \(y\) and \(x\) exemplify \(F\), i.e.,10
(4) \(\textit{Non-symmetric}(F)\equiv_{\mathit{df}}\neg\forall x\forall y(Fxy\to Fyx)\)
Given this definition, the assumption and theorem needed to establish (3) may be represented as follows:
(5) \(\exists F(\textit{Non-symmetric}(F))\)
(6) \(\forall F(\textit{Non-symmetric}(F) \to \exists G(\textit{ConverseOf\thinspace}(G,F)\mathbin{\&}G \neq F))\)
As mentioned above, (5) is a reasonable assumption that MacBride adopts in his paper. So if we can show that (6), i.e., the formal representation of (1), is a theorem of 2OL\(^{=}\), it then will be a simple matter to show that (3) follows from (5) and (6).
2.1 The Reasoning
Two facts about 2OL\(^{=}\) have to be mentioned before we begin. First, 2OL\(^{=}\) includes the two standard axioms that logic texts use to systematize identity claims, namely, the reflexivity of identity and the substitutivity of identicals.11
Second, where \(n\geq 0\), 2OL\(^{=}\) includes the following comprehension axiom schema of 2OL:
Comprehension Principle for Relations (CP) \(\exists F^{n}\forall x_{1}\ldots\forall x_{n}(F^{n}x_{1}\ldots x_{n}\equiv\varphi)\), provided \(F^{n}\) doesn’t occur free in \(\varphi\).
We may read this as: there exists an \(n\)-ary relation \(F\) such that any objects \(x_{1},\ldots,x_{n}\) exemplify \(F\) if and only if \(\varphi\). In the case where \(n=0\) and “\(p\)” is used as a \(0\)-ary variable instead of “\(F^{0}\),” (CP) asserts \(\exists p(p \equiv \varphi)\), i.e., there exists a state of affairs \(p\) such that \(p\) obtains if and only if \(\varphi\). Note that we read “\(p\)” as it occurs in “\(p\equiv\varphi\)” as “\(p\) obtains,” since (a) “\(p\)” occurs as a formula and (b) obtains for states of affairs is the \(0\)-ary case of exemplification. The \(0\)-ary case of (CP) will be of service later, but for now we focus on the cases of (CP) where \(n\geq 1\).
Before we show how 2OL\(^{=}\) yields (6) as a theorem, a few words about the role (CP) plays in 2OL\(^{=}\) are in order. First, it is often thought that 2OL and 2OL\(^{=}\) require a large ontology of relations simply in virtue of including (CP) as an axiom. After all, in the unary case, (CP) has instances such as the following:
- \(\exists F\forall x(Fx \equiv \neg Gx)\)
  (Any given property) \(G\) has a negation.
- \(\exists F\forall x(Fx \equiv Gx\mathbin{\&}Hx)\)
  (Any given properties) \(G\) and \(H\) have a conjunction.
- \(\exists F\forall x(Fx \equiv \exists yKyx)\)
  There is a property that objects exemplify whenever a binary relation \(K\) is projected into its first argument place.
And in the binary case, (CP) has instances like the following:
- \(\exists F\forall x\forall y(Fxy \equiv Kyx)\)
  (Any given relation) \(K\) has a converse.
Since these claims hold for any relations \(G\), \(H\), and \(K\), it might seem that (CP) commits one to a large ontology.
But in fact, the smallest models of 2OL and 2OL\(^{=}\) require only that the domain of \(n\)-ary relations contains just two relations, for each \(n\). In what follows, we’ll focus on 2OL\(^{=}\), though the same reasoning applies to 2OL. So how can it be that 2OL\(^{=}\) requires only that the domain of \(n\)-ary relations contains just two relations, for each \(n\)? The answer is: the smallest models of 2OL\(^{=}\) make (CP) true by identifying properties and relations with the same extension. More specifically, in the smallest models of 2OL\(^{=}\), (i) the domain of individuals contains just a single element, say \(b\); (ii) the domain of unary relations contains just two properties—one exemplified by \(b\) and one exemplified by nothing; (iii) the domain of binary relations contains just two relations—one that relates \(b\) to itself and one that is empty; and so on. For example, if we let \(P_{1}\) be the property that is exemplified by \(b\) and \(P_{2}\) be the empty property, then \(P_{2}\) is the negation of \(P_{1}\) and vice versa. Moreover, the conjunction of \(P_{1}\) with itself is just \(P_{1}\); the conjunction of \(P_{2}\) with itself is just \(P_{2}\); and the conjunction of \(P_{1}\) with \(P_{2}\) (and the conjunction of \(P_{2}\) with \(P_{1}\)) is just \(P_{2}\), since nothing exemplifies both \(P_{1}\) and \(P_{2}\). And so on for the other unary instances of (CP). Now for the case of binary relations, let \(R_{1}\) be the relation that relates \(b\) to itself, and \(R_{2}\) be the empty relation. Then \(R_{1}\) is the negation of \(R_{2}\), and vice versa. Moreover, \(R_{1}\) and \(R_{2}\) both have converses—each has itself as a converse. \(R_{1}\) is a converse of itself because \(R_{1}bb \equiv R_{1}bb\), and \(R_{2}\) is a converse of itself for a similar reason, though in this second case, the biconditional \(R_{2}bb \equiv R_{2}bb\) is true because both sides are false. And so on for the other binary instances of (CP).
So if we don’t add any distinguished, theoretical properties and relations, 2OL\(^{=}\) doesn’t commit us to much at all. But though 2OL\(^{=}\) does commit us to the existence of converse relations, it does not commit us to the existence of non-symmetric relations. In the smallest models of 2OL\(^{=}\), as we just saw, there are only two binary relations; we’ve called them \(R_{1}\) and \(R_{2}\). Note that both \(R_{1}\) and \(R_{2}\) are symmetric; they both satisfy the open formula \(\forall x\forall y(Fxy\to Fyx)\). \(R_{1}\) satisfies this formula because \(b\) is the only object that can instantiate the 1st-order quantifiers and \(R_{1}bb \to R_{1}bb\) is a theorem of logic; it is an instance of the tautology \(\varphi\to\varphi\) (note that the consequent is true and so the whole conditional is true). \(R_{2}\) is symmetric because, again, \(b\) is the only object that can instantiate the 1st-order quantifiers and the tautology \(R_{2}bb\to R_{2}bb\) is again a theorem of logic (note that the antecedent is false, and so the whole conditional is true). We can consider this same point proof-theoretically: the claim \(\exists F(\textit{Non-symmetric}(F))\) is not a theorem of this logic.12
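To make the point about the smallest models vivid, here is a small, purely extensional sketch that checks it by brute force. It is only an illustration: representing relations by their extensions (sets of pairs) ignores the hyperintensional treatment of relations defended later in the paper, and the names D, R1, R2, symmetric, and converse_of are merely illustrative.

```python
# A purely extensional, brute-force illustration of the smallest model
# described above: one individual b, two binary relations.

D = {'b'}            # domain of individuals: a single element b

R1 = {('b', 'b')}    # the binary relation that relates b to itself
R2 = set()           # the empty binary relation

def symmetric(R):
    """R is symmetric iff for all x, y in D: if Rxy then Ryx."""
    return all((y, x) in R for x in D for y in D if (x, y) in R)

def converse_of(G, R):
    """G is a converse of R iff for all x, y in D: Gxy iff Ryx."""
    return all(((x, y) in G) == ((y, x) in R) for x in D for y in D)

for name, R in [('R1', R1), ('R2', R2)]:
    print(name,
          '| symmetric:', symmetric(R),
          '| its own converse:', converse_of(R, R))

# Prints True for every check: both R1 and R2 are symmetric, and each is
# its own converse, so in this smallest model assumption (5) fails.
```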
Of course, (6) can still be true even if there are no non-symmetric relations, by failure of the antecedent. But the key fact is not that (6) is true independently of the existence of non-symmetric relations, but that it is derivable as a theorem. The proof doesn’t depend on the existence of non-symmetric relations, doesn’t employ any analysis of predication, and doesn’t require any particular semantic interpretation of the domain over which the relation variables range. I’ve put the proof in a footnote.13 So the formal representation of (1), namely (6), is a theorem of 2OL\(^{=}\).
But the combination of (6) with the reasonable assumption (5) yields the conclusion that there are mutually converse predicates that don’t refer to the same underlying relation. For let “\(R\)” be a witness to assumption (5), so that we know \(\textit{Non-symmetric}(R)\). Then, by (6), we obtain the conclusion \(\exists G(\textit{ConverseOf\thinspace}(G,R)\mathbin{\&}G \neq R)\), which tells us that \(R\) has a distinct converse. But we’re not quite done; the condition leading to the first horn of the Dilemma for Converses is about predicates, and to show that it is false, we need a bit more reasoning and semantic ascent. So let “\(S\)” be a witness to our last result, so that we know \(\textit{ConverseOf\thinspace}(S,R)\mathbin{\&}S \neq R\). Then, by semantic ascent, we have established that the predicates “\(R\)” and “\(S\)” denote converse relations that are distinct. Thus, the condition leading to the first horn of the Dilemma for Converses, namely that pairs of mutually converse predicates refer to the same underlying relation, fails in 2OL\(^{=}\) under any interpretation. We therefore need to consider only the second horn.
2.2 Simplifying the Reasoning
Before we turn to the second horn of MacBride’s Dilemma for Converses in section 3, it is relevant, and of significant interest, that (1) can be represented, and its proof developed much more elegantly, if we add \(\lambda\)-expressions to 2OL\(^{=}\). \(\lambda\)-expressions are complex terms that denote relations, and they will play an important role in what follows. We begin the explanation of how \(\lambda\)-expressions simplify our definitions and theorems about converses by saying a few words about the logic that results when we add these expressions.14 Assume, therefore, that we have added complex, \(n\)-ary relation terms of the form \([\lambda x_{1}\ldots x_{n}\:\varphi]\) to the definition of our language (\(n\geq 0\)) given in footnote 7. When \(n\geq 1\), we read \([\lambda x_{1}\ldots x_{n}\:\varphi]\) as being objects \(x_{1},\ldots ,x_{n}\) such that \(\varphi\); when \(n=0\), we read \([\lambda\:\varphi]\) as that-\(\varphi\). Thus, \(\lambda\)-expressions do not denote functions, as in the functional \(\lambda\)-calculus, but rather relations, and in the \(0\)-ary case, they denote states of affairs. A simple predication like “\([\lambda x\:\neg Px]y\)” asserts that \(y\) exemplifies being an object x that fails to exemplify P, and “\([\lambda\:\neg Rab]\)” denotes the state of affairs that a and b don’t exemplify R.
By adding \(\lambda\)-expressions to 2nd-order logic, we can replace (CP) by:
\(\lambda\)-Conversion (\(\lambda\)C) \([\lambda x_{1}\ldots x_{n}\:\varphi]x_{1}\ldots x_{n}\equiv\varphi\)
This asserts: \(x_{1},\ldots ,x_{n}\) exemplify being objects \(x_{1},\ldots ,x_{n}\) such that \(\varphi\) if and only if \(\varphi\). For example, \([\lambda xy\:\neg Fxy]xy \equiv \neg Fxy\) is an instance, and by universal generalization, it is a theorem of the relational \(\lambda\)-calculus that:
\(\forall F\forall x\forall y([\lambda xy\:\neg Fxy]xy \equiv \neg Fxy)\)
To see how this works, instantiate this theorem to an arbitrary binary relation \(R\) and then to arbitrary objects \(a\) and \(b\). The result is the instance: \([\lambda xy\:\neg Rxy]ab \equiv \neg Rab\).15
As previously mentioned, (\(\lambda\)C) eliminates the need for (CP) since the latter becomes derivable. The proof is left to a footnote.16 This applies even to the \(0\)-ary case of (\(\lambda\)C). When \(n=0\), (\(\lambda\)C) asserts \([\lambda\:\varphi]\equiv\varphi\), i.e., that-\(\varphi\) obtains if and only if \(\varphi\).17 For example, the formula \([\lambda\:\neg Lmj]\equiv\neg Lmj\) might be used to represent the claim: (the state of affairs) that-Mary-doesn’t-love-John obtains if and only if Mary doesn’t love John. Note that the \(0\)-ary case of (CP) immediately follows from the \(0\)-ary case of (\(\lambda\)C), by Existential Introduction.18 Again, the \(0\)-ary case of (\(\lambda\)C) will play a role later, but for now, let’s focus on the cases where \(n\geq 1\).
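Before moving on, and since the derivation is short, here is a minimal sketch of why (CP) becomes derivable once (\(\lambda\)C) is available (the official proof remains the footnoted one); the only assumption is that \([\lambda x_{1}\ldots x_{n}\:\varphi]\) is a well-formed relation term whenever \(\varphi\) satisfies the proviso on (CP):

\[
\begin{array}{lll}
1. & [\lambda x_{1}\ldots x_{n}\:\varphi]x_{1}\ldots x_{n}\equiv\varphi & \text{instance of }(\lambda\text{C})\\
2. & \forall x_{1}\ldots\forall x_{n}([\lambda x_{1}\ldots x_{n}\:\varphi]x_{1}\ldots x_{n}\equiv\varphi) & \text{from 1, by universal generalization}\\
3. & \exists F^{n}\forall x_{1}\ldots\forall x_{n}(F^{n}x_{1}\ldots x_{n}\equiv\varphi) & \text{from 2, by existential introduction, given that }F^{n}\text{ isn’t free in }\varphi
\end{array}
\]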
We can take advantage of \(\lambda\)-expressions to introduce a well-behaved converse operator \((\:)^{*}\) on predicates. Where \(F\) is a binary relation, we may define the converse of \(F\), i.e., \(F^{*}\), as being an x and y such that y and x exemplify F, i.e.,
(7) \(F^{*} =_{\mathit{df}} [\lambda xy\:Fyx]\)
Note how this definition immediately implies that every relation has a converse, where this is expressible as \(\forall F\exists G(G=F^{*})\).19 A fortiori, every non-symmetric relation has a converse. Thus, we can now represent and prove (1) more elegantly as the claim that for any binary relation \(F\), if \(F\) is non-symmetric, then its converse \(F^{*}\) is distinct:20
(8) \(\forall F(\textit{Non-symmetric}(F)\to F^{*}\neq F)\)
Again, I’ve put the proof in a footnote,21 and I encourage the reader to compare the proof of (8) in footnote 21 with the proof of (6) in footnote 13 to confirm how \(\lambda\)-expressions simplify the reasoning. Thus, as soon as we instantiate the reasonable assumption (5) to an arbitrary predicate, say “\(R\),” to conclude \(\textit{Non-symmetric}(R)\), we can immediately instantiate the new predicate “\(R^{*}\)” into (8) and then conclude \(R \neq R^{*}\). So by semantic ascent, the condition leading to the first horn of the Dilemma for Converses is false.
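For readers who want the flavor of the reasoning without consulting the footnotes, here is a minimal sketch of how a proof of (8) can go; it assumes only definition (4), definition (7), (\(\lambda\)C), and the substitutivity of identicals:

\[
\begin{array}{lll}
1. & \textit{Non-symmetric}(F) & \text{assumption, for conditional proof}\\
2. & \exists x\exists y(Fxy\mathbin{\&}\neg Fyx) & \text{from 1, by (4) and quantifier negation}\\
3. & Fab\mathbin{\&}\neg Fba & \text{from 2, letting }a\text{ and }b\text{ be witnesses}\\
4. & F^{*}ab\equiv Fba & \text{from (7) and }(\lambda\text{C})\\
5. & \neg F^{*}ab & \text{from 3 and 4}\\
6. & F^{*}\neq F & \text{from 3 and 5, by the substitutivity of identicals}\\
7. & \forall F(\textit{Non-symmetric}(F)\to F^{*}\neq F) & \text{from 1–6, by conditional proof and generalization}
\end{array}
\]

The corresponding proof of (6) proceeds similarly, the main difference being that the converse witness must come from an instance of (CP) rather than from a \(\lambda\)-expression.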
Thus, when we add \(\lambda\)-expressions to 2OL\(^{=}\), the concepts and claims simplify and clarify. I’ll therefore use (8) as the clearer representation of (1) in what follows. But my analysis will apply to (6) as well. Both (6) and (8) have been established as formal theorems without any analysis of predication or any semantic arguments about converses.
3 The Second Horn
MacBride’s Dilemma for Converses concludes that the quantifiers of 2OL don’t range over relations, and we’ve now seen that the first horn of the dilemma fails in 2OL\(^{=}\) (i.e., the logic needed to systematize talk about the identity or distinctness of relation converses). The argument in the second horn was sketched at the beginning of section 2 above. But a fuller sketch of the argument emerges later in the paper, beginning in the following passage:
But even if pairs of mutually converse relations are admitted, thus avoiding the difficulties that arose from dispensing with them, higher-order predicates of the form ‘\(a\,\Phi\,b\)’ are still required for the intelligibility of quantification into the positions of converse predicates, i.e., higher-order predicates capable of being true or false of a relation belonging to the domain independently of how that relation is specified. […]
[…] [D]o we have an understanding of higher-order predicates of the form “\(a\,\Phi\,b\)” which will enable us to interpret second-order quantification as quantification over a domain of relations? I will argue that we don’t. (2022, 14)
Before we look at the specific way in which MacBride argues for this conclusion, let’s first make the language that MacBride needs to present his argument a bit more precise.
3.1 Third-Order Language and Logic (3OL)
I shall suppose that MacBride’s language is 3rd-order, since he wants to formulate higher-order predicates capable of being true or false of relations. If we use \(\lambda\)-expressions, we can formally represent the higher-order property connected with the open formula “\(Fab\)” as \([\lambda F\:Fab]\). We read this \(\lambda\)-expression as: being a relation \(F\) such that \(a\) and \(b\) exemplify \(F\). So let us take on board the resources of a 3rd-order language and logic (3OL), including monadic, higher-order \(\lambda\)-expressions of the form \([\lambda F\:\varphi]\) for denoting complex properties of relations. 3OL lets us quantify over, and denote, properties of relations such as \([\lambda F\:\forall xFxx]\) (“being a relation \(F\) that is reflexive”) and such as \([\lambda F\:\neg \forall x\forall y(Fxy \to Fyx)]\) (“being a relation that is non-symmetric”), etc.
In 3OL, \(\lambda\)-expressions of the form \([\lambda F\:\varphi]\) are governed by the following schema:
(Monadic) Third-Order \(\lambda\)-Conversion (3\(\lambda\)C) \([\lambda F\:\varphi]F \equiv \varphi\)
I.e., \(F\) exemplifies being a relation such that \(\varphi\) if and only if \(F\) is such that \(\varphi\). So by Universal Generalization, the following is a theorem schema of 3OL:
(9) \(\forall F([\lambda F\:\varphi]F \equiv \varphi)\)
With this formalization in mind, we can return to MacBride’s argument.
MacBride argues that in order for “\(\exists F(Fab)\)” to be interpreted as quantifying over relations, we have to be able to grasp the higher-order predicate associated with the expression “\(Fab\)” as being true or false of relations independently of how such relations are named or picked out. He then proceeds to consider and reject a number of proposals for so understanding “\(Fab\).”
3.2 The First Argument for the Second Horn
The first proposal that MacBride considers, and rejects, appeals to the determinate-determinable distinction. Earlier in his paper, he defined “\(Fab\)” as having a determinable significance when it “is true of the referent \(R\) of a first-level predicate […] just in case \(R\) relates [\(a\)] to [\(b\)] in some manner or other but without settling any determinate arrangement for them” (2022, 9). He now argues that the suggestion, that “\(Fab\)” has a determinable significance, gets the truth conditions wrong for non-symmetric relations. Let us use sentences numbered in square brackets to reference the numbered sentences in MacBride’s paper and consider these two sentences:
[1] Alexander is on top of Bucephalus.
[8] \(\neg\,\)Bucephalus is on top of Alexander.
He says, in connection with these sentences:
If ‘Alexander \(\Phi\) Bucephalus’ has purely determinable significance, then ‘Bucephalus \(\Phi\) Alexander’ does too, but they will mean the same. The latter will stand for a property that a relation has if it relates Bucephalus and Alexander in some manner or other. But a relation has the property of relating Bucephalus and Alexander in some manner or other iff it has the property of relating Alexander and Bucephalus in some manner or other—because the property of relating some things in some manner or other is order-indifferent. (2022, 15)
He then draws the conclusion that we can’t explain the valid inference from [1] to [8] given this analysis, for whereas [1] says that on top of has the order-indifferent property of relating Alexander and Bucephalus in some manner or other, [8] says that this relation doesn’t have that property.
MacBride quite rightly rejects the suggestion that “\(Fab\)” has a determinable significance, but for the wrong reasons. He rejects the suggestion on the grounds that it can’t explain the valid inference from [1] to [8], whereas I think we can reject it because, as we’ll see below, (3\(\lambda\)C) already shows that “\(Fab\),” “\(Fba\),” and “\(\neg Fba\)” have a determinate rather than a determinable significance. Before we examine this claim in more detail, let me put one issue aside, to be revisited later (in the context of the next suggestion), namely, whether [1] and [8] say what MacBride claims they say. I don’t think they do, but we need not develop the issue at this point.
Instead, we can see that “\(Fab\),” “\(Fba\),” and “\(\neg Fba\)” have a determinate significance by considering the higher-order predicates of relations that can be constructed with the help of these formulas. We may represent the higher-order properties signified as \([\lambda F\:Fab]\), \([\lambda F\:Fba]\), and \([\lambda F\:\neg Fba]\). These higher-order properties are all well-defined. To see why, let \(\varphi\) in (9) be, successively, \(Fab\), \(Fba\), and \(\neg Fba\), and instantiate the quantifier \(\forall F\) to the relation \(R\) in each case. Then all of the following are theorems of 3OL derivable from (3\(\lambda\)C):
(10) \([\lambda F\:Fab]R \equiv Rab\)
(11) \([\lambda F\:Fba]R \equiv Rba\)
(12) \([\lambda F\:\neg Fba]R \equiv \neg Rba\)
These are not schemata. (10) says: relation \(R\) exemplifies being a relation F such that a and b exemplify F just in case \(a\) and \(b\) exemplify \(R\). (11) says: \(R\) exemplifies being a relation F such that b and a exemplify F just in case \(b\) and \(a\) exemplify \(R\). And (12) says: \(R\) exemplifies being a relation F that b and a fail to exemplify just in case \(b\) and \(a\) fail to exemplify \(R\).
Thus, “Alexander \(\Phi\) Bucephalus” (“\(Fab\)”) and “Bucephalus \(\Phi\) Alexander” (“\(Fba\)”) have a determinate significance represented, respectively, by the higher-order properties \([\lambda F\:Fab]\) and \([\lambda F\:Fba]\). Moreover, they clearly don’t mean the same; they aren’t even materially equivalent. \([\lambda F\:Fab]\) is exemplified by \(R\), given the fact that \(Rab\) and (10), and \([\lambda F\:Fba]\) fails to be exemplified by \(R\), given the fact that \(\neg Rba\) and (11). So we need not accept the proposal that “Alexander \(\Phi\) Bucephalus” has a determinable significance, nor the premise about what that hypothesis implies for understanding [1] and [8]. The fact is, expressions of the form “\(Fab\)” can be interpreted in terms of determinate higher-order properties, as we have just done, and so (10) gives us the philosophical means for understanding the open formula “\(Fab\)” for an arbitrary relation \(R\).
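To record the point compactly, let \(R\) be the on top of relation and assume, as in the text, the facts reported by [1] and [8]; then a short derivation in 3OL gives:

\[
\begin{array}{lll}
1. & Rab\mathbin{\&}\neg Rba & \text{the facts reported by [1] and [8]}\\
2. & [\lambda F\:Fab]R & \text{from 1 and (10)}\\
3. & \neg[\lambda F\:Fba]R & \text{from 1 and (11)}
\end{array}
\]

So \(R\) itself witnesses the failure of material equivalence between \([\lambda F\:Fab]\) and \([\lambda F\:Fba]\).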
3.3 The Second Argument for the Second Horn
The next proposal that MacBride considers and rejects is the suggestion that we understand “\(Fab\)” in terms of a higher-order property of relations in which ordinal notions (“first,” “second”) play some role. In particular, the proposal under consideration is that “\(Fab\)” is to be understood in terms of the higher-order property that a relation has if it applies to \(a\) first and \(b\) second. MacBride develops an extended argument (2022, 16–28) against this proposal by advancing a number of considerations. At the end, he concludes: “[…] we lack a grasp of the higher-order predicates required to characterize relations in a higher-order setting, a grasp that is appropriately rooted in our understanding of atomic statements” (2022, 25). This conclusion is then supposed to entail that we can’t understand the quantified formula “\(\exists F(Fab)\)” as quantifying over relations.
Let’s grant that the entailment holds. Then we can respond to the argument by showing that we do have a grasp of the higher-order predicates required to understand quantification over relations. Fortunately, we don’t have to go through the extended argument in detail because we can demonstrate that our grasp of these higher-order predicates is embodied by (3\(\lambda\)C). Over the next few paragraphs, I (a) show why (3\(\lambda\)C) is the right principle, (b) defuse some reasons that might be offered as to why it isn’t, (c) show how (3\(\lambda\)C) helps us to undermine some of the claims MacBride makes during the course of his argument for the second horn, and (d) narrow our focus to a question that is, at least in part, driving MacBride’s concern about quantification over relations.
Clearly, (3\(\lambda\)C) is a logical principle, and it states exemplification (i.e., “application”) conditions for the higher-order properties denoted by predicates of the form \([\lambda F\:\varphi]\). So, we do not lack a principled grasp of the higher-order predicate “\([\lambda F\:Fab]\)” that is formulable from the open formula “\(Fab\).” We saw that (10) is an instance of (3\(\lambda\)C) and so offers a principled statement of the application conditions of the higher-order property \([\lambda F\:Fab]\). Clearly, one must distinguish the open formula “\(Fab\)” from the closed predicate “\([\lambda F\:Fab]\)” to even formulate (3\(\lambda\)C).
MacBride does seem to recognize that (3\(\lambda\)C) forms the basis of a genuine response to his argument, for he subsequently considers an informal version of (3\(\lambda\)C). He writes:
Might there be an alternative interpretation of higher-order predicates of the form ‘\(a\Phi b\)’ over which we have more control and which will facilitate an interpretation of second-order quantifiers as ranging over a domain of relations? The ordinary language construction “—bears---to___,” as it figures in
[14] Alexander bears a great resemblance to Philip,
might appear to be a promising candidate for a construction in which our understanding of a predicate of the form ‘\(a\Phi b\)’ might be rooted. Roughly speaking, the idea is that a relation \(R\) satisfies the predicate ‘\(a\Phi b\)’ just in case \(a\) bears \(R\) to \(b\), whereas \(R\) satisfies ‘\(b\Phi a\)’ just in case \(b\) bears \(R\) to \(a\). (2022, 22–23)
MacBride then argues against this idea (2022, 23–24). But I will not examine the details of this particular argument, for it appears to challenge the intelligibility of a well-known logical principle, namely \(\lambda\)-Conversion (\(\lambda\)C), in its higher-order guise as (3\(\lambda\)C). I take both principles to be perfectly intelligible; they axiomatize complex predicates of the form \([\lambda\alpha\:\varphi]\) by precisely identifying their exemplification (or application) conditions. To my mind, the discussion in (2022, 23–24) doesn’t clearly separate the logic from the way natural language is to be represented in that logic.
Note that one can’t reject (3\(\lambda\)C) on the grounds that it is trivial. One might argue that (3\(\lambda\)C) trivially recasts the open formula as a higher-order predicate and so doesn’t help us understand “\(Fab\)” or the higher-order property in question. But neither (\(\lambda\)C) in 2OL nor (3\(\lambda\)C) in 3OL is trivial. (\(\lambda\)C) in 2OL is a significant principle that is an integral part of the \(\lambda\)-calculus of relations and thus one of the key axioms for axiomatizing relations (see Zalta 1983, 69; 1993, 406; Menzel 1986, 38; and Menzel 1993, 84). It is stronger than (CP) (it implies (CP), as we’ve seen, but (CP) doesn’t imply it), and it is not plausible to suggest that (CP) is a trivial principle. (3\(\lambda\)C) has a similar significance in 3OL.22
By systematizing the distinction between an open formula such as “\(Fab\)” and the higher-order predicate “\([\lambda F\:Fab]\),” it becomes clear that (3\(\lambda\)C) may even be an assumption of MacBride’s paper that addresses the concern he raises, since the right-to-left direction of (3\(\lambda\)C) tells us that if a relation \(R\) satisfies the open formula “\(Fab\),” then \(R\) exemplifies the higher-order property \([\lambda F\:Fab]\). And since (3\(\lambda\)C) is a biconditional that implies the converse of this last claim, we forestall MacBride’s conclusion that we lack a principled understanding of the application conditions of “\(Fab\).”23
So if (3\(\lambda\)C) gives a principled account of the significance of open formulas and the higher-order predicates we can build with such formulas, what then is really driving the concerns that MacBride has about quantifying over relations? To understand the root of the concerns, we have to consider one of the specific arguments that MacBride presents. He spends all of section 6 considering the consequences of supposing that relations hold between the objects they relate in an order. The underlying root of his concerns emerges when we consider the “untoward consequences” that allegedly result if we were to understand “\(Fab\)” in terms of a higher-order property that a relation has if it applies to \(a\) first and \(b\) second (2022, 17).
Now in the present paper, we’re not committed to reading the formula “\(Fab\)” as “\(F\) applies to \(a\) first and \(b\) second.” The notion of applying to … in an order isn’t a primitive of our logic; of course, one is tempted to say it is the position or place in the relation that \(a\) and \(b\) have to occupy rather than the order of application. But our logic isn’t even committed to that much; it isn’t committed to the existence of positions or places in a relation as entities (see Fine 2000, 16, for a defense of anti-positionalism). Our reading of “\(Fab\)” as “\(a\) and \(b\) exemplify \(F\)” doesn’t explicitly say that \(a\) occupies the first position (or place) of \(F\) and \(b\) the second.24 Similarly, when we read the predicate “\([\lambda F\:Fab]\)” as “being an \(F\) such that \(a\) and \(b\) exemplify \(F\),” this doesn’t require us to say further that \(F\) is such that \(a\) occupies its first position (or place) and \(b\) its second. But let’s grant, for the sake of argument, that the higher-order predicate involves ordinal notions in the way MacBride suggests and read it as “being an \(F\) such that \(F\) applies to \(a\) first and \(b\) second.” Under this reading, (3\(\lambda\)C) remains true. MacBride then considers symmetric and non-symmetric relational statements and, in each case, finds reasons to question the understanding of “\(Fab\)” in terms of ordinal notions. For example, with respect to the symmetric relation differs from, he argues that “Darius differs from Alexander” and “Alexander differs from Darius” intuitively say the same thing, but given the understanding of the open formulas “\(Fda\)” and “\(Fad\)” that we’re now considering, these formulas say different things. He argues:
Since second-order logic permits existential quantification into the positions of symmetric predicates, it follows—assuming the proposed interpretation of higher-order predicates—that atomic statements in which symmetric predicates occur attribute to symmetric relations the property of applying to the things they relate in an order. But it is far from plausible that they do. Consider, for example,
[9] Darius differs from Alexander
and
[10] Alexander differs from Darius.
If predicates of the form “\(a \Phi b\)” mean what they’re proposed to mean, then [9] says that the relation picked out by “\(\xi\) differs from \(\zeta\)” applies to Darius first and Alexander second, whereas [10] says that it applies to Alexander first and Darius second. But, as both linguists and philosophers have reflected, prima facie statements like [9] and [10] don’t say different things but are distinguished solely by the linguistic arrangements of their terms. (2022, 17)
Although MacBride cites a number of authorities for his last claim, he also mentions that Russell (1903, sec. 94) argued against it and for the view that statements like [9] and [10] express distinct propositions.
Before I examine this argument, let me return to one issue. I don’t accept that [9] says what MacBride claims it says. [9] does not say, nor can one derive in 2OL or 3OL that it says, “the relation picked out by ‘\(\xi\) differs from \(\zeta\)’ applies to Darius first and Alexander second,” as MacBride suggests. For one thing, [9] doesn’t say anything about predicates picking out, or denoting, relations. Instead, [9] simply says Darius differs from Alexander (or, when regimented as \(d \neq a\), [9] says “\(d\) and \(a\) exemplify being non-identical”). Of course, when we regiment [9] as “\(d \neq a\)” and use 3OL, we can also instantiate our sentence (9) in section 3.1 to the non-identity relation \(\neq\) to obtain \([\lambda F\:Fda]{\neq} \equiv d \neq a\) and infer from this last fact and the representation of [9] that \([\lambda F\:Fda]{\neq}\), i.e., that the relation differs from exemplifies the higher-order property of being a relation Darius and Alexander exemplify. So, in what follows, I’ll treat MacBride’s reading of [9] not as what [9] says but as what [9] semantically implies in 3OL. And something similar applies to MacBride’s sentence [10].
Clearly, the crux of MacBride’s argument in the above passage is his view that [9] and [10] don’t say different things. But surely there is at least a sense of “says” in which [9] and [10] do say different things. If we ignore the particular symmetric relation involved and consider a non-symmetric relation, then to say “John loves Mary” is not to say “Mary loves John.” So MacBride’s argument must turn on a notion of “says” in which [9] and [10] say the same thing. For the purposes of discussion, the notion in question has to be something like “denote the same state of affairs.” He is convinced that they do, whereas I think this isn’t at all clear. The point at issue concerns the identity of states of affairs; if one allows, for example, that necessarily equivalent states of affairs may be distinct, it is by no means a fact that [9] and [10] say the same thing.25 Indeed, I hope to show in what follows that as long as we have a clear theory of relations and states of affairs (something that can be developed without the resources of 3OL), one can both (a) challenge the suggestion that [9] and [10] denote the same state of affairs and (b) argue that even if we leave the question open, we can still understand the application conditions of “\(Fab\)” and conclude that “\(\exists F(Fab)\)” quantifies over relations.26
But before we turn to the theory of relations and states of affairs that supports this position, the second puzzling conclusion mentioned at the outset of the paper, namely the conclusion in MacBride (2014), becomes relevant. For the argument in that paper also turns, at least in part, on the question of the identity of states of affairs.
4 The Second Puzzling Conclusion
To state the second puzzling conclusion, which occurs in MacBride (2014), we have to recall the second of the three degrees of relatedness that MacBride distinguishes in that paper. He says, where \(R^{*}\) signifies the converse of \(R\), that “to embrace the second degree is to make the existential assumption that every non-symmetric relation has a distinct converse (\(R\neq R^{*}\))” (2014, 3). He then argues that relatedness in the second degree “spells trouble” and has “unwelcome consequences,” namely, that it “commits us to a superfluity of converse relations and states” (2014, 4). Let’s consider these claims in turn, i.e., by focusing first on the superfluity of relations and then on the superfluity of states.
Let me begin by suggesting that the superfluity of converse relations is not the main objection of the two. For recall that the conclusion in MacBride (2014) is that we should take relations and relation application as primitive. Since these notions are primitive in 2OL\(^{=}\), the conclusion MacBride draws in (2014) doesn’t eliminate the multiplicity of relations. For when (1) is represented as (6), it becomes a theorem of 2OL\(^{=}\), as we saw in section 2.1. So the multiplicity of converse relations arises even when relations and relation application are primitive (given the assumption that non-symmetric relations exist). And this holds not only for binary non-symmetric relations but also for non-symmetric relations of higher arity.27 Though MacBride also suggests that we can’t name the relations given such a multiplicity, in fact we can denote them using \(\lambda\)-expressions.28 In any case, MacBride’s argument that relations and relation application should be taken as primitive doesn’t avoid the conclusion that there is a multiplicity of converse relations.
So the real problem about the fact that non-symmetric relations have distinct converses concerns the “profusion” of states of affairs. MacBride rehearses this problem by considering on and under, both of which are asymmetric (and hence non-symmetric if there are objects that stand in those relations):
It’s one kind of undertaking to put the cat on the mat, something else to put the mat under the cat, but however we go about it we end up with the same state. To bring the cat to the forefront of our audience’s attention we describe this state by saying that the cat is on the mat; to bring the mat into the conversational foreground we say that the mat is under the cat. But whether it’s the cat we mention first, or the mat, what we succeed in describing is the very same cat-mat orientation. That’s intuitive but if—as the second degree describes—a non-symmetric relation and its converse are distinct, we must be demanding something different from the world, a different state, when we describe the application of the above relation to the cat and the mat from when we describe the application of the below relation to the mat and the cat. (2014, 4)
The worry is that converse relations commit us to the principle that if \(R\) is non-symmetric, then for any \(x\) and \(y\), the state of affairs \(Rxy\) is distinct from the state of affairs \(R^{*}yx\). We can formally represent the allegedly problematic principle as follows:
(13) \(\forall F\Box (\textit{Non-symmetric}(F) \to \forall x\forall y(Fxy \neq F^{*}yx))\)
This, it is claimed, is counterintuitive, and MacBride cites Fine (2000) in support of his claim.29 If this is the concern, why not adopt the following principle instead:
- For any binary relation \(F\), necessarily, if \(F\) is non-symmetric, then for any \(x\) and \(y\), the state of affairs x and y exemplify F is identical to the state of affairs y and x exemplify \(F^{*}\), i.e.,
(14) \(\forall F\Box (\textit{Non-symmetric}(F) \to \forall x\forall y(Fxy = F^{*}yx))\)
The answer MacBride gives (2014, 4) is this:
We might attempt to defend the second degree by maintaining that the application of \(R\) and \(R^{*}\) does not give rise to different states with respect to the same relata but different decompositions of the same state. So whilst above and below are distinct, the relational configuration cat-above-mat is a decomposition of the same state as the configuration mat-below-cat. But these decompositions comprise what are ultimately different constituents—a non-symmetric relation and its converse are supposed to be distinct existences. But now we have the difficulty of explaining how such different decompositions can give rise to a single state.
So, again, the problem being raised is about the identity of states of affairs. In these cases, MacBride is confident that there is a single state involved.
Note that we’ve now connected up the issue on which MacBride’s (2022) paper turns with the issue on which his (2014) paper turns, namely, the identity of states of affairs. What gives rise to this problem is that 2OL and 2OL\(^{=}\) don’t have the resources to supply a good definition of the conditions under which states of affairs are identical, even if we add modality to the logic. For neither of the following definitions is a good one:
\(p = q \equiv_{\mathit{df}} p\equiv q\)
\(p = q \equiv_{\mathit{df}} \Box (p\equiv q)\)
It is reasonable to suppose that the state of affairs there is a barber who shaves all and only those who don’t shave themselves (\(\exists x(Bx\mathbin{\&}\forall y(Sxy\equiv\neg Syy))\)) is distinct from the state of affairs there is a brown and colorless dog (\(\exists x(Dx\:\&\:Bx\:\&\:\neg Cx)\)), yet these are not just equivalent but necessarily equivalent (since both are necessarily false).
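To see that the first of these is indeed necessarily false, one can reason as follows (a standard derivation, sketched here only as a reminder, with \(c\) an arbitrary name introduced for the reductio):

- Suppose \(\exists x(Bx\mathbin{\&}\forall y(Sxy\equiv\neg Syy))\), and let \(c\) be such a barber, so that \(\forall y(Scy\equiv\neg Syy)\).
- Instantiating \(y\) to \(c\) yields \(Scc\equiv\neg Scc\), a contradiction; so the supposition is impossible.

The second state of affairs is necessarily false on the natural assumption that, necessarily, whatever is brown is colored.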
So whereas both of the above definitions might be used to explain why \(Fxy = F^{*}yx\) (e.g., “they are identical because they are necessarily equivalent”), the definitions fail when states of affairs (or propositions) are regarded as hyperintensional entities. The identity conditions for states of affairs are more fine-grained than material or necessary equivalence. Furthermore, when \(F\) is non-symmetric, there is no obvious way to account for the identity of \(Fab\) and \(F^{*}ba\) by appealing to some notion of “constituents.” On what grounds, expressible in 2OL, would one claim that the distinct constituents \(F\), \(F^{*}\), \(a\), and \(b\) can be combined so that the identity \(Fab = F^{*}ba\) holds?30 And how can one state hyperintensional identity conditions for states of affairs that also allow us to assert, in the case of a non-symmetric relation \(F\), that \(Fab = F^{*}ba\)?
MacBride, as noted at the outset, frames this problem as a dilemma for any analysis of the identity (or non-identity) of states of affairs. We earlier provided an edited version of the argument to give the reader the general idea. But the passage posing the dilemma goes as follows, in full:
What vexes the understanding is the difficulty of disentangling one degree of relatedness from another when we try to provide an analysis of the fundamental fact that \(aRb \neq bRa\) for non-symmetric \(R\). We can usefully distinguish, albeit in a rough and ready sense, between two analytic strategies for explaining this fundamental fact—that the world exhibits relatedness in the first degree. Intrinsic analyses aim to account for the fact that \(aRb \neq bRa\) by appealing to features of those states themselves; extrinsic analyses attempt to account for their difference by appealing to features that aren’t wholly local to them. Anyone who wishes to give an analysis of the fact that \(aRb \neq bRa\) faces a dilemma. If they adopt the intrinsic strategy then they will find it difficult to avoid a commitment to either \(R\)’s converse or an inherent order in which \(R\) applies to the things it relates. Alternatively our would-be analyst can avoid entangling the first degree with the second and third by adopting the extrinsic strategy. But this approach embroils us in other unwelcome consequences. Since neither intrinsic nor extrinsic analyses are satisfactory, this recommends our taking the fact that \(aRb \neq bRa\) to be primitive. (2014, 8, italics in original)
I think MacBride reaches this conclusion because he doesn’t have a precise theory of relations and states of affairs to provide an answer. In the remainder of the paper, I show how object theory (OT) takes \(n\)-ary relations as primitive (including states of affairs, understood as \(0\)-ary relations), takes relation application (predication) as primitive, but defines identity for relations and states of affairs. These identity conditions don’t appeal to “decompositions” or “constituents.” Nevertheless, they allow one to consistently assert that (some) necessarily equivalent relations and states may be distinct. Using this theory of relations and states, we can address the “profusion of states” problem (in MacBride 2014) in either of two ways and address the problem underlying the first puzzling conclusion (in MacBride 2022) as well. As we shall see, a precise theory of relations and states may leave certain identity questions open, just as the precise theory of sets ZFC leaves open certain identity questions. The solution in ZFC is not to conclude that its quantifiers can’t range over sets but to find and justify axioms that help decide the open questions within the precise, but extendable, framework ZFC provides (i.e., one that clearly quantifies over sets). Something similar happens in OT.
5 The Theory of Relations and States of Affairs
This section can be skipped by those familiar with OT since the material contained herein has been outlined and explained in a number of publications [e.g., Zalta 1983, 1988, 1993; Bueno, Menzel, and Zalta 2014; Menzel and Zalta 2014; and others]. For those completely unfamiliar with it, OT may be sketched briefly by saying that it extends 2OL, not 2OL\(^{=}\), since identity isn’t taken as a primitive. OT adds to 2OL new atomic formulas of the form “\(xF\),” which represent a new mode of predication that can be read as “\(x\) encodes \(F\),” where “\(F\)” can be replaced by any unary predicate. Intuitively, “\(xF\)” expresses the idea that \(F\) is one of the properties by which we conceive and characterize an abstract, intentional object \(x\).31 OT also includes a distinguished unary relation constant “\(E!\)” for being concrete, a primitive necessity operator (\(\Box\)), and a defined possibility operator (\(\Diamond\)). OT then defines ordinary objects (“\(O!x\)”) as objects \(x\) that might exemplify concreteness and defines abstract objects (“\(A!x\)”) as objects \(x\) that couldn’t exemplify concreteness. It is axiomatic that ordinary objects necessarily fail to encode properties (\(O!x \to \Box \neg \exists FxF\)), though the theory allows that abstract objects can both exemplify and encode properties. It is also axiomatic that if \(x\) encodes a property, it necessarily does so (\(xF \to \Box xF\)).
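For reference, the two definitions just described can be written out in schematic form, using the primitive \(E!\) and the modal operators:

- \(O!x \equiv_{\mathit{df}} \Diamond E!x\)
- \(A!x \equiv_{\mathit{df}} \neg\Diamond E!x\)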
But the key principle for abstract objects is the comprehension schema that asserts, for any condition (formula) \(\varphi\) in which \(x\) doesn’t occur free, that there exists an abstract object that encodes all and only the properties such that \(\varphi\):
(15) \(\exists x(A!x\mathbin{\&}\forall F(xF \equiv \varphi))\)
Here are some instances, expressed in technical English:
There exists an abstract object that encodes all and only the properties that \(y\) exemplifies.
\(\exists x(A!x\mathbin{\&}\forall F(xF \equiv Fy))\)
There exists an abstract object that encodes just the property \(G\).
\(\exists x(A!x\mathbin{\&}\forall F(xF \equiv F = G))\)
There is an abstract object that encodes all the properties necessarily implied by \(G\).
\(\exists x(A!x\mathbin{\&}\forall F(xF \equiv \Box \forall y(Gy \to Fy)))\)
There is an abstract object that encodes all and only the propositional properties constructed out of true propositions.
\(\exists x(A!x\mathbin{\&}\forall F(xF \equiv \exists p(p\mathbin{\&}F = [\lambda z\:p])))\)
And so on. Intuitively, for any group of properties you can specify to describe an abstract object, there is an abstract object that encodes just those properties and no others.
The other principles of this theory that will play an important role in what follows are the definitions of identity for individuals and the principles (existence and identity conditions) for relations. First, the theory of identity for individuals includes a definition stipulating that \(x\) and \(y\) are identical if and only if they are both ordinary objects that necessarily exemplify the same properties or they are both abstract objects that necessarily encode the same properties:
(16) \(x=y\equiv_{\mathit{df}}(O!x\:\&\:O!y\:\&\:\Box\forall F(Fx\equiv Fy))\lor(A!x\:\&\:A!y\:\&\:\Box\forall F(xF\equiv yF))\)
Second, the theory of relations consists of existence and identity conditions for relations. The existence conditions are derived since OT includes the resources of the relational \(\lambda\)-calculus; \(\lambda\)-expressions of the form \([\lambda x_{1}\ldots x_{n} \varphi]\) are well-formed, but only if \(\varphi\) doesn’t have any encoding subformulas.32 So (\(\lambda\)C), as stated above, is the main axiom governing \(\lambda\)-expressions. One can derive from (\(\lambda\)C) a modal version of the comprehension principle for relations. This theorem schema, (\(\Box\)CP), asserts existence conditions for relations as follows:33
Modal Comprehension for Relations (\(\Box\)CP) \(\exists F^{n}\Box \forall x_{1}\ldots \forall x_{n}(F^{n}x_{1}\ldots x_{n} \equiv \varphi)\), provided \(F\) doesn’t occur free in \(\varphi\) and \(\varphi\) doesn’t contain any encoding subformulas.
When \(n = 1\) and \(n = 0\), respectively, this principle asserts existence conditions for properties and states of affairs:
\(\exists F\Box \forall x(Fx \equiv \varphi)\), provided \(F\) doesn’t occur free in \(\varphi\) and \(\varphi\) doesn’t contain any encoding subformulas.
\(\exists p\Box (p \equiv \varphi)\), provided \(p\) doesn’t occur free in \(\varphi\) and \(\varphi\) doesn’t contain any encoding subformulas.
In other words, any formula free of encoding conditions can be used to produce a well-formed instance of (\(\Box\)CP). It is of some interest that there are still very small models of OT; for example, the smallest model involves one possible world, one ordinary object, two \(0\)-ary relations, two unary relations, two binary relations, etc., and four abstract objects. Though the models grow when OT is applied, minimal models show that without further axioms, the theory doesn’t commit one to much. Thus, relations, properties, and states of affairs exist under conditions analogous to those in classical, modal 2OL.34
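To illustrate how instances of (\(\Box\)CP) are witnessed, take the \(n=1\) case with the encoding-free formula \(Rxb\), where \(R\) is any binary relation and \(b\) any individual constant (the particular formula is chosen here only for illustration). By (\(\lambda\)C), \(\Box\forall x([\lambda y\:Ryb]x \equiv Rxb)\), so the \(\lambda\)-expression \([\lambda y\:Ryb]\) witnesses \(\exists F\Box\forall x(Fx \equiv Rxb)\).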
The identity conditions for relations are stated by cases: (a) for properties \(F\) and \(G\), (b) for \(n\)-ary relations \(F\) and \(G\) (\(n\geq 2\)), and (c) for states of affairs \(p\) and \(q\). Identity for relations and states of affairs is defined in terms of identity for properties. The definitions are as follows:
- Properties \(F\) and \(G\) are identical if and only if \(F\) and \(G\) are necessarily encoded by the same objects, i.e.,
(17) \(F = G \ \equiv_{\mathit{df}} \ \Box \forall x(xF \equiv xG)\)
- \(n\)-ary relations \(F\) and \(G\) (\(n\geq 2\)) are identical just in case, for any \(n-1\) objects, every way of applying \(F\) and \(G\) to those \(n-1\) objects results in identical properties, i.e.,
(18) \(F=G\equiv_{\mathit{df}}\forall y_{1}\ldots\forall y_{n-1}([\lambda x\:Fxy_{1}\ldots y_{n-1}]=[\lambda x\:Gxy_{1}\ldots y_{n-1}]\mathbin{\&}[\lambda x\:Fy_{1}xy_{2}\ldots y_{n-1}]=[\lambda x\:Gy_{1}xy_{2}\ldots y_{n-1}]\mathbin{\&}\ldots\mathbin{\&}[\lambda x\:Fy_{1}\ldots y_{n-1}x]=[\lambda x\:Gy_{1}\ldots y_{n-1}x])\)
- States of affairs \(p\) and \(q\) are identical just in case (the property) being an individual \(z\) such that \(p\) is identical to (the property) being an individual \(z\) such that \(q\), i.e.,
(19) \(p=q\equiv_{\mathit{df}} [\lambda z\:p]=[\lambda z\:q]\)
From these definitions, it can be shown that the reflexivity of identity holds universally, i.e., that \(x = x\) is derivable from (16), that \(F = F\) is derivable from each of (17) and (18), and that \(p = p\) is derivable from (19). So OT asserts only the substitution of identicals as an axiom governing identity. It therefore has all the theorems about identity that are derivable in 2OL\(^{=}\). Identity is provably symmetric, transitive, etc., and since every term of the theory is interpreted rigidly, substitution of identicals holds in any (modal) context whatsoever.
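To give the flavor of these derivations, here is a minimal sketch of two of the cases:

- \(F=F\): \(\Box\forall x(xF\equiv xF)\) is a theorem of the background modal logic, so \(F=F\) follows immediately by (17).
- \(p=p\): applying the previous result to the property \([\lambda z\:p]\) yields \([\lambda z\:p]=[\lambda z\:p]\), so \(p=p\) follows by (19).

The case of \(x=x\) is analogous, using (16) together with the fact that every object is either ordinary or abstract.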
Since (\(\lambda\)C) is an axiom of OT, the foregoing facts make it clear that (8) is also a theorem of OT, by the same reasoning used in the proofs given earlier in the paper. So as soon as one adds the hypothesis that a particular binary relation, say \(R\), is non-symmetric, OT also implies that \(R^{*} \neq R\). And similarly for non-symmetric relations of higher arity. The multiplicity of relations is just a fact about both 2OL\(^{=}\) and OT when these systems are extended with the claim that non-symmetric relations exist. So taking relations and relation application as primitive still yields multiple converse relations for \(n\)-ary relations (\(n\geq 2\)). This is a consequence one should accept if one takes relations and relation application as primitive and treats them as hyperintensional entities.35 This multiplicity isn’t egregious, in any case, for as we’ve seen, \(\lambda\)-expressions give us the expressive power to distinguish among the converses of (non-symmetric) relations. So let’s return to the questions about the identity of states of affairs to see how they fare with a precise theory of relations and states of affairs in hand.
6 Asserting the Identity of States
Recall that the puzzling conclusion reached in MacBride’s (2022) paper turned on the question of whether the states of affairs denoted by [9] and [10] are the same or distinct. This question can now be posed without discussing the converses of relations and without invoking 3OL. Let \(R\) be any symmetric relation, and let \(a\) and \(b\) be two particular and distinct objects. Then consider the states of affairs \(Rab\) and \(Rba\) (or, if you prefer, \([\lambda\:Rab]\) and \([\lambda\:Rba]\)). MacBride apparently has no doubt they are the same state. So let’s suppose they are, i.e., that \(Rab = Rba\). And let’s again grant him the ordinalized readings of relational claims. What happens to the argument in which he concludes that if we understand “\(Fab\)” in terms of ordinalized, higher-order properties, then “\(Rab\)” and “\(Rba\)” don’t express the same state of affairs? Answer: it has no force against the theory of states of affairs in OT. For in OT, all that is relevant to the truth of “\(Rab = Rba\)” is principle (19), i.e., the question of whether the properties \([\lambda z\:Rab]\) and \([\lambda z\:Rba]\) are identical, i.e., by (17), whether there might be objects that encode \([\lambda z\:Rab]\) without encoding \([\lambda z\:Rba]\) (or vice versa). Given these definitions, one could, should one wish to do so, simply use OT to assert, as an axiom, that when \(R\) is symmetric, \([\lambda z\:Rab]\) and \([\lambda z\:Rba]\) are identical, i.e., that no abstract object encodes \([\lambda z\:Rab]\) without also encoding \([\lambda z\:Rba]\), and vice versa.
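For the record, the chain of definitions governing this question can be spelled out as follows: by (19), \(Rab=Rba\) just is the claim that \([\lambda z\:Rab]=[\lambda z\:Rba]\), and by (17), that in turn just is the claim that \(\Box\forall x(x[\lambda z\:Rab]\equiv x[\lambda z\:Rba])\). So the proposed axiom amounts precisely to the claim that, necessarily, an object encodes \([\lambda z\:Rab]\) if and only if it encodes \([\lambda z\:Rba]\).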
Does this mean we don’t understand the open formula “\(Fab\)” or the quantified claim “\(\exists F(Fab)\)”? Not at all. First, the semantics of OT is perfectly precise on this score. Let “\(\mathbfit{a}\)” and “\(\mathbfit{b}\)” be the semantic names of the objects assigned to “\(a\)” and “\(b\).” Now consider some assignment \(f\) to the variables of the language, and suppose that “\(\mathbfit{R}\)” is the semantic name of the relation assigned to the variable “\(F\)” by \(f\). Then the open formula “\(Fab\)” is true relative to \(f\) if and only if the state of affairs \(\mathbfit{Rab}\) obtains.36 And “\(\exists F(Fab)\)” is true just in case some relation in the domain satisfies the open formula “\(Fab\),” no matter how that relation is specified.
Second, OT doesn’t require a formal semantics to be intelligible, just as ZF is intelligible when we express its primitive notions and axioms within first-order logic. The axioms and theorems of OT give us an understanding of the open formula “\(xF\)” and, in turn, give us an understanding of the identity conditions for states of affairs expressed in (19). To suggest otherwise would be like suggesting that we don’t understand “\(x\in y\).” This is a primitive of set theory; set identity is stated in terms of this primitive, in the form of the principle of extensionality. The more we work through the consequences of the axioms (i.e., the more theorems we prove in set theory), the better we understand “\(x\in y\).” Analogous observations hold with respect to OT. The formula “\(xF\)” is a primitive mode of predication, and the identity conditions for properties and relations are stated in terms of this primitive. The more we work through the consequences of the axioms, the better we understand this form of predication.
So if one is inclined to accept MacBride’s view that the states of affairs expressed by [9] and [10] are identical, one should then be inclined to accept the following general principle:
(20) \(\forall F\Box(\textit{Symmetric}(F)\to\forall x\forall y(Fxy=Fyx))\)
(20) is consistent with OT. We need not conclude that the open formula “\(Fab\)” is unintelligible or that the second-order quantifiers don’t range over relations. Instead, we make use of a theory of relations and states of affairs in which relation application is primitive but identity is defined. And we address the problem by asserting a principle, not by concluding that the language is unintelligible; indeed, it seems to be the principle that MacBride is relying upon to make his case.
This generalizes to non-symmetric relations. For recall the objection to (14), which is the claim:
(14) \(\forall F\Box (\textit{Non-symmetric}(F) \to \forall x\forall y(Fxy = F^{*}yx))\)
The problem with (14), according to MacBride, is to explain how different decompositions can give rise to the same state (MacBride 2014, 4; quoted above). But no such explanation is needed, since the identity of states of affairs is not a matter of decompositions and constituents. If \(F\) is non-symmetric, then (14) implies, by definition (19), that \([\lambda z\:Fxy]=[\lambda z\:F^{*}yx]\), for any objects \(x\) and \(y\). That is consistent with OT.
Why does this address the difficulty in MacBride (2014, 4)? The answer: because we’re not attempting to explain how “distinct existences” (i.e., a non-symmetric relation \(F\), its converse \(F^{*}\), and objects \(x\) and \(y\)) can “give rise” to the same state; we’re instead proposing that one adopt a principle (indeed, a principle on which MacBride relies) that asserts that they do, without appealing to “decompositions,” “constituents,” etc. The definitions of identity for abstract objects (16) and for properties (17) place reciprocal bounds on the existence of these entities. The theory’s comprehension principle and identity conditions for abstract objects tell us that any (expressible) condition on properties can be used to define an abstract object. If we think of abstract objects as objects of thought or as logical objects, then the theory implies that if properties \(F\) and \(G\) are distinct, then there is a logical, abstract object of thought that encodes \(F\) and not \(G\) (and vice versa). And if \(F\) and \(G\) are identical, then no logical, abstract object of thought encodes \(F\) without encoding \(G\). So if the properties \([\lambda z\:Fxy]\) and \([\lambda z\:F^{*}yx]\) are identical, then no logical, abstract object of thought encodes the one without encoding the other.37
By adopting (14), one can use OT’s theory of identity for states of affairs to give a precise, theoretical answer to a philosophical question (“Under what conditions are states of affairs identical?”) which, if left unanswered, would leave one open to MacBride’s concerns about the intelligibility of 2OL and 2OL\(^{=}\).38
Before we turn, finally, to the intuition that states of affairs like those expressed by [9] and [10] are distinct, there is one final way to formulate the concern that MacBride has raised, given his understanding of the identity of states of affairs. Consider the property \([\lambda z\:Fzy]\), i.e., being an object \(z\) such that \(z\) and \(y\) exemplify \(F\). Now predicate that property of \(x\) to obtain the state of affairs \([\lambda z\:Fzy]x\), i.e., \(x\) exemplifies the property of being a \(z\) such that \(z\) and \(y\) exemplify \(F\). Put this aside for the moment and now consider the property \([\lambda z\:Fxz]\), i.e., being an object \(z\) such that \(x\) and \(z\) exemplify \(F\). Now predicate that property of \(y\) to obtain the state of affairs \([\lambda z\:Fxz]y\), i.e., \(y\) exemplifies the property of being a \(z\) such that \(x\) and \(z\) exemplify \(F\). Now, we might ask:
(A) What is the relationship between the states of affairs \(Fxy\), \([\lambda z\:Fzy]x\), and \([\lambda z\:Fxz]y\)—are they all the same or are they all pairwise distinct?
If you accept MacBride’s view about the identity of states of affairs, then you would answer (A) by adopting the following principles:
(21) \(\forall F\Box(Fxy=[\lambda z\:Fzy]x)\)
(22) \(\forall F\Box([\lambda z\:Fzy]x=[\lambda z\:Fxz]y)\)
From these principles, it also follows, by the transitivity of identity, that \(\forall F\Box(Fxy=[\lambda z\:Fxz]y)\).
I’m not suggesting that this is the only or best answer to (A) because there may be contexts where one might wish to distinguish these states of affairs (see the next section). But the general point is clear. Some precise, axiomatized theories leave open certain questions of identity, and those questions can be answered by looking for principles rather than questioning whether the quantifiers of the theory range over the entities being axiomatized. ZFC has precise identity conditions for sets but leaves open the Continuum Hypothesis (“CH”), and yet we can still interpret the quantifiers in set theory as ranging over sets. CH can be formulated as the claim \(2^{\aleph_0}=\aleph_{1}\), and though CH and its negation are consistent with ZFC, we don’t give up the interpretation of the quantifiers of ZFC as ranging over sets just because CH is an open question; instead, we look for axioms that will help decide the issue. The same applies to the theory of relations.39
As it turns out, there is an alternative way to respond to the problems MacBride has raised. It may be of interest to some readers to consider what happens to his arguments if one instead asserts that \(Fxy\neq Fyx\) when \(F\) is symmetric, or accepts that \(Fxy\neq F^{*}yx\) when \(F\) is non-symmetric, or generally accepts that \(Fxy\neq[\lambda z\:Fxz]y\neq[\lambda z\:Fzy]x\). In the final section, then, I show that, with OT’s theory of states of affairs,
- one may alternatively assert these non-identities;
- one can account for the intuition that there is one part of the world that makes these distinct states true when they are true; and, consequently,
- one can disarm the worry about a “profusion” of states of affairs and clear the path for understanding the quantifiers of 2OL and 2OL\(^{=}\) as quantifying over relations.
7 Distinct States, One Situation
What is driving MacBride’s certainty that (a) \(Fxy = Fyx\) when \(F\) is symmetric, (b) \(Fxy = F^{*}yx\) when \(F\) is non-symmetric, and (c) \(Fxy=[\lambda z\:Fxz]y=[\lambda z\:Fzy]x\) generally? The argument is most clearly stated for the case of non-symmetric relations, where he argues that if non-symmetric relations have distinct converses, then we end up with “a profusion of states of affairs.” We laid out the argument in section 4, in the quote from (2014, 4), about there being one state of affairs (i.e., one cat-mat orientation) despite there being two kinds of undertakings (putting the cat on the mat and putting the mat under the cat). Since to undertake to do something is to attempt to bring about a state of affairs, one might then conclude that there are two distinct undertakings precisely because there are two distinct states of affairs to be brought about. But, as we saw earlier, MacBride and Fine both conclude that there is only one state and that to claim otherwise is counterintuitive. And we saw that the concern is that converse relations commit us to the principle that if \(F\) is non-symmetric, then the state of affairs \(Fxy\) is distinct from the state of affairs \(F^{*}yx\). We have formally represented the principle that concerns them as follows:
(13) \(\forall F\Box(\textit{Non-symmetric}(F)\to\forall x\forall y(Fxy\neq F^{*}yx))\)
But notice that the cases MacBride (and Fine) discuss involve necessarily non-symmetric relations, such as on, on top of, above, etc. So when we instantiate (13) to a necessarily non-symmetric relation, say \(R\), it would follow by the K axiom of modal logic that \(\Box \forall x\forall y(Rxy \neq R^{*}yx)\). But of course, we can also infer, from the fact that (\(\lambda\)C) is a universal, necessary truth, that \(\Box \forall x\forall y(Rxy \equiv R^{*}yx)\).40 So we can generalize to conclude that whenever we assert that \(R\) is a necessarily non-symmetric relation, (\(\lambda\)C) and (13) combine to ensure that \(Rxy\) and \(R^{*}yx\) are necessarily equivalent but distinct states of affairs, for any values of the variables \(x\) and \(y\).
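To see where the latter equivalence comes from, here is a minimal sketch, on the natural assumption that the converse of \(R\) can be written as the \(\lambda\)-expression \([\lambda xy\:Ryx]\):

- By (\(\lambda\)C), for any objects \(u\) and \(v\), \([\lambda xy\:Ryx]uv \equiv Rvu\); so \(R^{*}yx \equiv Rxy\), for any \(x\) and \(y\).
- Since (\(\lambda\)C) is a universal, necessary truth, necessitation and generalization yield \(\Box\forall x\forall y(Rxy \equiv R^{*}yx)\).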
The real problem is now laid bare: the hyperintensionality of states of affairs appears to undermine the intuition that in these cases, there is only one piece of the world (e.g., one cat-mat orientation) that accounts for the truth of the relational claims “\(Rab\)” and “\(R^{*}ba\)” when they are true. Note that this same problem arises for the other cases we’re considering. I take it MacBride would similarly be concerned about the following principle regarding symmetric relations:
(23) \(\forall F\Box(\textit{Symmetric}(F)\to\forall x\forall y(Fxy\neq Fyx))\)
And the concern extends generally to principles such as the following, which would govern every binary relation:
(24) \(\forall F\Box\forall x\forall y(Fxy\neq[\lambda z\:Fzy]x)\)
(25) \(\forall F\Box\forall x\forall y([\lambda z\:Fzy]x\neq[\lambda z\:Fxz]y)\)
In each case, a “profusion” of states of affairs will arise, for it can be shown (a) that (\(\lambda\)C) and (23) imply that for any necessarily symmetric relation \(R\), \(Rxy\) and \(Ryx\) are necessarily equivalent but distinct;41 and (b) that (\(\lambda\)C), (24), and (25) imply that for any relation \(R\), the states \(Rxy\), \([\lambda z\:Rxz]y\), and \([\lambda z\:Rzy]x\) are all pairwise necessarily equivalent but all pairwise distinct.42
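The equivalence half of these claims is just (\(\lambda\)C) at work; here, for instance, is a sketch for case (b): by (\(\lambda\)C), \([\lambda z\:Rzy]x \equiv Rxy\) and \([\lambda z\:Rxz]y \equiv Rxy\), and since (\(\lambda\)C) is a necessary, universal truth, these equivalences hold necessarily for all \(x\) and \(y\); so the three states are pairwise necessarily equivalent. (24) and (25) then assert that they are distinct (the full pairwise argument is in the note).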
So if we accept (13) and (23)–(25), can we account for the intuition that there is only one piece of the world in virtue of which the necessarily-equivalent-but-distinct states of affairs are true when they are true? To answer this question, we shall not invoke “decompositions” and “constituents,” for the identity conditions for states of affairs are given by (19). But we can address the intuition driving MacBride, Fine, and no doubt others, by appealing to the notion of a situation and defining the conditions under which a state of affairs \(p\) obtains in a situation \(s\) (i.e., the conditions under which \(s\) makes \(p\) true). Once these notions are defined, we can identify, for any state of affairs \(p\), a canonical situation \(s\) in which obtain all and only the states of affairs necessarily implied by \(p\). Then, the canonical situation in which obtain the states necessarily implied by \(Rab\) will be identical to the canonical situation in which obtain the states necessarily implied by \(R^{*}ba\); this will follow from the fact that \(Rab\) and \(R^{*}ba\) are necessarily equivalent. And similar results follow for states arising from necessarily symmetric relations and for the states \(Rab\), \([\lambda x\:Rxb]a\), and \([\lambda x\:Rax]b\). As I develop this response, I’ll use \(R\) as an arbitrary binary relation, which is necessarily non-symmetric, necessarily symmetric, or unspecified, as the case may be.
In OT (Zalta 1993, 410), situations are defined as abstract objects that encode only properties constructed out of states of affairs, i.e., encode only properties \(F\) of the form \([\lambda z\:p]\), where \(p\) ranges over states of affairs:
(26) \(\mathit{Situation}(x) \equiv_{\mathit{df}} A!x\mathbin{\&}\forall F(xF \to \exists p(F = [\lambda z\:p]))\)
A situation, thus defined, is not a mere mereological sum because encoding is a mode of predication; a situation is therefore characterized by the state-of-affairs properties of the form \([\lambda z\:p]\) that it encodes. In addition, a state of affairs \(p\) obtains in a situation \(s\) (“\(s\models p\)”) just in case \(s\) encodes being a \(z\) such that \(p\) (Zalta 1993, 411):
(27) \(s\models p \equiv_{\mathit{df}} s[\lambda z\:p]\)
In what follows, therefore, we sometimes extend the notion of encoding by saying that \(s\) encodes a state of affairs \(p\), or that \(s\) makes \(p\) true, whenever \(p\) obtains in \(s\). That is, when \(s\models p\), we can say either \(s\) encodes \([\lambda z\:p]\), or \(s\) encodes \(p\), or \(s\) makes \(p\) true.
Now consider some state of affairs, say \(Rab\). Given the foregoing definitions, OT implies that there exists a situation \(s\) such that a state of affairs \(p\) obtains in \(s\) if and only if \(p\) is necessarily implied by \(Rab\). To see this, note that the comprehension principle for abstract objects asserts that there is an abstract object that encodes exactly those properties \(F\) such that \(F\) is a property of the form \([\lambda z\:p]\) when \(p\) is some state of affairs necessarily implied by \(Rab\):
(28) \(\exists x(A!x\mathbin{\&}\forall F(xF \equiv \exists p(\Box (Rab\to p)\mathbin{\&}F = [\lambda z\:p])))\)
Let \(s_{1}\) be such an object, so that we know:
(29) \(A!s_{1}\mathbin{\&}\forall F(s_{1}F \equiv \exists p(\Box (Rab\to p)\mathbin{\&}F = [\lambda z\:p]))\)
Since \(s_{1}\) is abstract and every property it encodes is a property of the form \([\lambda z\:p]\), it follows that \(s_{1}\) is a situation by definition (26). Moreover, the theory implies that \(s_{1}\) is unique, i.e., that any abstract object that encodes all and only those states of affairs necessarily implied by \(Rab\) is identical to \(s_{1}\). Since situations are abstract objects, they are identical whenever they encode the same properties.43 And since situations, by (26), encode only properties \(F\) such that \(\exists p(F = [\lambda z\:p])\), they obey the principle: \(s\) and \(s'\) are identical just in case the same states of affairs obtain in \(s\) and \(s'\) (Zalta 1993, 412, Theorem 2). So there can’t be two distinct abstract objects that encode all and only the states of affairs necessarily implied by \(Rab\). Since (28) has a unique witness, we may treat \(s_{1}\) as a name of this witness (introduced by definition) and treat (29) as a fact about \(s_{1}\) implied by the definition.
Two modal facts about \(s_{1}\) become immediately relevant:
- A state of affairs obtains in \(s_{1}\) if and only if it is necessarily implied by \(Rab\), i.e.,
(30) \(\forall p(s_{1}\models p\,\equiv\,\Box(Rab\to p))\).
- \(s_{1}\) is modally closed in the following sense: for any states of affairs \(p\) and \(q\), if \(p\) obtains in \(s_{1}\) and \(p\) necessarily implies \(q\), then \(q\) obtains in \(s_{1}\), i.e.,
(31) \(\forall p\forall q((s_{1}\models p)\mathbin{\&}\Box(p\to q)\to(s_{1}\models q))\).
The proof of (30) is straightforward and, interestingly, relies on the object-theoretic definition of identity for states of affairs (19).44 Note that it immediately follows from (30) that \(Rab\) obtains in \(s_{1}\), since \(\Box (Rab \to Rab)\) is an instance of the modal principle \(\forall p\Box (p \to p)\). The proof of (31) relies on both the definition of identity for states of affairs (19) and the fact that necessary implication is transitive, i.e., the fact that:
- \(\forall p\forall q\forall r(\Box(p\to q)\mathbin{\&}\Box(q\to r)\to\Box(p\to r))\)
The proof of (31) is left to a footnote.45
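Since the proofs are short, here is a minimal sketch of both (the official versions are in the notes):

- For (30): by (27), \(s_{1}\models p\) just in case \(s_{1}[\lambda z\:p]\); by (29), \(s_{1}[\lambda z\:p]\) holds just in case \(\exists q(\Box(Rab\to q)\mathbin{\&}[\lambda z\:p]=[\lambda z\:q])\); and by (19), \([\lambda z\:p]=[\lambda z\:q]\) amounts to \(p=q\), so the existential condition holds just in case \(\Box(Rab\to p)\).
- For (31): suppose \(s_{1}\models p\) and \(\Box(p\to q)\). By (30), \(\Box(Rab\to p)\); by the transitivity of necessary implication, \(\Box(Rab\to q)\); so by (30) again, \(s_{1}\models q\).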
It is an immediate consequence of (30) that:
- if \(R\) is necessarily non-symmetric, then \(R^{*}ba\) obtains in \(s_{1}\), for it is necessarily equivalent to, and so necessarily implied by, \(Rab\);
- if \(R\) is necessarily symmetric, then \(Rba\) obtains in \(s_{1}\), for it is necessarily equivalent to, and so necessarily implied by, \(Rab\); and
- if \(R\) is any binary relation whatsoever, then \([\lambda x\:Rxb]a\) and \([\lambda x\:Rax]b\) both obtain in \(s_{1}\), since these are both necessarily equivalent to, and so necessarily implied by, \(Rab\).
Moreover, when \(R\) is necessarily non-symmetric, it follows that neither \(Rba\) nor \(R^{*}ab\) obtains in \(s_{1}\), since neither is necessarily implied by \(Rab\) in that case.
It is interesting to observe that in each of the above scenarios, any one of the necessarily equivalent states of affairs in question can be used to define the unique situation in which they all obtain. The resulting situations turn out to be identical, since it is a theorem of modal logic that necessarily equivalent states of affairs necessarily imply the same states of affairs:
(32) \(\forall p\forall q(\Box(p\equiv q)\to\forall r(\Box(p\to r )\equiv\Box(q\to r)))\)
To see why this fact helps us show that the resulting situations are all identical, consider the case of necessarily non-symmetric \(R\) and the situation that can be introduced in a manner similar to \(s_{1}\) but with \(R^{*}ba\) instead of \(Rab\):
\(\exists x(A!x\mathbin{\&}\forall F(xF\equiv\exists p(\Box(R^{*}ba\to p)\mathbin{\&}F=[\lambda z\:p])))\)
This is the (provably unique) situation that makes all and only the states of affairs necessarily implied by \(R^{*}ba\) true. Call this \(s_{2}\). Clearly, facts analogous to (30) and (31) hold for \(s_{2}\): a state of affairs \(p\) obtains in \(s_{2}\) if and only if \(R^{*}ba\) necessarily implies \(p\), and \(s_{2}\) is modally closed.
But OT implies that \(s_{1} = s_{2}\).46 Moreover, the reasoning in the proof applies to all the other canonical situations definable in terms of the necessarily equivalent states of affairs mentioned above: these canonical situations are pairwise identical. Thus, in each example, there is a single canonical situation in which all of the states of affairs mentioned in the example obtain.
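The reasoning behind \(s_{1}=s_{2}\) can be sketched as follows (the official proof is in the note):

- By (\(\lambda\)C), as noted in the previous section, \(\Box(Rab\equiv R^{*}ba)\).
- So, by (32), for every state of affairs \(r\), \(\Box(Rab\to r)\equiv\Box(R^{*}ba\to r)\).
- Hence, by (30) and its analogue for \(s_{2}\), exactly the same states of affairs obtain in \(s_{1}\) and \(s_{2}\).
- Since situations in which the same states of affairs obtain are identical (Zalta 1993, 412, Theorem 2), \(s_{1}=s_{2}\).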
Finally, to account for the intuition that the situation in which the necessarily equivalent states obtain is part of the actual world, we turn to the principles (theorems and definitions) governing part of, actual situations, and possible worlds. Since “\(x\) is a part of \(y\)” is defined as \(\forall F(xF\to yF)\), it follows that a situation \(s\) is part of a situation \(s'\) (\(s\unlhd s'\)) just in case every state of affairs that obtains in \(s\) also obtains in \(s'\) (Zalta 1993, 412, Theorem 4). Moreover, an actual situation is a situation \(s\) such that every state of affairs that obtains in \(s\) obtains simpliciter (Zalta 1993, 413). And a possible world is a situation \(s\) that might be such that it makes true all and only the truths (Zalta 1993, 414). Formally:
\(s\unlhd s'\equiv\forall p(s\models p\to s'\models p)\)
\(\mathit{Actual}(s)\equiv_{\mathit{df}}\forall p(s\models p\to p)\)
\(\mathit{PossibleWorld}(s)\equiv_{\mathit{df}}\Diamond\forall p(s\models p\equiv p)\)
OT then yields, as theorems (Zalta 1993, Theorems 18 and 19):
- There is a unique actual world, i.e.,
\(\exists !s(\mathit{PossibleWorld}(s)\mathbin{\&}\mathit{Actual}(s))\) (“\(w_{\alpha}\)”)
- Every actual situation is a part of the actual world, i.e.,
\(\forall s(\mathit{Actual}(s)\to s\unlhd w_{\alpha})\)
The proof of the first theorem rests on the fact that there is a unique situation that encodes all and only the states of affairs that obtain, i.e., there is a unique situation \(s\) such that all and only the states that obtain in \(s\) are states that obtain simpliciter.47
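To connect these theorems with the canonical situation \(s_{1}\) from above, here is a brief sketch, assuming that \(Rab\) obtains and that the background modal logic validates \(\Box\varphi\to\varphi\):

- Suppose \(s_{1}\models p\). By (30), \(\Box(Rab\to p)\), and so \(Rab\to p\); since \(Rab\) obtains, \(p\) obtains. Hence \(\mathit{Actual}(s_{1})\).
- By the second theorem, \(s_{1}\unlhd w_{\alpha}\), i.e., every state of affairs that obtains in \(s_{1}\) also obtains in the actual world.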
So the canonical situations that exist in each of the examples validate the following claims:
- When \(R\) is necessarily non-symmetric and \(Rab\) obtains, there is a unique situation that (a) encodes all and only the states of affairs necessarily implied by \(Rab\), (b) is actual, (c) is a part of the actual world, and (d) makes both \(Rab\) and \(R^{*}ba\) true.
- When \(R\) is necessarily symmetric and \(Rab\) obtains, there is a unique situation that (a) encodes all and only the states of affairs necessarily implied by \(Rab\), (b) is actual, (c) is a part of the actual world, and (d) makes both \(Rab\) and \(Rba\) true.
- When \(R\) is any binary relation and \(Rab\) obtains, there is a unique situation that (a) encodes all and only the states of affairs necessarily implied by \(Rab\), (b) is actual, (c) is a part of the actual world, and (d) makes \(Rab\), \([\lambda x\:Rxb]a\), and \([\lambda x\:Rax]b\) true.
This addresses the intuition that served as the obstacle to treating states of affairs as hyperintensional entities. It lays to rest the claim that we don’t understand the open formula “\(Fab\)” and the claim that we can’t interpret the quantifier in “\(\exists F(Fab)\)” as ranging over relations.
The foregoing analysis therefore preserves the conclusion Russell reached concerning non-symmetric relations when he said (1903, sec. 219), regarding the terms greater and less:
These two words have certainly each a meaning, even when no terms are mentioned as related by them. And they certainly have different meanings, and are certainly relations. Hence if we are to hold that “\(a\) is greater than \(b\)” and “\(b\) is less than \(a\)” are the same proposition, we shall have to maintain that both greater and less enter into each of these propositions, which seems obviously false.
One might reframe Russell’s point by noting that if non-synonymous relational expressions signify or denote different relations, then the simple statements we can make using those expressions signify different states of affairs. That principle has been preserved, without sacrificing any contrary intuitions.
8 Conclusion
I think relations and predication are so fundamental that they cannot be analyzed in more basic terms. They can only be axiomatized, and the most elegant formalism we have for doing so is the language of 2OL. The suggestion that the quantifiers of 2OL can’t range over relations doesn’t get any purchase against OT. The latter is a friendly extension of 2OL and provides 2OL with the additional expressive power needed to assert a precise theory of relations and states of affairs that includes plausible existence and identity conditions for these entities. OT therefore offers a natural formalism for intelligibly quantifying over relations and states of affairs and thus provides a deeper understanding of the open and quantified formulas of 2OL. So the suggestion that the quantifiers of 2OL can’t be interpreted as ranging over relations fails to engage with at least one theory that shows that they can and, without any heroic measures, do.