
Weyl, Gödel and the Grundlagenstreit

    Abstract

    The present paper provides a reconstruction of Weyl’s and Gödel’s interpretations of intuitionism, embedding the discussion in the context of the Grundlagenstreit and the origins of constructive logic. The two interpretations exhibit a striking affinity and deviate substantially from the mainstream view, usually referred to as the Brouwer-Heyting-Kolmogorov explanation of constructive proofs. Gödel’s objections to intuitionism are fairly well-known, but the connection with Weyl appears to have received little attention from commentators. The crux of the matter is the concept and role of ideal elements in mathematics. The paper explains how different interpretations of intuitionism deal with this problem.

    The mainstream view on constructive semantics, as codified by the Brouwer-Heyting-Kolmogorov (BHK) interpretation of logical constants, has not always gone unchallenged. Gödel—and Weyl before him—had quite different opinions on the way constructivism is to be understood. In particular, instead of laying the blame on the law of excluded middle, they focused on the logic of quantification and posed heavy restrictions on the interaction between quantifiers and propositional connectives, particularly negation. Both authors relied heavily on choice functions. In the following, we outline Weyl’s and Gödel’s main ideas, comparing them with each other and with mainstream constructive logics. We also sketch some important connections with Hilbert, as well as some puzzles around intuitionistic reductio ad absurdum.

    1 Weyl on Constructivity

    In Weyl (1918), Hermann Weyl had already published a book-length attempt of his own to build analysis on a predicative basis, but in Weyl (1921), he modified his former views and adhered to Brouwer’s intuitionism, although with some substantial differences. What is of interest to us in this important paper is his theory of quantification, which diverges from Brouwer’s views and from what has now become mainstream in constructive logic. We shall also refer to Weyl (2009) (whose original version was published in 1928) to reconstruct Weyl’s approach. His theory could be broken down into the following principles:1

    Actuality. An existential statement can only be asserted when an instance has been found.

    Abstraction. Existential quantifiers build no real statements but only statement abstracts (Urteilsabstrakte), or abstracts for short. Universal quantifiers build statement instructions (Urteilsanweisungen).

    Constrained Inference. No inferences can be drawn from existential abstracts. No logical inferences can lead to a statement instruction.

    Negation. Negation cannot be applied to statement abstracts or instructions.

    Quantifier Nesting. An existential quantifier cannot occur in the scope of a universal quantifier.

    Propositional Innocence. Classical propositional logic is not to be blamed for non-constructivity, for the latter arises from the logic of quantification over infinite domains.

    The basis of Weyl’s claims is undoubtedly the principle of actuality: constructively, so the principle goes, I cannot assert an existence statement based on the mere possibility of having a construction that I do not actually possess:2

    Only the successful construction can give us the justification for this; there is no question of possibility. (Weyl 1921, 55, original emphasis)

    From this, Weyl concludes that existential statements are no real statements because no state of affairs (Sachverhalt), hence no independent meaning, is attached to them without the proof construction which has already taken place. Weyl formulates this in rather colourful tones:

    One must not consider such things from the outside, but must gather oneself inwardly and struggle for the “vision”, the evidence. At last I found for myself the liberating word. An existential proposition, say “there is an even number”, is not a judgment in the proper sense at all, one that asserts a state of affairs. Existential states of affairs are an empty invention of the logicians. “\(2\) is an even number”: that is a real judgment, giving expression to a state of affairs; “there is an even number” is only a statement abstract (Urteilsabstrakt) obtained from this judgment. (Weyl 1921, 54, original emphasis)

    Universally quantified statements, on the other hand, are general instructions on how to build real statements. Given the general statement \(m+1=1+m\), it can be transformed by a uniform principle into a special case, e.g., \(9+1=1+9\):

    A general statement, too, does not point to a state of affairs subsisting in itself; it is not meant as a logical product of infinitely many individual statements, but hypothetically: applied to a single, definite number at hand, it yields a definite judgment. (Weyl 2009, 72–73)

    Therefore one could think of the application of statement instructions as a sort of conversion in lambda calculus:

    \[(\forall xA(x))(a)\rhd A(a).\]

    Since the instruction has to hold for all objects in the domain, its application does not require us to exhibit a previously constructed object \(a\); we can simply use any denoting name. In Gentzenian terms, this justifies the elimination rule

    \[\forall xA(x)\Rightarrow A(a).\]

    The restriction on existential abstracts, on the other hand, clearly justifies the introduction rule:

    \[A(a)\Rightarrow\exists xA(x).\]

    In fact, these are the only quantifier rules that Weyl gives in Weyl (2009, 32). Such is, then, the principle of constrained inference. Hence the questions arise:

    1. How do we draw consequences from an existential abstract?

    2. How do we establish a universal statement instruction?

    Weyl’s answer to the first question is simply that we do not. We cannot draw conclusions from a pseudo-statement. Whenever we want to infer a conclusion from \(\exists xA(x)\), we have to resort to the statement \(A(a)\) that we have established, and draw our inferences from that statement (cf. Weyl 2009, 72). As to the second question, Weyl seems to be saying that general sentences are always arrived at through domain-specific axioms, never by logic alone. In the case of natural numbers, for example, the method of attaining universal sentences will be mathematical induction, together with the generality provided by definitions (Weyl 2009, 72). In spite of the duality between existential and universal quantifier, which is reflected in the above rules, Weyl does not assign the same status to abstracts and instructions. Instructions contain implicitly infinitely many real statements, hence they are different from abstracts, which are pseudo-statements (cf. Weyl 1921, 56).
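    To make this asymmetry concrete, the following minimal Python sketch (my own illustration, not Weyl’s notation; all names are hypothetical) models a statement instruction as a function that, applied to any term, yields a proper statement, and an existential abstract as something that can only be introduced from an actually constructed witness and from which nothing further can be extracted.

```python
# A toy rendering of Weyl's asymmetry between the two quantifiers.
# A "statement instruction" (universal) is a function from terms to proper
# statements; applying it is the conversion (forall x A(x))(a) |> A(a).
def instruction(m):
    """Statement instruction for 'm + 1 = 1 + m' over the natural numbers."""
    return m + 1 == 1 + m            # a proper, decidable statement about m

print(instruction(9))                # the special case 9 + 1 = 1 + 9 -> True

# An "existential abstract" can only be introduced from an actual witness
# (principle of actuality) and exposes no elimination: nothing can be read
# back out of it, mirroring Weyl's ban on inferences from abstracts.
class ExistentialAbstract:
    def __init__(self, witness, statement):
        assert statement(witness)    # introduction: from A(a) to "there is an x with A(x)"
        # the witness is deliberately not stored: the abstract is "idle"

even_exists = ExistentialAbstract(2, lambda n: n % 2 == 0)   # "there is an even number"
```

    The point of the sketch is only to display the asymmetry Weyl insists on: the instruction can be applied at will, whereas the abstract merely records that a witness was once found and supports no further inference.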

    Based on the previous principles, Weyl can now limit the applicability of negation to quantified sentences. An abstract cannot be negated because it is a pseudo-statement: just like one cannot draw inferences from it, one cannot negate it either. In particular, the forbidden \(\lnot\exists xA(x)\) can be written legitimately as \(\forall x\lnot A(x)\). For Weyl, the existential quantifier is mathematically and logically idle. The extension of the same principle to the universal quantifier is prima facie puzzling. Weyl claims:

    The negation of a general statement about numbers would be an existential proposition; since the latter says nothing, general judgments are not capable of negation. (Weyl 2009, 72)

    Here he seems to be appealing to the law \[\lnot\forall xA(x)\Rightarrow \exists x\lnot A(x),\] which is constructively invalid. It is probably the case that Weyl has classical negation in mind, and the argument is meant to show that this negation does not apply (see below). Weyl concludes that the law of excluded middle fails for quantified sentences, since it cannot even be formulated (cf. Weyl 1921, 56).

    The next principle that we need to examine is that of quantifier nesting (cf. Weyl 1921, 57). According to Weyl, if we have proved \(\forall xA(x,a)\), then we can abstract legitimately and obtain \(\exists y\forall xA(x,y)\). If, on the other hand, an instruction is to be a rule that can be applied to all objects of the domain to yield a real or proper statement: \[(\forall xA(x))(a)\rhd A(a),\] then it is clear that we cannot have the situation in which the result is a pseudo-statement: \[(\forall x\exists yA(x, y))(a)\rhd\exists yA(a,y),\] whence the scope restriction: an existential quantifier cannot occur within the scope of a universal quantifier. Weyl’s way out consists in interpreting \(\forall x\exists yA(x,y)\) as an abstract of an instruction rather than the opposite, i.e., by using what would later come to be known as Skolem functions \(f\) such that \(\exists f\forall xA(x,f(x))\). This interpretation allows a legitimate conversion: \[(\forall xA(x,f(x)))(a)\rhd A(a,f(a)).\] The principle of nesting reflects again the asymmetry between \(\forall\) and \(\exists\) sentences.
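    Weyl’s reading of \(\forall x\exists yA(x,y)\) as \(\exists f\forall xA(x,f(x))\) can be illustrated by a small sketch (again my own, under the assumption that \(A\) is a decidable relation on the natural numbers and that a witness exists for every argument): the choice function is simply obtained by searching for the least witness.

```python
from itertools import count

def skolem(A):
    """Return a choice function f with A(x, f(x)) for every x. Assumes that A
    is decidable and that some witness y exists for each x; otherwise the
    search below does not terminate."""
    return lambda x: next(y for y in count() if A(x, y))

# Example: for every x there is a y with x < y; the extracted f returns x + 1.
f = skolem(lambda x, y: x < y)
print(f(7))   # -> 8
```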

    Finally, it is implicit in Weyl’s analysis that the failure of constructivity is due to quantification over infinite totalities rather than to propositional logic. It is only when classical negation is applied to quantifiers that constructivity is violated; classical negation itself, confined to propositional logic, is harmless. We call this the principle of propositional innocence. That this is actually at work in Weyl’s conception can be seen from the fact that Weyl (2009) defines the propositional connectives by means of truth tables (cf. Weyl 2009, 30). It is important to stress, however, that while all other principles are clearly stated by Weyl, the principle of propositional innocence is my own extrapolation and therefore remains hypothetical.3

    Summarizing, for Weyl the existential quantifier is mathematically and logically idle, whereas the universal quantifier is mathematically not idle (since statement instructions are proved by means of mathematical definitions and axioms), and logically idle to a lesser degree (as statement instructions imply infinitely many proper statements). But in both cases, the crucial quantifier rules that are subject to parameter restrictions have no place in deduction. In particular, quantified sentences cannot be meaningfully negated. Weyl is silent as to the question whether other logical connectives can be applied to quantified sentences, presumably because their impact is not as crucial.

    2 Hilbert’s Programme

    Weyl’s paper had a profound influence on Hilbert and the formulation of his programme. Hilbert held Weyl in high esteem and was deeply upset by his allegiance to intuitionism. Accordingly, he took Weyl’s challenge seriously, as attested by Hilbert (1922), where his words echo Weyl’s arguments closely:

    In the case of infinitely many things, the negation of the general judgment \(\forall xA(x)\) has at first no precise content at all, any more than the negation of the existential judgment \(\exists xA(x)\). To be sure, these negations can occasionally acquire a sense, namely when the assertion \(\forall xA(x)\) is refuted by a counterexample, or when a contradiction is derived from the assumption \(\forall xA(x)\) or \(\exists xA(x)\), respectively. These cases, however, are not contradictory opposites; for if \(A(x)\) does not hold for all \(x\), we do not yet know that an object with the property not-\(A\) is actually at hand; nor may we simply say: either \(\forall xA(x)\) (or \(\exists xA(x)\), respectively) holds, or these assertions really contain a contradiction. For finite totalities, “there is” and “there is at hand” are equivalent in meaning; for infinite totalities only the latter concept is immediately clear. (Hilbert 1922, 155–156)

    This passage shows that Hilbert had taken up many of Weyl’s views, and that the negation Hilbert had in mind is classical negation. The distinction between different kinds of negation is brought to clarity in the later treatise Hilbert and Bernays (1934, 33–34), where the example is given of an elementary arithmetic statement, say \(f(m)=n\), whose contradictory negation is another statement to the effect that \(f(m)=k\), with \(n\neq k\). Here we have two claims about the result of a given procedure. They contradict each other exactly in the sense that they deviate only in the claimed result, while coinciding in the underlying procedure. Now consider an existential statement. If we say that there is no \(n\) such that \(A(n)\), we cannot mean it in the mild sense (in unscharfem Sinne) that such an \(n\) is not available, but rather in the sense that no \(n\) can have the property \(A\). Hilbert and Bernays call this a sharpened negation (verschärfte Negation). From a finitary standpoint, the mild or unsharp negation is the exact contradictory of an existential statement, because it lies at the same epistemological level as the negated statement (available/not available; exists/does not exist), whereas the sharpened negation works at an entirely different level, that of general laws:

    The existential statement and its sharpened negation are not, like an elementary statement and its negation, statements about the only two possible outcomes of one and the same decision; rather, they correspond to two separate possibilities of knowledge, namely on the one hand the finding of a numeral with a given property, on the other the insight into a general law about numerals. That one of these two possibilities must present itself is not logically self-evident. (Hilbert and Bernays 1934, 33, original emphasis)

    Thus the lack of a contradictory negation causes the law of excluded middle to fail. The same holds true of general statements: we cannot assume, from a finitistic point of view, that either \(A(x)\) is true of all \(x\) or an \(x\) can be found that is not \(A\) (cf. Hilbert and Bernays 1934, 34).

    There is an underlying agreement with Weyl on the problem of applying negation to quantified statements, which ultimately does not translate, however, into Weyl’s prohibition. In fact, the doctrine of quantifiers does not obey Weyl’s principle of abstraction: existential sentences do not express pseudo-statements, but only statements with partial information (Partialurteile). On the other hand, it is clear that Hilbert and Bernays fully subscribed to the principle of propositional innocence, for the source of infinitary reasoning was supposed to be quantification.

    As in Weyl, Hilbert’s method of dealing with quantifiers is based on choice functions. His initial approach made use of the \(\tau\) term-forming operator, such that \(\tau_{x}A(x)\) is to be interpreted as “the least likely to be \(A\)”.4 The corresponding axiom is \[A(\tau_{x}A(x))\Rightarrow A(x).\] He was soon to change this by adopting a dual operator \(\varepsilon\), with \(\varepsilon_{x}A(x)\) to be read as “the most likely to be \(A\)”, ruled by the axiom \[A(x)\Rightarrow A(\varepsilon_{x}A(x)).\] \(\varepsilon\) behaves as a choice function, as the following example illustrates: \[A(y_{1},\ldots,y_{n},x)\Rightarrow A(y_{1},\ldots,y_{n},\varepsilon_{x}A(y_{1},\ldots,y_{n},x)).\] With the \(\varepsilon\) operator, defining the quantifiers becomes easy: \[\exists xA(x)\Leftrightarrow A(\varepsilon_{x}A(x))\] \[\forall xA(x)\Leftrightarrow A(\varepsilon_{x}\lnot A(x)).\] The latter amounts to \[\forall xA(x)\Leftrightarrow A(\tau_{x}A(x)),\] since \[\varepsilon_{x}\lnot A(x)=\tau_{x}A(x),\] (the least likely to be \(A\) is the most likely to be \(\lnot A\)), from which follows \[A(\varepsilon_{x}\lnot A(x))\Leftrightarrow A(\tau_{x}A(x)).\] From these definitions, the full rules of quantification can be deduced (cf. Hilbert and Bernays 1939).
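    Over a finite domain the behaviour of \(\varepsilon\) as a choice function can be sketched as follows (my own illustration, not Hilbert’s calculus): \(\varepsilon_{x}A(x)\) returns a witness if there is one and an arbitrary element otherwise, and the two quantifier definitions then reduce to a finite search.

```python
def epsilon(A, domain):
    """Hilbert-style choice: an element 'most likely' to satisfy A, i.e. a
    witness if there is one, otherwise an arbitrary element of the (nonempty,
    indexable) domain."""
    for x in domain:
        if A(x):
            return x
    return domain[0]                 # no witness: any element will do

def exists(A, domain):               # exists x A(x)   iff   A(eps_x A(x))
    return A(epsilon(A, domain))

def forall(A, domain):               # forall x A(x)   iff   A(eps_x (not A(x)))
    return A(epsilon(lambda x: not A(x), domain))

D = range(10)
print(exists(lambda n: n % 2 == 0, D))   # True
print(forall(lambda n: n < 10, D))       # True
print(forall(lambda n: n % 2 == 0, D))   # False
```

    On an infinite domain no such exhaustive search is available, which is exactly why the classical \(\varepsilon\) axioms go beyond finitary evidence.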

    In summary, Hilbert and Bernays concluded that quantification, when applied to infinite domains, is devoid of clear meaning, but instead of giving up classical laws, they set out to prove that infinitary (or ideal) methods can be justified indirectly, by proving the consistency of the system. The sense in which a consistency proof solves the problem is explained as follows. Given the lack of semantic transparency of infinitary mathematics, it is theoretically possible that some infinitary results could be shown to be invalid by finitary methods, in analogy to the discovery of the set-theoretic antinomies. But if a consistency proof has been established, such a contradiction between different methods can never take place (cf. Hilbert and Bernays 1934, 42). Thus Hilbert ultimately rejected all of Weyl’s principles apart from the principle of propositional innocence.

    3 Negation and Quantification

    We have seen that the net effect of Weyl’s approach was to adopt classical propositional logic and curtail the logic of quantification. Such a diagnosis of constructivity was very much at variance with Brouwer’s focus on negation in general, not limited to quantifiers.5 In keeping with Brouwer’s conception, the essence of Heyting’s formulation of intuitionistic logic was the rejection of the principle of propositional innocence, which in one stroke made it possible to lift all the other prohibitions imposed by Weyl.6 With the notion of falsity as reductio ad absurdum, negation could be applied to quantified sentences as well, and the excluded middle failed for all sentences. In formalistic terms, this amounts to identifying the source of non-constructive methods with propositional logic, whereas the logic of quantification is now thought to be innocent and can coincide with that of classical logic. As we shall see shortly, this is not, strictly speaking, the case, for even the quantifiers are interpreted differently.

    The problem now was the exact meaning of negation. Just by refusing classical negation and defining \(\lnot A\coloneqq A\Rightarrow\bot\), we still do not have a clear answer as to what to do with absurdity \(\bot\). The principle on which everyone agreed was constructive reductio ad absurdum: \((A\Rightarrow B)\Rightarrow((A\Rightarrow\lnot B)\Rightarrow\lnot A)\); the main point of controversy lay in the ex falso sequitur quodlibet law: \(\bot\Rightarrow B\). It was implicit in Brouwer’s conception that, in order to justify a hypothetical judgement constructively, one has to provide at least a method that transforms any proof of the antecedent into a proof of the consequent, and in the case of ex falso it was not prima facie clear what such a method could be. Before Heyting, Kolmogorov (1925) had defined a version of Brouwer’s logic without the ex falso law, since he thought that this rule has no “intuitive foundation” (Kolmogorov 1925, 419). Kolmogorov also provided the first double-negation translation of classical logic, but his work did not achieve wide circulation. A logical system with the positive fragment and constructive reductio, but without ex falso, is now called minimal logic, after Johansson (1937). Although the semantic justification of ex falso was at first unclear, Heyting’s acceptance of this principle, apparently with the tacit agreement of Brouwer, ultimately became influential. At Heyting’s suggestion, Glivenko (1929) had included ex falso in his axiom system, and the final detailed version of intuitionistic logic, also featuring ex falso, was published as Heyting (1930). It is telling that all these works focused on propositional logic.

    Kolmogorov (1932) provided a semantic justification of ex falso in terms of problems and their solutions. In general, for Kolmogorov a proposition \(A\) represents a problem, and a proof of \(A\) represents its solution. What, then, is the status of a problem that we know to be unsolvable? Kolmogorov considers the following problem: under the assumption that the number \(\pi\) is rational, prove that the number \(e\), too, can be expressed as a rational number. He remarks that

    The presupposition of the […] problem [is] impossible, and consequently the problem itself is devoid of content. In what follows, the proof that a problem is devoid of content will always be regarded as its solution. (Kolmogorov 1932, 59)

    In the case of ex falso, represented as \(\lnot A\Rightarrow(A\Rightarrow B)\), we have as a simple consequence that if we have proved the premiss, then the resulting implication is devoid of content and therefore solved:

    As soon as \(\lnot A\) is solved, the solution of \(A\) [is] impossible and the problem \(A\Rightarrow B\) is devoid of content. (Kolmogorov 1932, 62)

    Thus we do not need a specific construction to prove \(B\) from \(\bot\), for the simple reason that the premiss can never be solved. This view is now the established interpretation (cf. Troelstra and van Dalen 1988, vol. I, 10). Observe that ex falso is similar to Hilbert’s \(\tau\) operator: if what is most unlikely to be true is actually true, then anything else is true.

    The somewhat non-constructive flavour of ex falso can be gleaned from the intuitionistic law \((\lnot A\vee B)\Rightarrow(A\Rightarrow B)\), whose proof is based on ex falso and a fortiori, as can be seen from the derivation in natural deduction:

    \[\begin{prooftree} \AxiomC{$[\lnot A\vee B]$} \AxiomC{[$\lnot A$]} \AxiomC{[$A$]} \BinaryInfC{$\bot$} \UnaryInfC{$B$} \UnaryInfC{$A\Rightarrow B$} \AxiomC{[$B$]} \UnaryInfC{$A\Rightarrow B$} \TrinaryInfC{$A\Rightarrow B$} \UnaryInfC{$(\lnot A\vee B)\Rightarrow(A\Rightarrow B)$} \end{prooftree}\]

    The law has a non-constructive flavour because it brings the meaning of intuitionistic implication dangerously close to that of classical logic: a sufficient condition for an implication is that either the antecedent is false or the consequent is true. Heyting, Kolmogorov, and the standard theory after them all assume that \(\bot\) can never be proved, and therefore that no specific construction is needed to obtain an arbitrary proposition from it. Johansson, on the other hand, points out that we may not know whether a proof of \(\bot\) can be obtained: if our axioms are inconsistent, we could in fact obtain one (cf. Johansson 1937, 128), and we would then have the burden of providing a specific construction that proves any \(B\) from it. Heyting was aware of the somewhat problematic status of ex falso. In later years, referring to this law, he stressed the open-ended character of intuitionistic mathematics:

    It must be remembered that no formal system can be proved to represent adequately an intuitionistic theory. There always remains a residue of ambiguity in the interpretation of the signs, and it can never be proved with mathematical rigour that the system of axioms really embraces every valid method of proof. (Heyting 1956, 102)

    However, if the previous reasoning is correct, things are much worse: it would mean that as long as a contradiction cannot be proved, we are safe with the standard justification of ex falso (i.e., no specific construction is needed), whereas if there is a proof of a contradiction, then we are in a situation in which we have to exhibit a specific transformation that from a contradiction proves any proposition, which we may not be able to produce. It is no escape to say that we do have the inference rule, because that is precisely what we have to justify semantically. Hence it appears that, if a contradiction is produced, ex falso may well cease to be valid. We might call this the paradox of absurdity. Having ex falso yields an elegant mathematical symmetry between truth and falsity, since both \(\bot\Rightarrow A\) and \(A\Rightarrow\top\) are valid for any \(A\). But there is a price to pay in terms of conceptual justification. This appears to be an important open question for constructive semantics, but we will not discuss it further in this paper.

    The role of quantification in constructive logic remains to be considered. We have seen that the original formalization of intuitionistic logic turned upon propositional operators, in order to work out the rules for negation. What is then the role of quantification? It turns out that the crucial issue is with the existential quantifier. We have seen how Weyl rejected an elimination rule for the existential quantifier, because that would break the principle of actuality of proofs. In Hilbert’s terms, it would introduce ideal elements. In keeping with this idea, in a classical system, one can add a rule of existential instantiation based on Hilbert’s \(\varepsilon\) operator:

    \[\begin{prooftree} \AxiomC{$\exists xA(x)$} \UnaryInfC{$A(\varepsilon_{x}A(x))$.} \end{prooftree}\]

    Gentzen (1935), on the other hand, formulated the elimination rule for \(\exists\) as

    \[\begin{prooftree} \alwaysNoLine \AxiomC{$\Pi_{1}$} \UnaryInfC{$\exists xA(x)$} \AxiomC{[$A(x/a)$]} \UnaryInfC{$\Pi_{2}$} \alwaysNoLine \UnaryInfC{$C$} \alwaysSingleLine \BinaryInfC{$C$} \end{prooftree}\]

    (where \(a\) does not occur in \(C\)). Gentzen’s rule is intuitionistically valid, whereas existential instantiation is not, as the following example demonstrates:

    \[\begin{prooftree} \AxiomC{$[A]$} \AxiomC{$A\Rightarrow\exists xB(x)$} \BinaryInfC{$\exists xB(x)$} \UnaryInfC{$B(\varepsilon_{x}B(x))$} \UnaryInfC{$A\Rightarrow B(\varepsilon_{x}B(x))$} \UnaryInfC{$\exists x(A\Rightarrow B(x))$} \end{prooftree}\]

    Since the only laws that we are using are those for implication and the existential quantifier, if one accepts the constructive meaning of the implication rules, it follows that the problem is due to existential instantiation. This shows that Weyl’s misgivings about drawing consequences from existentially quantified statements were not unfounded. It is now well known that if one adds an extensionality condition for the \(\varepsilon\) operator: \[\forall x(A(x)\Leftrightarrow B(x))\Rightarrow\varepsilon_{x}A(x)=\varepsilon_{x}B(x),\] even the law of excluded middle can be derived (this was first proved by Diaconescu in the context of topos theory). Thus Weyl’s use of choice functions defined on objects in a suitable domain, as in \(\exists f\forall xA(x,f(x))\), cannot be extended to choice functions defined on the property \(A(x)\) itself, as in \(\varepsilon_{x}A(x)\), without overstepping the bounds of constructivism, as Weyl rightly saw.

    The above example also shows that the quantifier rules are crucial in characterising constructivism.7

    4 Gödel’s Functional Interpretation

    There are striking similarities between Weyl’s analysis of constructivity and Gödel’s critique of intuitionistic logic in the 1930s. Whether Gödel was acquainted with Weyl’s papers, however, is not clear, at least to this author. Whereas Weyl wrote on these topics before the formulation of the BHK interpretation, Gödel had both it and the formalization of intuitionistic logic before his eyes. Here we focus on Gödel’s early discussions rather than on the published version of his system (Gödel 1958). In Gödel (1933, 51–53) and Gödel (1938, 90), he states the following principles.

    1. Finite Generation. The universal quantifier can only be applied to totalities whose elements are finitely generated (e.g., natural or rational numbers).

    2. Existential Dispensability. Existential statements are mere abbreviations of statements including a witness of the proved property; otherwise they are dispensable. Therefore the existential quantifier should not be a primitive symbol.

    3. Quantifier Scope. Negation must not be applied to universal statements, because that would require a dependency on existential statements, which are dispensable. The only admissible meaning of universal negation is the availability of a counterexample. Gödel (1938, 90) extends the negation restriction to all propositional connectives.

    4. Constrained Inference. Existential statements are only governed by the introduction rule. Universal statements cannot be proved by logical means.8

    5. Decidability. Only decidable relations and computable functions are allowed constructively.

    The principles of existential dispensability, quantifier scope and constrained inference are obviously very close to Weyl’s analysis. There is no counterpart of the principle of finite generation in Weyl, whereas the principle of decidability restricts the principle of propositional innocence: we can only apply classical inferences because we are using decidable predicates. From Gödel’s principle of quantifier scope it also follows that the definition of negation as reductio ad absurdum is not generally admissible, because it allows us to deny a universal statement even in the absence of a counterexample. Furthermore, from the principle of finite generation it follows that the BHK definition of implication, and hence also of negation, is not admissible because it quantifies over all proofs of the antecedent, and constructive proofs are not a well-defined domain of quantification in the sense of being finitely generated (cf. Gödel 1933, 52–53).

    The remark on reductio ad absurdum is deeply ingrained in Gödel’s analysis of intuitionism. Gödel (1932), extending a result of Glivenko, proved that classical arithmetic can be embedded, through a suitable translation, into Heyting arithmetic, thereby showing that intuitionistic arithmetic, contrary to expectations, is more general than classical arithmetic. Gödel explains this result thus:

    The reason for this lies in the fact that the intuitionistic prohibition against negating universal propositions and asserting pure existential propositions is cancelled in its effect by the circumstance that the predicate of absurdity can be applied to universal propositions, which leads formally to the same propositions as are asserted in classical mathematics. Intuitionism appears to bring genuine restrictions only for analysis and set theory, and these are due not to the rejection of tertium non datur but to the rejection of impredicative concept formations. (Gödel 1932, 294)

    The same argument is reiterated in Gödel (1941, 190). Gödel (1941) builds upon the above principles to provide a positive account of the strengthened constructivism that he envisioned and that was not satisfied by Heyting’s theory. The official formulation of this account was to be Gödel (1958), where Gödel dropped the foundational discussion of intuitionism and focused on the proof of relative consistency.

    Gödel’s approach consists in defining a system \(\mathbf{T}\) extending recursive number theory by admitting computable functionals of finite type, i.e., typed functionals such as \[F^{\sigma\to\tau}(f^{\sigma})=g^{\tau}.\]
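    As a rough illustration of what such typed functionals look like (my own sketch; Gödel’s \(\mathbf{T}\) is of course a formal term calculus with primitive recursion at all finite types, not a programming language), the recursor takes a base value and a step functional and yields a functional of the corresponding type, and functionals may themselves take functions as arguments:

```python
def rec(base, step):
    """Primitive recursion at an arbitrary type: rec(base, step)(n) applies
    step n times starting from base; base and the intermediate values may
    themselves be functions, which is what 'finite type' allows."""
    def f(n):
        acc = base
        for i in range(n):
            acc = step(i, acc)
        return acc
    return f

add = lambda m: rec(m, lambda _, acc: acc + 1)    # add(m)(n) = m + n
print(add(3)(4))                                  # -> 7

# A functional F of type (N -> N) -> (N -> N): apply its argument twice.
twice = lambda f: lambda x: f(f(x))
print(twice(add(3))(1))                           # -> 7
```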

    On the logical side, the first consequence of the principles of existential dispensability and quantifier scope is that sentences can only be in prenex form, with universal quantifiers alone (cf. Gödel 1941, 192). Existential quantifiers are accepted as abbreviations, governed only by the introduction rule and therefore eliminable:

    An existential assertion can only appear as the last formula of a proof and the last but one formula of the proof must give the corresponding construction. (Gödel 1941, 193)

    Gödel remarks that this is not an explicit definition of the existential quantifier, but

    a definition of use, which states how propositions containing the new symbol are to be handled in proofs, i.e., from which premises they can be inferred, namely these [premises of the introduction rule], and what can be inferred from them, namely nothing. Now such an implicit definition must satisfy the requirement of eliminability. To be more exact: If a proposition not containing the new symbol can be proved with the help of the new symbol, it must be demonstrable without the help of the new symbol (otherwise we would not have to do with a definition but with a new axiom). But this requirement is trivially satisfied by this manner of introducing the existential quantifier. (Gödel 1941, 193)

    Apart from the total overlap with Weyl’s conception of the existential quantifier, we can observe that Gödel is stressing an important point here, namely that a definition of use based on the introduction rule alone makes the defined operator trivially eliminable.

    From the previous discussion it follows that the general form of a statement is \(\exists\mathbf{x}\forall\mathbf{y}A(\mathbf{x},\mathbf{y})\), with \(A(\mathbf{x},\mathbf{y})\) quantifier-free and where \(\mathbf{x}\) and \(\mathbf{y}\) are sequences of individual or functional variables. The situation is strongly reminiscent of Weyl’s principle of quantifier nesting. As for Weyl, the strategy for obtaining sentences in the desired form consists in the use of choice functions. For example, \(\forall zA(z)\), with \(A(z)\equiv \exists x\forall yB(x,y,z)\), is interpreted in \(\mathbf{T}\) as \(\exists f\forall z\forall yB(f(z),y,z)\). Implication of \(\mathbf{T}\)-formulas \(\exists x\forall yA(x,y)\) and \(\exists u\forall vB(u,v)\) is not a formula of \(\mathbf{T}\):

    \[\exists x\forall yA(x,y)\Rightarrow\exists u\forall vB(u,v),\qquad{(1)}\]

    but, according to Gödel’s analysis, it can be so transformed by first observing that, given an \(x\) as in the antecedent, a \(u\) as in the consequent can be constructed, and such a correlation should be given by a computable function \(f\):

    \[\exists f\forall x(\forall yA(x,y)\Rightarrow\forall vB(f(x),v)).\qquad{(2)}\]

    Furthermore, the implication within brackets can be interpreted as saying that a counterexample to the consequent yields a counterexample to the antecedent, which, by functional dependence, becomes:

    \[\exists g\forall v(\lnot B(f(x),v)\Rightarrow\lnot A(x,g(v)))\qquad{(3)}\]

    Now the internal implication is decidable because it is quantifier-free and all relations are decidable, hence we can apply the classical contrapositive to obtain

    \[\exists g\forall v(A(x,g(v))\Rightarrow B(f(x),v))\qquad{(4)}\]

    and by reintroducing the external quantifiers

    \[\exists f\forall x\exists g\forall v(A(x,g(v))\Rightarrow B(f(x),v))\qquad{(5)}\]

    we only need to apply the axiom of choice again to obtain the final form:

    \[\exists f\exists g\forall x\forall v(A(x,g(x,v))\Rightarrow B(f(x),v)).\qquad{(6)}\]
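    To make the witness flow encoded in (6) concrete, here is a toy sketch (my own illustration, with hypothetical predicates, read over the positive integers): \(f\) turns a witness for the antecedent into a witness for the consequent, \(g\) turns a purported counterexample to the consequent into a counterexample to the antecedent, and constructive correctness amounts to the decidable matrix \(A(x,g(x,v))\Rightarrow B(f(x),v)\) holding for all values.

```python
# Toy instance of (6): "if some x divides every y, then some u is a
# multiplicative identity", over the positive integers.
A = lambda x, y: y % x == 0      # x divides y
B = lambda u, v: v * u == v      # u is an identity for v

f = lambda x: x                  # witness for the consequent from one for the antecedent
g = lambda x, v: 1               # counterexample to the antecedent from one to the consequent

def matrix(x, v):
    """The decidable, quantifier-free matrix  A(x, g(x, v)) => B(f(x), v)."""
    return (not A(x, g(x, v))) or B(f(x), v)

# Spot-check on sample values (the matrix holds for all positive x and v).
print(all(matrix(x, v) for x in range(1, 20) for v in range(1, 20)))   # True
```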

    This rather laborious process of application of choice functions, when extended to all operators,9 allows Gödel to prove his fundamental result: if a sentence is provable in Heyting arithmetic, then it is provable in \(\mathbf{T}\). With this result, Gödel is able to conclude that the sense in which intuitionistic logic, as applied to number theory, is constructive, consists in the fact that any provable existential statement of intuitionistic number theory is translatable into a provable existential statement of \(\mathbf{T}\) for which, by construction, a witness \(t\) is readily available (cf. Gödel 1941, 199). Gödel was confident that his approach could be extended to other branches of constructive mathematics:

    If you apply intuitionistic logic in any branch of mathematics you can reduce it to a finitistic system of this kind under the sole hypothesis that the primitive functions and primitive relations of this branch of mathematics are calculable, respectively, decidable. […] This finitistic system […] is always obtained by introducing functions of higher types analogous to these, with the only difference that the individuals upon which the hierarchy of functions is built up are no longer the integers but the primitive objects of the branch of mathematics under consideration. (Gödel 1941, 195–196)

    Summarizing, Gödel saw that Heyting’s formalization of intuitionistic logic and mathematics contained some prima facie non-constructive methods of proof, not unlike those that we identified in the previous section, which make it possible to prove an existential statement without a constructed witness, e.g., by applying the elimination rule for the existential quantifier or the reductio ad absurdum. The significance of his result, as Gödel himself remarked, is that at least as far as number theory is concerned, intuitionistic logic is constructively sound, because a witness can always be recovered. Gödel’s system \(\mathbf{T}\) could perhaps be viewed as a particular implementation of Weyl’s analysis of constructivity, which is not to say, of course, that Weyl would have agreed with the details of Gödel’s approach.

    5 Conclusions

    One crucial problem for constructivism consists in being able to provide a witness for the proof of an existential statement. Weyl and Gödel intended to address this problem by curtailing the deductive strength of the existential quantifier, and more specifically, by forsaking the elimination rule. This is because statements derivable by the elimination rule deviate from the requirement of actuality of constructions, being mere possibilities and thereby introducing a potentially non-constructive (“ideal”) element in inference.

    In our discussion, we have compared three main positions:

    1. Intuitionism of Brouwer and Heyting: a constructive proof is not constrained by the availability of actual witnesses, for it suffices to be able in principle to compute them. The proof-theoretic systems introduced by Gentzen follow this paradigm, in which all the deductive rules corresponding to Heyting’s logic are admissible, because witnesses can be obtained by cut elimination or normalization.

    2. Hilbert’s finitist standpoint: ideal elements, including those posited by classical mathematics, are harmless as long as they can be justified by a finitary consistency proof. This can be understood at least partly as an attempt to meet the challenge of intuitionism without rejecting classical logic.

    3. Strict constructivism as defined by Weyl and Gödel: a truly constructive proof cannot include any ideal elements, and the notion of proof itself should follow the same standards. Prima facie, similar restrictions have a potential for reducing the deductive power of constructive theories, but in fact, Gödel’s approach is only partially revisionistic: his view is that while his variety of constructivism can be more restrictive in general, when confined to a theory built along the lines of \(\mathbf{T}\) for arithmetic (i.e., based exclusively on computable functions and decidable predicates), the full power of Heyting’s logic can be recovered (see Gödel 1941, 195–196).

    There is now one looming question: when it comes to ideal elements, how strict can constructivism be? In particular, can Gödel claim to have succeeded in providing a firmer conceptual foundation for constructivism?

    One crucial problem is whether Gödel’s computable functionals can really be conceptualized without breaking the principle of quantifier scope, as formulated by Weyl and himself,10 since functionals, like any function, require a \(\forall\exists\) condition: \(f\) is a function such that for each argument, it computes a value, or \(\forall x\exists y(f(x)=y)\). But if we transform that into its Skolem form, the result is not quite explanatory: \(\exists g\forall x(f(x)=g(x))\). That is, \(f\) is a function that behaves exactly like some other function \(g\). One can perhaps say that there is no need for such an explanation within \(\mathbf{T}\), but only for our understanding of \(\mathbf{T}\) in the metatheory. This, however, would make Gödel’s conceptual reform of constructivism much less convincing, for it would rely on an implicit grasp of \(\mathbf{T}\) which does not follow \(\mathbf{T}\)’s own principles.

    More generally, are higher-order concepts such as functionals to be classified as ideal elements? A related objection has been leveled by Tait (2006). According to Tait, Gödel’s replacement of proofs by computable functionals is unwarranted, on the grounds that determining that a functional is computable may involve resources of arbitrary complexity, and in general, all of Heyting arithmetic. The rationale behind Tait’s main argument is that constructivity should be defined in terms of methods of reasoning, rather than on the assumption of computability and decidability. Gödel’s later view appears to have been that ultimately, both the finitist standpoint and constructive logic are forced to include ideal elements and drop the assumption that proof constructions must be intuitively given spatiotemporal arrangements (see Gödel 1958, 244).

    In sum, the question of the conceptual semantics of constructivism remains open, as the discussion of ex falso also illustrates. Standard constructivism attempts to strike a balance between the ideality of classical logic and the finitist quest for actuality, and while a fully satisfactory balance seems to be hard to attain, the foundational research conducted along the way has provided a rich account of the logical phenomena involved.

    References

    Brouwer, Luitzen Egbertus Jan. 2020. Intuitionismus. 2nd ed. Mathematik im Kontext. Berlin: Springer Verlag. Edited by Dirk van Dalen and David E. Rowe.
    DeVidi, David and Kenyon, Tim, eds. 2006. A Logical Approach to Philosophy. Essays in Honour of Graham Solomon. The University of Western Ontario Series in Philosophy of Science n. 69. Dordrecht: Springer Verlag.
    Gentzen, Gerhard. 1935. “Untersuchungen über das logische Schliessen.” Mathematische Zeitschrift 39: 176–210, 405–431. Republished as Gentzen (1969), doi:10.1007/BF01201353.
    —. 1969. Untersuchungen über das logische Schliessen. Darmstadt: Wissenschaftliche Buchgesellschaft.
    Glivenko, Vladimir I. 1929. “Sur quelques points de la logique de M. Brouwer.” Académie Royale de Belgique, Bulletin de la Classe des sciences 5(15): 183–188.
    Gödel, Kurt. 1932. “Zur intuitionistischen Arithmetik und Zahlentheorie.” Ergebnisse eines mathematischen Kolloquiums 4: 34–38. Reprinted and translated in Gödel (1986, 286–295), reprinted, with the original page numbers, in Menger (1998, 216–220).
    —. 1933. “The Present Situation in the Foundations of Mathematics.” in Collected Works. Volume III: Unpublished Essays and Lectures, pp. 45–53. Oxford: Oxford University Press. Edited by Solomon Feferman, John W. Dawson, Jr., Warren Goldfarb, Charles Parsons and Robert M. Solovay.
    —. 1938. “Vortrag bei Zilsel.” in Collected Works. Volume III: Unpublished Essays and Lectures, pp. 86–113. Oxford: Oxford University Press. Translation by Charles Parsons.
    —. 1941. “In What Sense is Intuitionistic Logic Constructive?” in Collected Works. Volume III: Unpublished Essays and Lectures, pp. 189–201. Oxford: Oxford University Press. Edited by Solomon Feferman, John W. Dawson, Jr., Warren Goldfarb, Charles Parsons and Robert M. Solovay.
    —. 1958. “Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes.” Dialectica 12(47–48): 280–287. English translation published as Gödel (1980); reprinted in Gödel (1990, 241–252), doi:10.1111/j.1746-8361.1958.tb01464.x.
    —. 1980. “On a Hitherto Unexploited Extension of the Finitary Standpoint.” The Journal of Philosophical Logic 9(2): 133–142. Reprinted in Gödel (1990, 271–280), doi:10.1007/bf00247744.
    —. 1986. Collected Works. Volume I: Publications 1929–1936. Oxford: Oxford University Press. Edited by Solomon Feferman, John W. Dawson, Jr., Stephen C. Kleene, Gregory H. Moore, Robert M. Solovay and Jean van Heijenoort.
    —. 1990. Collected Works. Volume II: Publications 1938–1974. Oxford: Oxford University Press. Edited by Solomon Feferman, John W. Dawson, Jr., Stephen C. Kleene, Gregory H. Moore, Robert M. Solovay and Jean van Heijenoort.
    Heyting, Arend. 1930. “Die formalen Regeln der intuitionistischen Logik I.” Sitzungsberichte der Preussischen Akademie der Wissenschaften 2: 42–56. English translation in Mancosu (1998, 311–327).
    —. 1956. Intuitionism: An Introduction. Studies in Logic and the Foundations of Mathematics n. 17. Amsterdam: North-Holland Publishing Co. Second edition: Heyting (1966).
    —. 1966. Intuitionism: An Introduction. 2nd ed. Studies in Logic and the Foundations of Mathematics n. 41. Amsterdam: North-Holland Publishing Co. First edition: Heyting (1956).
    Hilbert, David. 1922. “Die logischen Grundlagen der Mathematik.” Mathematische Annalen 88(1–2): 151–165, doi:10.1007/BF01448445.
    Hilbert, David and Bernays, Paul. 1934. Die Grundlagen der Mathematik, Band I. Berlin: Springer Verlag.
    —. 1939. Die Grundlagen der Mathematik, Band II. Berlin: Springer Verlag, doi:10.1007/978-3-642-86896-2.
    Husserl, Edmund. 1939. Erfahrung und Urteil, Untersuchungen zur Genealogie der Logik. Praha: Academia.
    Johansson, Ingebrigt. 1937. “Der Minimalkalkül, ein reduzierter intuitionistischer Formalismus.” Compositio Mathematica 4: 119–136, http://eudml.org/doc/88648.
    Kolmogorov, Andrei Nikolaevich. 1925. “O principe tertium non datur.” Matematiceskoj Sbornik 32(4): 646–667. Translated by Jan van Heijenoort as “On the Principle of Excluded Middle” in van Heijenoort (1967, 416–437).
    —. 1932. “Zur Deutung der intuitionistischen Logik.” Mathematische Zeitschrift 35: 58–65, doi:10.1007/BF01186549.
    Mancosu, Paolo, ed. 1998. From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s. Oxford: Oxford University Press.
    Menger, Karl, ed. 1998. Ergebnisse eines mathematischen Kolloquiums. Wien: Springer Verlag. Herausgegeben von E. Dierker, K. Sigmund, mit Beiträgen von J.W. Dawson Jr., R. Engelking und W. Hildenbrand, Geleitwort von G. Debreu, Nachwort von F. Alt.
    Tait, William Walker. 2006. “Gödel’s Interpretation of Intuitionism.” Philosophia Mathematica 14(2): 208–228, doi:10.1093/philmat/nkj004.
    Troelstra, Anne Sjerp and van Dalen, Dirk. 1988. Constructivism in Mathematics. An Introduction. Volume I. Studies in Logic and the Foundations of Mathematics n. 121. Amsterdam: North-Holland Publishing Co.
    van Heijenoort, Jan, ed. 1967. From Frege to Gödel: A Source Book in Mathematical Logic 1879-1931. Cambridge, Massachusetts: Harvard University Press.
    Weyl, Hermann. 1918. Das Kontinuum. Leipzig: Veit.
    —. 1921. “Über die neue Grundlagenkrise der Mathematik.” Mathematische Zeitschrift 10: 39–79, doi:10.1007/BF02102305.
    —. 2009. The Philosophy of Mathematics and Natural Science. Princeton, New Jersey: Princeton University Press. Translated by Olaf Helmer, with a new introduction by Frank Wilczek, doi:10.1515/9781400833337.