We put forth an analysis of actual causation. The analysis
centers on the notion of a causal model that provides only partial
information as to which events occur. The basic idea is this: c causes e only if there is a causal
model that is uninformative on e and in which e will occur if c does. We show that our
analysis captures more causal scenarios than any account that tests
for counterfactual dependence under certain contingencies.
We analyse causation between token events. Here is the gist of the
analysis: an event c is a cause of
another event e only if both events
occur, and—after taking out the information whether or not e occurs—e will occur if c does. We will show that the analysis
successfully captures a wide range of causal scenarios, including
overdetermination, preemption, switches, and scenarios of double
prevention. This set of scenarios troubles counterfactual accounts of
actual causation. Even sophisticated counterfactual accounts still fail
to deal with all of its members. And they fail for a principled reason:
to solve overdetermination and preemption, they rely on a strategy which
gives the wrong results for switches and a scenario of double
prevention. Our analysis, by contrast, is not susceptible to this
principled problem.
Counterfactual accounts try to analyse actual causation in terms of
counterfactual dependence. An event counterfactually depends on an event
if and only if (iff), were not to occur, would not occur. Among the accounts in
the tradition of Lewis (1973), counterfactual dependence between
two occurring events is taken to be sufficient for causation.1 That is, an occurring event is a cause of a distinct occurring
event if, were not to occur, would not occur. Counterfactual
accounts thus ask, “What would happen if the putative cause were
absent?” Under this counterfactual assumption, they claim causation if
the presumed effect is absent as well.
Overdetermination is troublesome for counterfactual accounts.
Consider the scenario depicted in figure 1.
Figure 1: Overdetermination
Neuron and neuron fire. The firing of each of and alone suffices to excite neuron . Hence, the common firing of and overdetermines to fire. Arguably, the firing of is a cause of ’s excitation, and so is the firing of
.
What would have happened had
not fired? If had not fired,
would have been excited anyways.
After all, would still have
fired. Hence, as is well known,
is not a cause of on Lewis’s
(1973)
account. More sophisticated accounts solve the scenario of
overdetermination as follows: ’s
excitation is a cause of ’s firing
because ’s firing counterfactually
depends on ’s excitation if were not to fire. The non-actual
contingency that does not fire
reveals a hidden counterfactual dependence of the effect on its cause . The general strategy is to test for
counterfactual dependence under certain contingencies, be they actual or
non-actual. We call counterfactual accounts relying on this strategy
‘sophisticated’.2
Numerous sophisticated accounts analyse causation relative to a
causal model. A causal model represents a causal scenario by specifying
which events occur and how certain events depend on others. Formally, a
causal model
is given by a variable assignment
and a set of structural
equations. For the above scenario of overdetermination, may be given by the set , which says that all
neurons fire. is given by , which says that fires iff or does. In this causal model, we may set
the variable to , to and propagate forward the changes effected by these
interventions. Given that
and , the structural equation
determines that . The
equation tells us that would not
have fired, if had not fired
under the contingency that had
not fired. Hence, the above solution of overdetermination can be
adopted: is a cause of (relative to the causal model) because
counterfactually depends on if is set by intervention.3
We solve the problem of overdetermination in a different way. The
idea is this: remove enough information about which events occur so that
there is no information on whether or not a putative effect occurs; an
event is then a cause of this
effect only if—after the removal of information—the effect will occur if
does.
We use causal models to implement the idea. The result of the
information removal is given by a causal model that provides
only partial information as to which events occur, but complete
information about the dependences between the events. To outline the
preliminary analysis: is a cause
of relative to a causal model
iff
and are true in , and
there is such
that
contains no information as to whether is true, but in which will become true if does.
By these conditions, we test whether an event brings about another
event in a causal scenario. Causation is here actual production.
Why is ’s excitation a cause of
’s firing in the overdetermination
scenario? Take the causal model that contains no information about whether or
not the effect occurs: Here, a neuron is dotted iff the model contains no information as to
whether the neuron fires or not. Since all neurons are dotted, the
causal model contains no information on which neurons fire. But it still
contains all the information about dependences among the neurons, as
encoded by the structural equation of the overdetermination scenario.
Let us now intervene such that
becomes excited: The structural equation is triggered and
determines to fire. Hence, ’s excitation is a cause of ’s firing on our analysis. The
overdetermination scenario is solved without counterfactually assuming
the absence of the cause and without invoking any contingency.
It should be noted that the recent counterfactual theories of Gallow (2021) and
Andreas and
Günther (2021a) are not sophisticated in our
sense: they do not test for counterfactual dependence under certain
contingencies. And so they are not susceptible to the principled
problem. Indeed, both theories solve the set of scenarios that troubles
sophisticated accounts. The analysis of Andreas and Günther (2021a) relies on a removal of
information just like the analysis proposed here, and can thus be seen
as its counterfactual counterpart. We will briefly and favourably
compare our analysis to its counterfactual counterpart in the
Conclusion.
In what follows, we refine our analysis, apply it to causal
scenarios, and compare it to counterfactual accounts. In section 1, we introduce our account of causal models. In
section 2, we state a preliminary version of our
analysis and explain its rationale. We apply this analysis to various
causal scenarios in section 3. In response to
certain switching scenarios, we amend our preliminary analysis by a
condition of weak difference making. In section 4, we state the final version of our analysis. In
section 5, we compare our analysis to the extant
counterfactual accounts. Section 6 concludes the
paper.
1 Causal Models
In this section, we explain the basic concepts of causal models. Our
account parallels the account of causal models in Halpern (2000).
Unlike Halpern, we introduce structural equations as formulas and not as
functions. Another difference is that our account is confined to binary
variables, the values of which are represented by literals.4 We will see shortly that these
modelling choices allow us to define causal models in a straightforward
way, in particular causal models that carry only partial information as
to which events occur. In the appendix, we supplement the explanations
of the core concepts of causal models with precise definitions.
Our causal models have two components: a set of structural equations and a
consistent set of literals. Where
is a propositional variable,
is a positive literal and a negative literal. We give
literals a semantic role. The literals in denote which events occur and which do
not, that is, which events and absences are actual. means that the event
corresponding to occurs. , by contrast, means that no
token event of the relevant type
occurs. Since the set of literals is consistent, it cannot be that both
and are in . Arguably, an event cannot both occur
and not occur at the same time.
A structural equation denotes whether an event would occur if some
other events were or were not to occur. Where is a propositional variable and a propositional formula, we say that
is a structural equation.
Each logical symbol of is
either a negation, a disjunction, or a conjunction. can be seen as a truth function
whose arguments represent occurrences and non-occurrences of events. The
truth value of determines
whether or .
Consider the scenario of overdetermination depicted in figure 1. There are arrows from the neurons and to the neuron . The arrows represent that the
propositional variable is
determined by the propositional variables and . The specific structural equation of
the overdetermination scenario is . This equation says that occurs iff or does. A set of structural equations
describes dependences between actual and possible token events.
For readability, we will represent causal models in two-layered
boxes. The causal model of the overdetermination scenario, for example,
is given by . We will depict such causal models in a box, where the
upper layer shows the set of
structural equations and the lower layer the set of actual literals. For the
overdetermination scenario, we obtain: We say that a set of literals satisfies a structural
equation just in case both
sides of the equation have the same truth value when plugging in the
literals in . In the case of
overdetermination, the actual set of literals satisfies the structural
equation. By contrast, the set of literals does not satisfy . When plugging in the
literals, the truth values of and
do not match. We say that
a set of literals satisfies a set
iff satisfies each member of .
The structural equations and the literals determine which events
occur and which do not occur in a causal model. This determination can
be expressed by a relation of satisfaction between a causal model and a
propositional formula.
Definition 1( satisfies ). satisfies iff
is true in all complete sets
of literals that extend and satisfy . A set of literals is complete iff each
propositional variable (in the language of M) is assigned a truth
value by .
If is complete, this
definition boils down to: satisfies iff
satisfies , or does not satisfy . Provided is complete, satisfies at least
one of and for any formula .
Our analysis relies on causal models that contain no information as
to whether or not an effect occurs. We say that a causal model is
uninformative about a formula iff satisfies none of
and . Note that cannot be
uninformative on any formula if
is complete.
In the scenario of overdetermination, the causal model is uninformative on
for . There are four complete
extensions that satisfy . One of these is . Hence, does not satisfy . Similarly, does not satisfy
. There is a complete
extension of that satisfies but fails to satisfy . The actual set of literals, for example,
but also the sets and .
The structural equation constrains the overdetermination scenario to
four possible cases. These cases are expressed by the complete sets of
literals which satisfy .
Why is not
uninformative on for ? Well, there is no complete
extension of that satisfies the
structural equation in but fails
to satisfy . There are only two
such complete extensions: and .
If remains in the set of literals, is determined independent of whether or
not occurs.
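To make these notions concrete, the following sketch implements satisfaction and uninformativeness by brute-force enumeration of complete extensions, in the spirit of Definition 1. The code is merely illustrative: the representation (structural equations as Boolean functions, literal sets as Python dictionaries) and the variable names A, B, C for the overdetermination scenario are our own choices, not the authors' notation.

from itertools import product

def extensions(variables, equations, literals):
    """All complete truth assignments that extend `literals` and satisfy every equation."""
    result = []
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(world[v] == b for v, b in literals.items()) and \
           all(world[v] == f(world) for v, f in equations.items()):
            result.append(world)
    return result

def satisfies(variables, equations, literals, phi):
    """Definition 1: the model satisfies phi iff phi is true in all complete extensions."""
    return all(phi(w) for w in extensions(variables, equations, literals))

def uninformative(variables, equations, literals, phi):
    """The model is uninformative on phi iff it satisfies neither phi nor its negation."""
    return (not satisfies(variables, equations, literals, phi)
            and not satisfies(variables, equations, literals, lambda w: not phi(w)))

# Overdetermination (figure 1), with assumed variable names A, B, C:
VARS = ["A", "B", "C"]
EQS = {"C": lambda w: w["A"] or w["B"]}      # C fires iff A or B does
ACTUAL = {"A": True, "B": True, "C": True}   # all neurons fire

C_fires = lambda w: w["C"]
print(satisfies(VARS, EQS, ACTUAL, C_fires))           # True: the actual model satisfies C
print(uninformative(VARS, EQS, {}, C_fires))           # True: the empty literal set is uninformative on C
print(uninformative(VARS, EQS, {"A": True}, C_fires))  # False: with A kept, C is determined

The last line mirrors the observation above: once one of the firing overdeterminers is kept in the set of literals, the effect is determined no matter what, so the model is no longer uninformative.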
It remains to introduce interventions. Recall that a structural
equation determines the
truth value of the variable if
certain variables occurring in
are given truth values by the
literals in . To represent an
intervention that sets to one of
the truth values, we replace the equation by the corresponding literal or . We implement such interventions by the notion of a submodel.
is a submodel of relative to a consistent set of literals just in case contains the literals in and the structural equations of for the variables which do not occur in
. In symbols,
We denote interventions by an operator that takes a model and a consistent set of literals , and returns a submodel. In symbols,
. In the
overdetermination scenario, for instance, we may intervene on by . This yields: . The causal model satisfies , and satisfies .
If were actual under the
contingency that , would be actual.
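The intervention operator can be sketched in the same style. We assume, as one natural reading of the definition of a submodel, that intervening with a set of literals drops the structural equations of the intervened variables and adds the new literals to those already in the model (the helpers and the overdetermination model are reused from the sketch above):

def intervene(equations, literals, new_literals):
    """Submodel: drop equations of intervened variables, add the new literals to the model."""
    eqs = {v: f for v, f in equations.items() if v not in new_literals}
    lits = dict(literals)
    lits.update(new_literals)
    return eqs, lits

# Intervening on the actual overdetermination model so that B does not fire:
eqs_i, lits_i = intervene(EQS, ACTUAL, {"B": False})
print(satisfies(VARS, eqs_i, lits_i, lambda w: not w["B"]))  # True: the submodel satisfies not-B
print(satisfies(VARS, eqs_i, lits_i, lambda w: w["C"]))      # True: C is still satisfied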
Finally, note that the above definition of satisfaction applies to
causal models and causal submodels. The definition does not only capture
the relation of a causal model satisfying a formula , but also the relation of a causal
submodel
satisfying such a formula. This is explained further in the
appendix.
2 The Analysis
We are now in a position to spell out our analysis in a more precise
way. The key idea is as follows: for to be a cause of , there must be a causal model that is
uninformative about , while
intervening by determines to be true. The latter condition must
be preserved under all interventions by a set of actual events. In more formal
terms:
Definition 2(Actual Cause, preliminary). Let be a causal model such that satisfies .
is an actual cause of relative to
iff
(C1) satisfies and , and
(C2) there is such that is
uninformative on , while for all
,
satisfies .
The rationale behind our analysis is straightforward: there must be a
way in which a genuine cause actually brings about its effect. This
production of the effect can be reconstructed by means of a causal model
that
contains some of the information of the original causal model , but no information
about whether the effect is actual. Or so condition (C2) requires.
Furthermore, (C2) says that the production of an effect
must respect actuality. The idea is that the causal process initiated by
a genuine cause must respect what actually happened. A genuine cause
cannot produce its effect via non-actual events and absences. The
process from cause to effect must come about as it actually happened.
This idea requires that a genuine cause must bring about its effect by
events and absences that are actual. We implemented this requirement as
follows: intervening upon the uninformative model by any subset
of the actual events and absences
must preserve that will become
actual if does. Thereby, it is
ensured that a genuine cause cannot bring about its effect by events or
absences that are not actual. If
is a genuine cause, there can be no subset of the actual literals that interferes with the determination
of by in the respective uninformative model.
We describe this feature of (C2) as
intervention by actuality.
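Putting the pieces together, the preliminary analysis itself can be sketched as a brute-force test over all partial models. The reading of (C1) and (C2) below, like the names, is our reconstruction of Definition 2, not the authors' formulation; the helpers come from the sketches above.

from itertools import combinations

def subsets(literals):
    """All subsets of a set of literals, returned as dictionaries."""
    items = list(literals.items())
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            yield dict(combo)

def is_cause_prelim(variables, equations, actual, c, e):
    """Definition 2 as we reconstruct it; c and e are literals given as (variable, value) pairs."""
    holds = lambda lit: (lambda w: w[lit[0]] == lit[1])
    # (C1) the actual model satisfies both c and e
    if not (satisfies(variables, equations, actual, holds(c))
            and satisfies(variables, equations, actual, holds(e))):
        return False
    # (C2) some partial model uninformative on e in which intervening with c
    #      determines e, under every further intervention by actual literals
    for partial in subsets(actual):
        if not uninformative(variables, equations, partial, holds(e)):
            continue
        if all(satisfies(variables, *intervene(equations, partial, {c[0]: c[1], **i}), holds(e))
               for i in subsets(actual)):
            return True
    return False

# Overdetermination: each overdeterminer comes out as a cause of C.
print(is_cause_prelim(VARS, EQS, ACTUAL, ("A", True), ("C", True)))  # True
print(is_cause_prelim(VARS, EQS, ACTUAL, ("B", True), ("C", True)))  # True

Both overdeterminers come out as causes, without any counterfactual assumption about the absence of the other and without invoking contingencies.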
3 Scenarios
In this section, we test our analysis of actual causation against
causal scenarios, and compare the results to the counterfactual accounts
due to Lewis (1973),
Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015).
We follow Paul and
Hall (2013,
10) in laying out the structure of causal scenarios by neuron
diagrams. “Neuron diagrams earn their keep,” they write, “by
representing a complex situation clearly and forcefully, allowing the
reader to take in at a glance its central causal characteristics.”5 We introduce simple neuron diagrams
for which there is always a corresponding causal model. Our causal
models, however, can capture more causal scenarios than simple neuron
diagrams.
A neuron diagram is a graph-like representation that comes with
different types of arrows and different types of nodes. Any node stands
for a neuron, which fires or else does not. The firing of a neuron is
visualized by a gray-shaded node, the non-firing by a white node. For
the scenarios to be considered, we need two types of arrows. Each arrow
with a head represents a stimulatory connection between two neurons,
each arrow ending with a black dot an inhibitory connection.
Furthermore, we distinguish between normal neurons that become
excited if stimulated by another and stubborn neurons whose
excitation requires two stimulations. Normal neurons are visualized by
circles, stubborn neurons by thicker circles. A neuron diagram obeys
four rules. First, the temporal order of events is left to right.
Second, a normal neuron will fire if it is stimulated by at least one
and inhibited by none. Third, a stubborn neuron will fire if it is
stimulated by at least two and inhibited by none. Fourth, a neuron will
not fire if it is inhibited by at least one.
Typically, neuron diagrams are used to represent events and absences.
The firing of a neuron indicates the occurrence of some event and the
non-firing indicates its non-occurrence. Recall that we analyse
causation between token events relative to a causal model , where the causal
model represents the causal scenario under consideration. We thus need a
correspondence between neuron diagrams and causal models.
Here is a recipe to translate an arbitrary neuron diagram, as
detailed here, into a causal model. Given a neuron diagram, the
corresponding causal model can be constructed in a step-wise fashion.
For each neuron of the neuron
diagram:
Assign a propositional
variable .
If fires, add the positive
literal to the set of literals.
If does not fire, add the
negative literal to .
If has an incoming arrow,
write on the right-hand side of ’s
structural equation a propositional formula such that is true iff fires.6
This recipe adds a positive literal to the set of literals for each neuron that fires,
and a negative literal for
each neuron that does not fire. Then the neuron rules are translated into
structural equations. One can thus read off a neuron diagram its
corresponding causal model: if a neuron is shaded gray, is in the set of literals of the corresponding causal
model; if a neuron is white,
is in .
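The recipe can be mechanized. The input format below (each neuron listing whether it fires, its stimulating and inhibiting parents, and whether it is stubborn) is invented for illustration; the function applies the four neuron rules to produce the set of literals and the structural equations:

def model_from_diagram(neurons):
    """neurons: dict name -> {"fires": bool, "stim": [parents], "inhib": [parents], "stubborn": bool}."""
    literals = {name: spec["fires"] for name, spec in neurons.items()}
    equations = {}
    for name, spec in neurons.items():
        if not spec.get("stim") and not spec.get("inhib"):
            continue  # no incoming arrow: exogenous neuron, no structural equation
        def eq(w, spec=spec):
            excited = sum(w[p] for p in spec.get("stim", []))
            needed = 2 if spec.get("stubborn") else 1
            inhibited = any(w[p] for p in spec.get("inhib", []))
            return excited >= needed and not inhibited
        equations[name] = eq
    return list(neurons), equations, literals

# Overdetermination (figure 1): A and B fire, each alone suffices for C.
overdetermination = {
    "A": {"fires": True},
    "B": {"fires": True},
    "C": {"fires": True, "stim": ["A", "B"]},
}
# Conjunctive causes (figure 2): the stubborn neuron C needs both A and B.
conjunctive = {
    "A": {"fires": True},
    "B": {"fires": True},
    "C": {"fires": True, "stim": ["A", "B"], "stubborn": True},
}
VARS_OD, EQS_OD, ACTUAL_OD = model_from_diagram(overdetermination)

Running model_from_diagram(overdetermination) reproduces, up to representation, the causal model used in the sketches above.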
We have already added a feature to neuron diagrams in the
introduction. Recall that dotted nodes represent neurons about which
there is no information as to whether or not they fire. In more formal
terms, if and , the corresponding
neuron will be dotted. We portray now how our analysis solves the
problems posed by overdetermination, conjunctive causes, early and late
preemption, switches, prevention, and two scenarios of double
prevention.
3.1 Overdetermination
Scenarios of overdetermination are commonly represented by the neuron
diagram depicted in figure 1. Here is a story
that fits the structure of overdetermination: A prisoner is shot by two
soldiers at the same time ( and
), and each of the bullets is
fatal without any temporal precedence. Arguably, both shots should
qualify as causes of the death of the prisoner ().
Our recipe translates the neuron diagram of figure 1 into the following causal model : Relative to , is a cause of . For this to be seen, consider the
following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies . And intervening by any
subset of actual events does not undo the determination.7 In
more detail, any intervention by a subset of yields a causal model that
determines to be true. Due to the
symmetry of the scenario, is a
cause of .8
Overdetermination is trouble for the counterfactual account of Lewis (1973).
There, Lewis defines actual causation as the transitive closure of
counterfactual dependence between occurring events. Let and be distinct events. is a cause of iff and occur, and there is a sequence of
distinct events and absences such that each element in the sequence
(except the first) counterfactually depends on its predecessor in a
non-backtracking way.9 Recall that counterfactually depends on just in case if were not to occur, would not occur. Lewis insists that
each counterfactual in the series of counterfactual dependences is
non-backtracking.10 A backtracking counterfactual
retraces some past causes from an effect: if the effect were not to occur, its past causes
and must have been absent. Intuitively,
this backtracking counterfactual is true in the confines of the
overdetermination scenario. Yet Lewis does not allow such backtracking
counterfactuals to figure in the series of counterfactual
dependences.
It follows from Lewis’s account that non-backtracking counterfactual
dependence between occurring events is sufficient for causation. As soon
as and occur, there is a sequence . If, in addition,
counterfactually depends on in a non-backtracking way, is a cause of . In the scenario of overdetermination,
is not a cause of on this account.11
There is no suitable series of counterfactual dependences. If had not fired, would have been excited all the same.
After all, would still have fired
and excited . Due to the symmetry
of the scenario, is not a cause
of either. But then, what caused
the death of the prisoner? Surely, we do not want to say that the death
is uncaused.
The counterfactual accounts of causation due to Hitchcock (2001)
and Halpern
and Pearl (2005) solve the scenario of
overdetermination as follows: is
a cause of because counterfactually depends on if is set by intervention. Their tests for causation allow for
non-actual contingencies, that is, to set variables to non-actual values
and to keep them fixed at these non-actual values. We will see that this
feature is problematic in switching scenarios and extended double
prevention.
Halpern (2015)
modifies the Halpern and Pearl (2005) definition of actual causation.
The main difference is that the modified definition admits only actual
contingencies for the counterfactual test. Hence, the modified
definition fails to recognize the individual overdeterminers as actual
causes, while it counts the set of overdeterminers to be an actual cause of .12 It has trouble
handling overdetermination, as already pointed out by Andreas and Günther
(2021b). This indicates that
overdetermination haunts counterfactual accounts to date.
3.2 Conjunctive Causes
In a scenario of conjunctive causes, an effect occurs only if two
causes obtain. The neuron diagram in figure 2
depicts a scenario of conjunctive causes:
Figure 2: Conjunctive causes
The neurons and fire. Together they bring the stubborn
neuron to fire. Had one of and not fired, would not have been excited. Hence, the
firing of both neurons is necessary for ’s excitation.
Our recipe translates the neuron diagram of figure 2 into the following causal model : The scenario of conjunctive causes differs
from the scenario of overdetermination only in the structural equation
for . While the structural
equation is disjunctive in the scenario of overdetermination,
here the equation is conjunctive. The occurrence of both
events, and , is necessary for to occur.
Relative to , is a cause of
. For this to be seen, consider
the following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies . Again, due to the
symmetry of the scenario, is a
cause of .13
At first sight, conjunctive causes seem to be no problem for
counterfactual accounts. If had
not fired, would not have fired.
Hence, on the counterfactual accounts, is a cause of . And by the symmetry of the scenario,
is a cause of . However, the accounts due to Lewis (1973) and
Hitchcock (2001) do
not allow sets of events to be causes, unlike the definitions of actual
causation provided by Halpern and Pearl (2005) and Halpern (2015).
Yet the latter definitions still do not count the set containing and as an actual cause of in this scenario of
conjunctive causes. Hence, none of these counterfactual
accounts counts the set containing the two individual causes as a cause
of the effect. This is peculiar for reasons worked out by Andreas and Günther
(2021b).
3.3 Early Preemption
Preemption scenarios are about backup processes: there is an event
that, intuitively, causes . But even if had not occurred, there is a backup
event that would have brought
about . Paul and Hall (2013,
75) take the following neuron diagram as canonical example of
early preemption:
Figure 3: Early preemption
’s firing excites neuron , which in turn leads to an excitation
of neuron . At the same time,
’s firing inhibits the excitation
of . Had not fired, however, would have excited , which in turn would have led to an
excitation of . The actual cause
preempts the mere potential cause
.14
Our recipe translates the neuron diagram of early preemption into the
following causal model : Relative to , is a cause of . For this to be seen, consider the
following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies .
Relative to , is not a cause
of . The reason is that actuality
intervenes. The causal model is uninformative on only for or . Intervening on by yields a causal model in
which does not produce , independently of the choice of . In more formal terms, does not satisfy . For each choice of , there is a complete extension
that satisfies the structural equations , and but does not satisfy .
This extension of is .
Intuitively, is not a genuine
cause of since would produce only via an event that did not actually occur. Hence,
is not a cause of because does not actually produce
.
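To see how the test plays out mechanically, here is the early preemption scenario run through the sketch from sections 1 and 2, under assumed neuron names (C the preempting cause, A the backup, D and B the intermediate neurons, E the effect):

VARS_EP = ["A", "B", "C", "D", "E"]
EQS_EP = {
    "D": lambda w: w["C"],                    # C excites D
    "B": lambda w: w["A"] and not w["C"],     # A would excite B, unless C inhibits it
    "E": lambda w: w["D"] or w["B"],          # D or B excites E
}
ACTUAL_EP = {"A": True, "B": False, "C": True, "D": True, "E": True}

print(is_cause_prelim(VARS_EP, EQS_EP, ACTUAL_EP, ("C", True), ("E", True)))  # True
print(is_cause_prelim(VARS_EP, EQS_EP, ACTUAL_EP, ("A", True), ("E", True)))  # False

The backup A fails precisely because of intervention by actuality: every partial model that is uninformative on E stops determining E once the actual literal stating that B does not fire is added by intervention.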
Lewis’s (1973)
account solves early preemption. In figure 3, is a
cause of . Both occur, and there
is a sequence such that
counterfactually depends in a non-backtracking way on , and does so on . The counterfactual ‘If had not fired, its cause would have to have not fired’ is
backtracking. Barring backtracking, we do not obtain that would have fired because did not, and thus would not be inhibited. Hence, if had not fired, would still not have fired. And so ‘If
had not fired, would not have fired’ comes out true
under the non-backtracking requirement. is not a cause of . For there is no sequence of events and
absences from to where each counterfactually depends on
its predecessor in a non-backtracking way. If had fired, would still have fired.
The solution to early preemption by Hitchcock (2001)
and Halpern
and Pearl (2005) is analogous to their solution for
overdetermination. is a cause of
because counterfactually depends on under the contingency that . By contrast to their solution for
overdetermination, the contingency is actual in cases of early
preemption. Hence, Halpern’s (2015) account solves early preemption as
well.
3.4 Late Preemption
Lewis (1986b,
200) subdivides preemption into early and late. We have
discussed early preemption in the previous section: a backup process is
cut off before the process started by the preempting cause brings about
the effect. In scenarios of late preemption, by contrast, the backup
process is cut off only because the genuine cause brings about the
effect before the preempted cause could do so. Lewis (2000, 184) provides the following story
for late preemption:
Billy and Suzy throw rocks at a bottle. Suzy throws first, or maybe
she throws harder. Her rock arrives first. The bottle shatters. When
Billy’s rock gets to where the bottle used to be, there is nothing there
but flying shards of glass. Without Suzy’s throw, the impact of Billy’s
rock on the intact bottle would have been one of the final steps in the
causal chain from Billy’s throw to the shattering of the bottle. But,
thanks to Suzy’s preempting throw, that impact never happens.
Crucially, the backup process initiated by Billy’s throw is cut off
only by Suzy’s rock impacting the bottle. Until her rock impacts the
bottle, there is always a backup process that would bring about the
shattering of the bottle an instant later.15
Halpern and
Pearl (2005, 861–862) propose a causal model
for late preemption, which corresponds to the following neuron
diagram:
Figure 4: Late preemption
Suzy throws her rock () and
Billy his (). Suzy’s rock impacts
the bottle (), and so the bottle
shatters (). Suzy’s rock impacting
the bottle () prevents Billy’s
rock from impacting the bottle (). (The “inhibitory signal” from takes “no time” to arrive at .)
Our recipe translates the neuron diagram of late preemption into the
following causal model : Relative to , is a cause of . For this to be seen, consider the
following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies .
Relative to , is not a cause
of . The intuitive reason is that
Billy’s rock did not actually impact the bottle. The formal reasoning is
perfectly analogous to the one for the scenario of early preemption in
the previous section. Our analysis solves early and late preemption in a
uniform manner.
Lewis’s (1973)
account does not solve late preemption. Suzy’s throw () is not a cause of the bottle
shattering (). There is no
sequence
of events and absences such that each event (except ) counterfactually depends on its
predecessor in a non-backtracking way. There is, of course, the sequence
, and if
Suzy had not thrown (), her
rock would not have impacted the bottle (). However, if Suzy’s rock had not
impacted the bottle (), the
bottle would have shattered anyways (). The reason is that—on a
non-backtracking reading—if Suzy’s rock had not impacted the bottle
(), Billy’s rock would have
(). But if Billy’s rock had
impacted the bottle (), it would
have shattered (). By contrast to
scenarios of early preemption, there is no chain of stepwise dependences
that run from cause to effect: there is no sequence of non-backtracking
counterfactual dependences that links Suzy’s throw and the bottle’s
shattering.16
The counterfactual accounts of causation due to Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015)
solve the scenario of late preemption analogous to early preemption.
is a cause of because counterfactually depends on under the contingency that .
3.5 Simple Switch
In switching scenarios, some event helps to determine the causal path by
which some event is brought about
(Hall 2000,
205). The following neuron diagram represents a simple
version of a switching scenario:
Figure 5: A simple switch
The firing of neuron excites
’s firing, which in turn excites
neuron . At the same time, ’s firing inhibits the excitation of
. The neuron is a little special: it would have been
excited in case had not fired.
determines which one of and is firing, and thus determines the
causal path by which is excited.
We say acts like a switch as to
.
Let us supplement our neuron diagram by a story due to Hall (2007, 28).
Flipper is standing by a switch in the railroad tracks. A train
approaches in the distance. She flips the switch (), so that the train travels down the
right track (), instead of the
left (). Since the tracks
reconverge up ahead, the train arrives at its destination all the same
(). We agree with Hall that
flipping the switch is not a cause of the train’s arrival. The story
assumes that flipping the switch makes no difference to the train’s
arrival: “the train arrives at its destination all the same.” The
flipping merely switches the causal path by which the train arrives.17
Our recipe translates the neuron diagram of the switching scenario
into the following causal model : Relative to , is not a cause of . The reason is that there exists no
causal model uninformative on . Any complete extension of the empty
set of literals that
satisfies the structural equations of contains . In fact, there are only two complete
extensions that satisfy the structural equations, viz. the actual and the non-actual
. The
structural equations in determine
no matter what.18
Our analysis requires for to
be a cause of that there must be
a causal model uninformative about in which brings about . The idea is that, for an event to be
caused, it must arguably be possible that the event does not occur.
However, in the switching scenario, there is no causal model
uninformative on in the first
place. Hence, is not a cause of
in the simple switch.
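The simple switch also illustrates the role of uninformativeness in the sketch from above: since the equations determine the arrival no matter what, no partial model is uninformative on it, and the test cannot even get started (assumed names: S the flip, R and L the right and left track, E the arrival):

VARS_SW = ["S", "R", "L", "E"]
EQS_SW = {
    "R": lambda w: w["S"],            # flipping sends the train down the right track
    "L": lambda w: not w["S"],        # not flipping sends it down the left track
    "E": lambda w: w["R"] or w["L"],  # either way, the train arrives
}
ACTUAL_SW = {"S": True, "R": True, "L": False, "E": True}

arrives = lambda w: w["E"]
print(any(uninformative(VARS_SW, EQS_SW, partial, arrives)
          for partial in subsets(ACTUAL_SW)))                                  # False
print(is_cause_prelim(VARS_SW, EQS_SW, ACTUAL_SW, ("S", True), ("E", True)))   # False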
A simplistic counterfactual analysis says that an event is a cause of a distinct event just in case both events occur, and
would not occur if had not occurred. This suggests that
the switching scenario is no challenge for counterfactual accounts,
because would occur even if had not. And yet it turns out that
cases like the switching scenario continue to be troublesome for
counterfactual accounts.
Recall that Lewis
(1973)
defines actual causation to be the transitive closure of
non-backtracking counterfactual dependence between occurring events. In
the switching scenario, , and
occur, and both counterfactually depends on in a non-backtracking way and does so on . Barring backtracking, if had not fired, would not have fired. By the transitive
closure imposed on the one-step causal dependences, Lewis (1973) is
forced to say that is a cause of
.19
The sufficiency of (non-backtracking) counterfactual dependence for
causation is widely shared among the accounts in the tradition of Lewis,
for instance by Hitchcock (2001),
Woodward (2003),
Hall (2004, 2007), and
Halpern and
Pearl (2005). However, the counterfactual
accounts based on structural equations reject the transitivity of
causation. Still, Hitchcock (2001)
counts to be a cause of . The reason is that there is an active
causal path from over to and keeping the off-path variable fixed at its actual value induces a
counterfactual dependence of on
. Similarly, Halpern and Pearl
(2005) and Halpern (2015)
count to be a cause of , since counterfactually depends on under the actual contingency that . Hence, even the contemporary
counterfactual accounts misclassify to be a cause of .20 Allowing for actual
contingencies solved preemption, but leads to trouble in switching
scenarios. Without allowing for actual contingencies, it is unclear how
the counterfactual accounts solve preemption. It seems as if the
sophisticated counterfactual accounts have no choice here but to take
one hit.
3.6 Realistic Switch
The representation of switching scenarios is controversial. Some
authors criticize the simple switch in figure 5 from the previous section because they believe
that any “real-world” event has more than one causal influence (e.g., Hitchcock 2009,
396). The idea is that the train can only pass on the right
track because nothing blocks the track, it is in good condition, and so
on. These critics insist on “realistic” scenarios in which there is
always more than just one event that causally affects another. The
simple switch is thus inappropriate because there must be another neuron
whose firing is necessary for the excitation of . Some authors then quickly point out
that the causal model of the resulting switch is indistinguishable from
the one of early preemption (e.g., Beckers and
Vennekens 2018, 848–851). And this is a problem for any
account of causation that only relies on causal models. For should intuitively be a cause of in early preemption, but should not be a cause in a “realistic”
switching scenario.21
However, the claim that switches and early preemption are structurally
indistinguishable is too quick. After all, the critics who insist on
“realistic” scenarios are bound to say that there should also be another
neuron whose firing is necessary for the excitation of . This restores the symmetry between
and which seems to be essential to
switching scenarios. The following neuron diagram depicts our realistic
switch:
Figure 6: A realistic switch
The joint firing of neurons
and excites ’s firing, which in turn excites neuron
. At the same time, ’s firing inhibits the excitation of
. Had not fired, the firing of would have excited , which in turn would have excited . In the actual circumstances, determines which one of and is firing, and thus acts like a switch
as to .
Our recipe translates the neuron diagram of our realistic switch into
the following causal model : Relative to , is a cause of according to our preliminary analysis.
For this to be seen, consider the following causal model that is
uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies . Our preliminary
analysis wrongly counts the “realistic switch” as a cause of .
It is time to amend our preliminary analysis by a condition of
weak difference making. The idea is this: if some event is a cause of an event , then it is not the case that would be a cause of the same event
. Sartorio (2006, 75) convinces us that this
principle of weak difference making is a condition “the true analysis of
causation (if there is such a thing) would have to meet.”22
But this condition is violated by “realistic switches”: helps to bring about an effect , and so would the non-actual . So a “realistic switch” is not a
cause if we demand of any genuine cause of some effect that would not also bring about . We demand that would not also bring about by the following condition:
(C3) There is no
such that is uninformative on and satisfies .
(C3) demands that there is no causal model
uninformative on in which is actual if is. The condition ensures that a
cause is a difference maker in the weak sense that its presence and its
absence could not bring about the same effect. This implies Sartorio’s
principle of weak difference making: if is a cause of , then would not also be a cause of . And note that our condition of
difference making is weaker than the difference making requirement of
(sophisticated) counterfactual accounts of causation. Unlike them, we do
not require that is actual
under the supposition that
is actual (given certain contingencies).
(C3) ensures that is not a cause of in the realistic switch. For this to be
seen, consider the following causal model that is
uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies . Our preliminary
analysis amended by (C3) says that the “realistic
switch” is not a cause of , as desired.23
We will leave it as an exercise for the reader to check that (C3) does not undo any causes our preliminary
definition identifies in this paper, except for the “realistic
switches.”
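For the reader who prefers to check this mechanically, condition (C3) can be added to the earlier sketch, and the realistic switch can be encoded under one way of filling in figure 6: two auxiliary neurons, here called F and G, one necessary for each track. Both the equations and the names are our reconstruction, not the authors' model.

def violates_c3(variables, equations, actual, c, e):
    """(C3) is violated if some model uninformative on e lets the negation of c produce e."""
    holds_e = lambda w: w[e[0]] == e[1]
    for partial in subsets(actual):
        if not uninformative(variables, equations, partial, holds_e):
            continue
        if satisfies(variables, *intervene(equations, partial, {c[0]: not c[1]}), holds_e):
            return True
    return False

def is_cause(variables, equations, actual, c, e):
    """The preliminary analysis amended by (C3)."""
    return (is_cause_prelim(variables, equations, actual, c, e)
            and not violates_c3(variables, equations, actual, c, e))

# Realistic switch: S and F jointly excite the right track R; had S not fired,
# G would have excited the left track L; either track excites the arrival E.
VARS_RS = ["S", "F", "G", "R", "L", "E"]
EQS_RS = {
    "R": lambda w: w["S"] and w["F"],
    "L": lambda w: (not w["S"]) and w["G"],
    "E": lambda w: w["R"] or w["L"],
}
ACTUAL_RS = {"S": True, "F": True, "G": True, "R": True, "L": False, "E": True}

print(is_cause_prelim(VARS_RS, EQS_RS, ACTUAL_RS, ("S", True), ("E", True)))  # True: (C1)-(C2) pass
print(is_cause(VARS_RS, EQS_RS, ACTUAL_RS, ("S", True), ("E", True)))         # False: (C3) rules out the flip
print(is_cause(VARS, EQS, ACTUAL, ("A", True), ("C", True)))                  # True: overdetermination survives (C3)

On this reconstruction, the preliminary test counts the flip as a cause, (C3) rules it out, and the overdeterminers of section 3.1 survive (C3), as claimed in the text.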
Lewis’s (1973)
account misclassifies as a cause
of in our realistic switch. As in
the simple switch, there is a causal chain running from to : the sequence of actual events
such that each event (except )
counterfactually depends on its predecessor in a non-backtracking way.
Similarly, Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015)
all misclassify as a cause of
. The reasons are analogous to the
reasons in the simple switch. Roughly, counterfactually depends on when is fixed at its actual value.
3.7 Prevention
To prepare ourselves for a discussion of double prevention, let us
take a look at simple prevention first. Paul and Hall (2013,
174) represent the basic scenario of prevention by the
following neuron diagram:
Figure 7: Prevention
Neuron fires and thereby
inhibits that neuron gets
excited. would have been excited
by if the inhibitory signal from
were absent. But as it is, prevents from firing. That is, causes by prevention.
Our recipe translates the neuron diagram of prevention into the
following causal model : Relative to , is a cause of . For this to be seen, consider the
following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines to be true. In more formal terms,
satisfies . Moreover, is not a cause of relative to . Any causal model
uninformative on must be
uninformative on as well.
Intervening by in does not
determine .
Counterfactual accounts face no challenge here. If had not fired, would have fired. Counterfactual
dependence between actual events and absences is sufficient for
causation. Hence, is a cause of
. If had not fired, would not have fired, even under the
contingency that did not fire.
Hence, is not a cause of .
3.8 Double Prevention
Double prevention can be characterized as follows. is said to double prevent if prevents an event that, had it
occurred, would have prevented .
In other words, double prevents
if cancels a threat for ’s occurrence. Paul and Hall (2013, 154,
175) represent an example of double prevention by the
following neuron diagram:
Figure 8: Double prevention
’s firing prevents ’s firing, which would have prevented
’s firing. The example of double
prevention exhibits a counterfactual dependence: given that fires, ’s firing counterfactually depends on
’s firing. If did not fire, would fire, and thereby prevent from firing. Hence, ’s firing double prevents ’s firing in figure 8. In other words, ’s firing cancels a threat for ’s firing, viz. the threat originating
from ’s firing.
Paul and Hall
(2013) say that is a cause of in the scenario of figure 8. They thereby confirm that there is causation
by double prevention.
counterfactually depends on .
Hence, the accounts of causation due to Lewis (1973, 2000),
Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015)
agree with Paul and Hall in counting a cause of . How does our account fare?
Our recipe translates the neuron diagram of double prevention into
the following causal model : Relative to , is a cause of . For this to be seen, consider the
following causal model that is uninformative on . Intervening by yields: Obviously, this causal model determines and so to be true. In more formal terms,
satisfies .
3.9 Extended Double Prevention
Hall (2004, 247)
presents an extension of the scenario depicted in figure 8. The extended double prevention scenario fits
the structure of the following neuron diagram:
Figure 9: Extended double prevention
Figure 9 extends figure 8 by neuron , which figures as a common cause of
and .
starts a process via that
threatens to prevent . At the same
time, initiates another process
via that prevents the threat.
cancels its own
threat—the threat via —to prevent
. In the example of the previous
section, the threat originated independently of its preventer. Here, by
contrast, creates and cancels the
threat to prevent . This
difference is sufficient for not
to be a cause of , or so argue for
instance Paul and
Hall (2013,
216). Observe that the structure characteristic of double
prevention is embedded in figure 9. The
firing of neuron inhibits ’s firing that, had it fired, would have
inhibited ’s firing. Nevertheless,
this scenario of double prevention exhibits an important difference to
its relative of the previous section: does not counterfactually depend on
. If had not fired, would still have fired.
Hitchcock (2001,
276) provides a story that matches the structure of the
scenario. A hiker is on a beautiful hike (). A boulder is dislodged () and rolls toward the hiker (). The hiker sees the boulder coming and
ducks (), so that he does not get
hit by the boulder (). If the
hiker had not ducked, the boulder would have hit him, in which case the
hiker would not have continued the hike. Since, however, he was clever
enough to duck, the hiker continues the hike ().
Hall (2007, 36)
calls the subgraph a
short circuit with respect to : the boulder threatens to prevent the
continuation of the hike, but provokes an action that prevents this
threat from being effective. Like switching scenarios, the scenario
seems to show that there are cases where causation is not transitive:
the dislodged boulder produces
the ducking of the hiker , which
in turn enables the hiker to continue the hike . But it is counterintuitive to say that
the dislodging of the boulder
causes the continuation of the hike . After all, the dislodgement of the
boulder is similar to a switch as to the hiker not getting hit
by the boulder: helps to bring
about , and if were actual, would also help to bring about
. In this sense, is causally inert.
Our recipe translates the neuron diagram of the boulder scenario into
the following causal model : Relative to , is not a cause of . The reason is that the causal model
is only
uninformative on if is not in . But then does
not satisfy .
In words, the causal model is uninformative about only if is not in the set of literals. But then intervening
with does not make true. After all, is necessary for determining . If we were to keep in the literals, the model would not be
uninformative. There is no complete extension of that satisfies all the
structural equations of but fails
to satisfy .
On Lewis’s (1973)
account, is a cause of . There is a sequence of events
and absences such that each element (except ) counterfactually depends on its
predecessor in a non-backtracking way. The structural equation accounts
of Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015)
classify as a cause of . The reason is that counterfactually depends on under the contingency that .
The situation is bad for the sophisticated counterfactual accounts.
While their general strategy to allow for possibly non-actual
contingencies solves overdetermination and preemption, it is the very
same strategy that is at fault for the unintuitive results in the
switching scenario and extended double prevention. The backfiring of
their general strategy casts doubt on whether it was well motivated in
the first place. If the general strategy is merely motivated by solving
overdetermination, it turns out that overdetermination still haunts the
sophisticated accounts of causation. By contrast to these counterfactual
accounts, our analysis of actual causation solves overdetermination
without further ado. Our analysis has thus a major advantage over the
sophisticated counterfactual accounts.
4 Final Analysis
In section 2, we stated a preliminary version
of our analysis and amended it in section 3.6
by condition (C3). The amended version is still
preliminary because it assumes that both the cause and the effect are
single events. This assumption is violated in certain causal scenarios.
Recall, for instance, the scenario of conjunctive causes from section 3.2. There, two events are necessary for an
effect to occur, and so the set containing the two events should count
as a cause of said effect. To give an example, lightning resulted in a
forest fire only because of a preceding drought. Here, it seems
plausible that lightning together with the preceding drought is an—if
not the—cause of the forest fire.24
We lift the restriction of cause and effect to single literals as
follows. A cause is a set of literals , an effect an arbitrary Boolean
formula. Where is a set of
literals, stands for
the conjunction of all literals in and for the negation of all literals in . Our final analysis of actual causation
can now be stated.
Definition 3(Actual Cause). Let be a causal model such that satisfies .
is a set of literals and a formula. is an actual cause of relative to iff
(C1*) satisfies , and
(C2*) there is such that is
uninformative on , while
for all and all
non-empty ,
satisfies ; and
(C3*) there is no such
that
is uninformative on and
satisfies .
In this more general analysis, clause (C2*)
contains a minimality condition ensuring that any cause contains only
causally relevant literals. For this to be seen, suppose there is a set
whose members are
causally irrelevant for . That is, intervening by
in any partial model
uninformative on does
not make true (under
all interventions by actuality). Then, by the minimality condition,
would not be a cause, contrary to
our assumption. Thanks to this condition, causally irrelevant factors
cannot simply be added to genuine causes.25
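A sketch of the generalized test, under the same reconstruction: the cause is now a set of literals that is intervened with as a whole, and (C3*) intervenes with the negation of every literal in the set. We do not attempt to reproduce the minimality condition of (C2*) here, so the sketch deliberately omits it.

def is_cause_set(variables, equations, actual, cause, e):
    """cause: dict of literals; e: a single literal. The minimality clause of (C2*) is omitted."""
    holds_e = lambda w: w[e[0]] == e[1]
    holds_cause = lambda w: all(w[v] == b for v, b in cause.items())
    negated = {v: not b for v, b in cause.items()}
    # (C1*) the actual model satisfies the conjunction of the cause literals and e
    if not (satisfies(variables, equations, actual, holds_cause)
            and satisfies(variables, equations, actual, holds_e)):
        return False
    # (C3*) no model uninformative on e in which the negated cause produces e
    for partial in subsets(actual):
        if (uninformative(variables, equations, partial, holds_e)
                and satisfies(variables, *intervene(equations, partial, negated), holds_e)):
            return False
    # (C2*) without minimality: some uninformative model in which the whole set
    #       produces e under every intervention by actual literals
    for partial in subsets(actual):
        if not uninformative(variables, equations, partial, holds_e):
            continue
        if all(satisfies(variables, *intervene(equations, partial, {**cause, **i}), holds_e)
               for i in subsets(actual)):
            return True
    return False

# Conjunctive causes (figure 2): the set containing A and B is a cause of C.
VARS_CC = ["A", "B", "C"]
EQS_CC = {"C": lambda w: w["A"] and w["B"]}
ACTUAL_CC = {"A": True, "B": True, "C": True}
print(is_cause_set(VARS_CC, EQS_CC, ACTUAL_CC, {"A": True, "B": True}, ("C", True)))  # True

On this sketch, the set containing both conjunctive causes counts as a cause of the effect, as the final analysis intends.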
How do the counterfactual accounts fare with respect to sets of causes?
Let us consider the scenario of overdetermination. As explained in
section 3.1, Halpern’s (2015) account counts only the set of
individual causes as a genuine cause. The other counterfactual accounts
do not count this set as a cause. We think it is reasonable to recognize
both the individual causes and the set of these causes as a proper
cause. We would say that, for instance, two soldiers shooting a
prisoner, where each bullet is fatal without any temporal precedence, is
a perfectly fine cause for the death of the prisoner. The shooting of
the two soldiers brings about the death of the prisoner.
The account of Hitchcock (2001)
does not admit causes that are sets of variables. Hence, the set
containing the two individual causes does not count as a cause in the
scenarios of overdetermination and conjunctive causes. Unlike
Hitchcock’s account, the accounts due to Halpern and Pearl (2005) and Halpern (2015)
admit causes to be sets of variables. Still, these accounts do not
recognize the set containing the two individual causes as a cause in the
scenario of conjunctive causes. The accounts share the same minimality
condition according to which a strict superset of a cause cannot be a
cause. Hence, they are forced to say that, for instance, the drought
together with the lightning is not a cause of the forest fire
because one of these events (and indeed both) already counts as
a cause for this effect. This reason for why the set is not a cause is a
little odd.
5 Comparison
In this section, we compare our analysis to the considered
counterfactual accounts. First, we focus on the results of the different
accounts. Then we compare—on a conceptual level—our analysis to the
counterfactual accounts that rely on causal models.
5.1 Results
The results of our analysis and of the considered counterfactual
accounts are summarized in the following table. We abbreviate the
accounts of Lewis (1973),
Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015) by
’73, Hitch’01, HP’05,
and H’15, respectively.
Causes of or
’73
Hitch’01
HP’05
H’15
Author(s)
Overdetermination
–
Conjunctive Causes
Early Preemption
Late Preemption
–
Switches
–
Prevention
Double Prevention
E. Double Prevention
–
None of the counterfactual accounts listed in the table provides the
intuitively correct results for the simple and “realistic” switching
scenarios and extended double prevention. Lewis’s (1973) account misclassifies and as causes of , respectively, because of the
transitive closure he imposes on the step-wise and non-backtracking
counterfactual dependences. And without imposing transitivity, his
analysis of causation cannot solve early preemption. For Halpern (2015),
Hitchcock (2001)
and Halpern
and Pearl (2005), the reason for the
misclassification is that they allow for actual contingencies. And if
they were not to allow for such, their accounts would fail to solve
preemption. The counterfactual accounts due to Hitchcock (2001)
and Halpern
and Pearl (2005) solve overdetermination, but only
by allowing for even non-actual contingencies.
We have thus shown that the sophisticated counterfactual accounts
fail to capture the set of overdetermination, preemption, switches, and
extended double prevention. And they fail for a principled reason: they
can solve overdetermination and preemption only if they allow for
contingencies. But, by allowing for contingencies, they fail to solve
the switching scenario and extended double prevention. If they were not
to allow for contingencies, they would solve the switching scenario and
extended double prevention, but it would be unclear how they could solve
overdetermination and preemption. Our analysis, by contrast, does not
fall prey to such a principled problem.
Let us summarize the verdicts about the results, where , and ! stand for correct, false,
and partially correct, respectively.
Causes of or
’73
Hitch’01
HP’05
H’15
Author(s)
Overdetermination
!
Conjunctive Causes
!
!
!
!
Early Preemption
Late Preemption
Switch
Prevention
Double Prevention
E. Double Prevention
There remains another problem to be solved. The problem concerns any
account that relies on simple causal models which only factor in
structural equations and values of variables (or our sets of literals).
Such accounts face pairs of scenarios for which our causal judgments
differ, but which are structurally indistinguishable. Overdetermination,
for instance, is isomorphic to bogus prevention. In bogus prevention, an
event would prevent another event
. But, as it is, there is no event
present that would bring about
in the first place. Hence, the
preventer and the absence of
overdetermine that does not occur. By contrast to
overdetermination, however, the preventer is intuitively not a cause of the
absence . Since the accounts
of Hitchcock (2001)
and Halpern
and Pearl (2005) consider only structural equations
and the values of variables, they cannot distinguish between and one of the causes in
overdetermination. The former must be falsely classified as a cause
if the latter is correctly so classified.26
And our analysis has the same problem.27
Hitchcock (2007a),
Hall (2007), Halpern (2008),
Halpern
and Hitchcock (2015), and Halpern (2015)
all aim to solve the problem of isomorphism by taking into account
default or normality considerations. This additional factor gives
considerable leeway to solve some of the isomorphic pairs. However,
actual causation does not seem to be default-relative, as pointed out by
Blanchard
and Schaffer (2017). They also show that the accounts
amended by a notion of default still face counterexamples and even
invite new ones. Nevertheless, the problem of isomorphism suggests that
simple causal models ignore a factor that impacts our intuitive causal
judgments. We think this ignored factor is not default considerations,
but a meaningful distinction between events that occur and events that
do not. After all, a distinction between events and absences seems to be
part of the structure of causation. Yet current accounts relying on
causal models are blind to such a distinction.
Our analysis of causation is thus incomplete. We need to amend it by
a meaningful distinction between events and absences, which allows us to
tackle the problem of isomorphism. More generally, we miss an account of
what constitutes an appropriate causal model. That is, an account that
tells us which causal models are appropriate for a given causal
scenario. For now, we have just assumed that the causal models obtained
from simple neuron diagrams are appropriate. This assumption already
smuggled in certain metaphysical assumptions about events. We will
elaborate these underpinnings of our analysis elsewhere.
5.2 Conceptual Differences
Let us compare—on a more conceptual level—our analysis to the
counterfactual accounts that likewise rely on causal models. As we have
seen, these sophisticated counterfactual accounts analyse actual
causation in terms of contingent counterfactual dependence relative to a
causal model. Hitchcock (2001),
Halpern and
Pearl (2005), and Halpern (2015),
for instance, have put forth such accounts. All of these accounts have
in common that the respective causal model provides full information
about what actually happens, and what would happen if the state of
affairs were different. Hence, causal models allow them to test for
counterfactual dependence: provided and are actual in a causal model, would
be actual if were? If so, counterfactually depends on ; if not, not.
The mentioned accounts put forth more elaborate notions of
counterfactual dependence. These notions specify which variables other
than and are to be kept fixed by intervention
when testing for counterfactual dependence. The accounts ask a test
question for contingent counterfactual dependence: relative to a causal
model, where and are actual, would be actual if were under the contingency that
certain other variables are kept fixed at certain values? If so, counterfactually depends on under the contingency; if not, not. To
figure out whether is a cause of
, counterfactual accounts
propagate forward—possibly under certain contingencies—the effects of
the counterfactual assumption that a putative cause were absent.
We analyse, by contrast, actual causation in terms of production
relative to a causal model that provides only partial information. More
specifically, our analysis relies on models that carry no information
with respect to a presumed effect : they are uninformative as to whether
or not the event or absence is
actual. Such uninformative models allow us to test whether an actual
event or absence is actually produced by another. The test question goes
as follows: in a model uninformative on , will become actual if does? If so, is a producer of ; if not, not. And a producer is then a cause of if would not also be a producer of .
Our test does not require that
becomes actual if were
actual. Instead, the question is whether, in an uninformative model, an
actual event produces (and makes a weak difference to) another in
accordance with what actually happened. The novelty of our account is
not so much to consider actual production, but to consider production in
a causal model that is uninformative on the presumed effect. As a
consequence, when testing for causation, we never intervene on a causal
model, where the set of actual literals is complete. This stands in
stark contrast to counterfactual accounts which always intervene on
causal models, where each variable is assigned a value.
On our analysis, $c$ is a cause of
$e$ only if $c$ produces $e$ under all interventions by
actuality. There is a noteworthy symmetry to Halpern’s (2015)
account, which allows only for actual contingencies. On this account,
$c$ is a cause of $e$ if there is an intervention by
actuality such that the actual $e$
counterfactually depends on the actual $c$.28 Production under all
interventions by actuality is necessary for causation on our
account, whereas counterfactual dependence between actual events under
some intervention by actuality is sufficient on Halpern’s.
Counterfactual notions of causation generally say that a cause is
necessary for an effect: without the cause, no effect. By contrast, our
notion of causation says that a cause is sufficient for its effect given
certain background conditions. The background conditions are given by
the partial set of literals of the causal model that is uninformative on
the effect. By themselves, these conditions are not jointly sufficient for the
effect, given the structural equations. However, together with a genuine
cause, they are jointly sufficient for the effect (given the
same structural equations). Relative to the causal model uninformative
on the effect, a cause is thus necessary and sufficient for its
effect.29
6 Conclusion
We have put forth an analysis of actual causation. In essence, $c$ is a cause of $e$ just in case $c$ and $e$ are actual, and there is a causal model
uninformative on $e$ in which $c$ actually produces $e$, and there is no such uninformative
causal model in which $\neg c$ would
produce $e$. Our analysis
successfully captures various causal scenarios, including
overdetermination, preemption, switches, and extended double prevention.
All extant sophisticated counterfactual accounts of causation fail to
capture at least two of the causal scenarios considered. With respect to
this set, our analysis is strictly more comprehensive than those
accounts.
The sophisticated counterfactual accounts, which rely on causal
models, run into problems for a principled reason. They fail to solve
the switching scenario and extended double prevention because they allow
for possibly non-actual contingencies when testing for counterfactual
dependence. Such contingencies are needed to solve the problems of
overdetermination and preemption. Our analysis, by contrast, is neither
premised on counterfactuals of the form ‘If $\neg c$, then $\neg e$’, nor on considering possibly
non-actual contingencies. Hence, our analysis escapes the principled
problem to which the sophisticated counterfactual accounts are
susceptible.
The present analysis of causation has a counterfactual counterpart
due to Andreas
and Günther (2021a). The counterfactual analysis
likewise relies on the removal of information and on uninformative causal
models. The gist is this: an event $c$ is a cause of another event $e$ just in case both events occur,
and—after removing the information whether or not $c$ and $e$ occur—$e$ would not occur if $c$ were not to occur. This analysis does
not rely on the strategy common to the sophisticated counterfactual
accounts, and is therefore also not susceptible to their principled
problem.
The two analyses largely come to the same verdicts. However, unlike
the present preliminary analysis, the preliminary counterfactual
analysis cannot identify the overdetermining causes in scenarios of
symmetric overdetermination. And while, in the scenario of conjunctive
causes, the present final analysis counts the set containing both causes
as a cause, the final counterfactual analysis
does not. More importantly, the present final analysis does not count
“realistic switches” as causes, whereas the final counterfactual
analysis does. The present analysis therefore has a slight edge over its
counterfactual counterpart.
Appendix: The Framework of
Causal Models
In this appendix, we supplement the explanations of the core concepts
of causal models with precise definitions. Let $P$ be a set of propositional variables
such that every member of $P$
represents a distinct event. $\mathcal{L}_P$ is a propositional language
that is defined recursively as follows: (i) any $p \in P$ is a formula; (ii) if $\phi$ is a formula, then so is $\neg \phi$; (iii) if $\phi$ and $\psi$ are formulas, then so are $(\phi \land \psi)$ and $(\phi \lor \psi)$; (iv) nothing else is a
formula.
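Purely for illustration, the recursive clauses can be mirrored by a small datatype; this is a sketch of our own, assuming Python 3.10+, and the class names are not part of the paper's formalism. The later sketches in this appendix encode formulas more lightly, as Boolean predicates over a valuation.

```python
# Formulas of the language, mirroring clauses (i)-(iv) above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:            # clause (i): any propositional variable is a formula
    name: str

@dataclass(frozen=True)
class Not:            # clause (ii): negation
    sub: "Formula"

@dataclass(frozen=True)
class And:            # clause (iii): conjunction
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Or:             # clause (iii): disjunction
    left: "Formula"
    right: "Formula"

Formula = Var | Not | And | Or   # clause (iv): nothing else is a formula

# Example: the formula ¬A ∨ (B ∧ C)
phi = Or(Not(Var("A")), And(Var("B"), Var("C")))
```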
As is well known, the semantics of a propositional language centers
on the notion of a value assignment. A value assignment maps each
propositional variable to a truth value. We can represent a value
assignment, or valuation for short, in terms of literals. The function $\mathcal{L}$ yields, for each valuation $v$, the set $\mathcal{L}(v)$ of literals that
represents $v$.
Definition 4 ($\mathcal{L}(v)$). Let $v$ be a valuation of the language $\mathcal{L}_P$. $\mathcal{L}(v)$ is the set of literals of $\mathcal{L}_P$ such that, for any $p \in P$, (i) $p \in \mathcal{L}(v)$ iff $v(p) = T$, and (ii) $\neg p \in \mathcal{L}(v)$ iff $v(p) = F$.
We say that a set $V$ of literals
is complete—relative to $\mathcal{L}_P$—iff there is a valuation $v$
such that $V = \mathcal{L}(v)$. If the language is obvious from
the context, we simply speak of a complete set of literals, leaving the
parameter $\mathcal{L}_P$ implicit.
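As a concrete rendering of Definition 4 and of completeness, here is a sketch in which valuations are Python dictionaries and literals are strings such as "A" and "~A"; the encoding is of our own choosing.

```python
# Definition 4 and completeness, in a simple string encoding of literals.
from itertools import product

def literals_of(valuation):
    """L(v): the set of literals representing the valuation v."""
    return frozenset(p if value else "~" + p for p, value in valuation.items())

def is_complete(literal_set, variables):
    """A set of literals is complete relative to the given variables iff
    it is L(v) for some valuation v of those variables."""
    return any(literals_of(dict(zip(variables, row))) == frozenset(literal_set)
               for row in product([True, False], repeat=len(variables)))

print(literals_of({"A": True, "B": False}))   # e.g. frozenset({'A', '~B'})
print(is_complete({"A", "~B"}, ["A", "B"]))   # True
print(is_complete({"A"}, ["A", "B"]))         # False: B is left undetermined
```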
The function $\mathcal{L}$ defines a
one-to-one correspondence between the valuations of $\mathcal{L}_P$ and the complete sets of
literals. In more
formal terms, $\mathcal{L}$ defines a
bijection between the set of valuations of $\mathcal{L}_P$ and the set of complete
sets of literals.
Hence, the inverse function of $\mathcal{L}$ is well defined for complete sets
of literals. Using the inverse $\mathcal{L}^{-1}$ of
$\mathcal{L}$, we can define what it is for
a complete set $V$ of literals to
satisfy an $\mathcal{L}_P$ formula
$\phi$: $V \models \phi$ iff $\mathcal{L}^{-1}(V) \models_{PL} \phi$, where
$\models_{PL}$ stands for the
satisfaction relation of classical propositional logic. In a similar
vein, we define the semantics of a single structural equation: $V \models (p \Leftrightarrow \phi)$ iff $\mathcal{L}^{-1}(V)(p) = \mathcal{L}^{-1}(V)(\phi)$. In simpler terms, $V$ satisfies the structural equation $p \Leftrightarrow \phi$ iff both sides of the equation
have the same truth value on the valuation specified by $V$. We say that a set $V$ of literals satisfies a set $S$ of structural equations and literals
iff $V$ satisfies each member in
$S$. In symbols, $V \models S$ iff $V \models s$ for all $s \in S$.
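The two satisfaction relations can be sketched as follows, in the same string encoding of literals as above; formulas and the right-hand sides of structural equations are encoded as Boolean predicates over a valuation, and the single-equation example is hypothetical.

```python
# The satisfaction relations for complete sets of literals (sketch).

def valuation_of(literal_set):
    """The inverse of L: recover the valuation from a complete set of literals."""
    return {lit.lstrip("~"): not lit.startswith("~") for lit in literal_set}

def satisfies_formula(literal_set, phi):
    """V |= phi  iff  the valuation recovered from V satisfies phi."""
    return phi(valuation_of(literal_set))

def satisfies_equation(literal_set, variable, rhs):
    """V |= (p <=> phi)  iff  both sides have the same truth value."""
    v = valuation_of(literal_set)
    return v[variable] == rhs(v)

def satisfies_set(literal_set, equations, literals):
    """V |= S  iff  V satisfies every structural equation and literal in S.
    For a complete V, satisfying a literal just means containing it."""
    return (all(satisfies_equation(literal_set, p, rhs) for p, rhs in equations.items())
            and all(lit in literal_set for lit in literals))

equations = {"E": lambda v: v["A"] or v["B"]}               # E <=> A or B
print(satisfies_set({"A", "B", "E"}, equations, set()))     # True
print(satisfies_set({"A", "~B", "~E"}, equations, set()))   # False: E must equal A or B
```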
With these two relations of satisfaction in place, we can say what it is
for a causal model $\langle M, V \rangle$ to satisfy a Boolean formula $\phi$.
Definition 5 ($\models$). Let $\langle M, V \rangle$ be
a causal model relative to $\mathcal{L}_P$. $\langle M, V \rangle \models \phi$ iff
$C \models \phi$ for all complete
sets $C$ of literals such that
$V \subseteq C$ and $C \models M$.
The definition says that $\phi$ is
true in $\langle M, V \rangle$ iff it
is true in all complete interpretations that extend $V$ and that satisfy $M$. For complete models, the definition
boils down to this: $\langle M, V \rangle \models \phi$ iff $V \models \phi$ or
$V \not\models M$.
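Definition 5 can then be sketched by brute-force enumeration of the complete extensions. This is for illustration only, in the same encoding as above; the example model with a single equation is hypothetical.

```python
# Definition 5 (sketch): <M, V> |= phi iff every complete literal set C
# that extends V and satisfies the equations M also satisfies phi.
from itertools import product

def valuation_of(literal_set):
    return {lit.lstrip("~"): not lit.startswith("~") for lit in literal_set}

def model_satisfies(equations, V, phi, variables):
    for row in product([True, False], repeat=len(variables)):
        C = frozenset(p if value else "~" + p for p, value in zip(variables, row))
        if not V <= C:
            continue                      # C does not extend V
        val = valuation_of(C)
        if not all(val[p] == rhs(val) for p, rhs in equations.items()):
            continue                      # C does not satisfy M
        if not phi(val):
            return False
    return True

variables = ["A", "B", "E"]
equations = {"E": lambda v: v["A"] or v["B"]}    # E <=> A or B
V_partial = frozenset({"A", "B"})                # a model uninformative on E

print(model_satisfies(equations, V_partial, lambda v: v["E"], variables))      # True
print(model_satisfies(equations, V_partial, lambda v: not v["E"], variables))  # False
```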
It remains to define the notion of a submodel $M_I$ that is obtained by an intervention $I$
on a model $M$.
Definition 6 (Submodel $M_I$). Let $M$ be a set of structural equations of the
language $\mathcal{L}_P$. Let $I$ be a consistent set of literals. $M_I$ is a submodel of $M$ iff: $M_I = \{ (p \Leftrightarrow \phi) \in M \mid p \text{ does not occur in } I \} \cup I$.
A submodel $M_I$ has two types of
members: first, the structural equations of $M$ for those variables which do not occur
in $I$; second, the literals in $I$. Hence, the syntactic form of a
submodel $M_I$ differs from that
of a model $M$. If $I$ is non-empty, the submodel has at least one member that is not a
structural equation but a literal. The satisfaction relation $\models$ remains nonetheless well
defined. The reason is that $\models$ has been defined both for a structural equation and for an $\mathcal{L}_P$ formula.
The authors contributed equally. We would like to thank Frank
Jackson, Philip Pettit, Katie Steele, Atoosa Kasirzadeh, Cei Maslen,
Alan Hájek, Phil Dowe, and Daniel Stoljar for helpful comments on this
paper. We are furthermore grateful to the anonymous reviewers for
Dialectica. We are grateful for the opportunity to present parts
of this paper at the Philosophy Departmental Seminar at The Australian
National University, at the 2019 Annual Conference of the New Zealand
Association of Philosophers, and at the conference Bayesian
Epistemology: Perspectives and Challenges at the Munich Center for
Mathematical Philosophy.
Andreas, Holger and Günther, Mario. 2021b. “A Ramsey Test Analysis of Causation for Causal Models.”The British Journal for the Philosophy of Science 72(2): 587–615, doi:10.1093/bjps/axy074.
Blanchard, Thomas and Schaffer, Jonathan. 2017. “Cause without Default.” in Making a Difference. Essays on the Philosophy of Causation, edited by Helen Beebee, Christopher R. Hitchcock, and Huw Price, pp. 175–214. Oxford: Oxford University Press, doi:10.1093/oso/9780198746911.003.0010.
Collins, John David, Hall, Ned and Paul, Laurie A. 2004. “Counterfactuals and Causation: History, Problem, and Prospects.” in Causation and Counterfactuals, edited by John David Collins, Ned Hall, and Laurie A. Paul, pp. 1–57. Cambridge, Massachusetts: The MIT Press, doi:10.7551/mitpress/1752.003.0002.
Gallow, J. Dmitri. 2021. “A Model-Invariant Theory of Causation.”The Philosophical Review 130(1): 45–96, doi:10.1215/00318108-8699682.
Hall, Ned. 2000. “Causation and the Price of Transitivity.”The Journal of Philosophy 97(4): 198–222, doi:10.2307/2678390.
Hall, Ned. 2004. “Two Concepts of Causation.” in Causation and Counterfactuals, edited by John David Collins, Ned Hall, and Laurie A. Paul, pp. 225–276. Cambridge, Massachusetts: The MIT Press, doi:10.7551/mitpress/1752.003.0010.
Halpern, Joseph Y. 2008. “Defaults and Normality in Causal Structures.” in KR2008: Principles of Knowledge Representation: Proceedings of the Eleventh International Conference, edited by Gerhard Brewka and Jérôme Lang, pp. 198–208. Washington, D.C.: Association for the Advancement of Artificial Intelligence (AAAI), https://aaai.org/papers/kr08-020-defaults-and-normality-in-causal-structures/.
Halpern, Joseph Y. 2015. “A Modification of the Halpern-Pearl Definition of Causality.” in Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25-31 July 2015, edited by Qiang Yang and Michael J. Wooldridge, pp. 3022–3033. Menlo Park, California: The AAAI Press, https://www.ijcai.org/Proceedings/15/Papers/427.pdf.
Halpern, Joseph Y. 2000. “Axiomatizing Causal Reasoning.”Journal of Artificial Intelligence Research 12(1): 317–337, doi:10.1613/jair.648.
Halpern, Joseph Y. and Hitchcock, Christopher R. 2010. “Actual Causation and the Art of Modeling.” in Heuristics, Probability and Causality. A Tribute to Judea Pearl, edited by Rina Dechter, Héctor Geffner, and Joseph Y. Halpern, pp. 383–406. Tributes n. 11. London: King’s College Publications.
Halpern, Joseph Y. and Hitchcock, Christopher R. 2015. “Graded Causation and Defaults.”The British Journal for the Philosophy of Science 66(2): 413–457, doi:10.1093/bjps/axt050.
Halpern, Joseph Y. and Pearl, Judea. 2005. “Causes and Explanations: A Structural-Model Approach. Part I: Causes.”The British Journal for the Philosophy of Science 56(4): 843–887, doi:10.1093/bjps/axi147.
Hiddleston, Eric. 2005. “Causal Powers.”The British Journal for the Philosophy of Science 56(1): 27–59, doi:10.1093/phisci/axi102.
Hitchcock, Christopher R. 2001. “The Intransitivity of Causation Revealed by Equations and Graphs.”The Journal of Philosophy 98(6): 273–299, doi:10.2307/2678432.
Hitchcock, Christopher R. 2007a. “Prevention, Preemption, and the Principle of Sufficient Reason.”The Philosophical Review 116(4): 495–532, doi:10.1215/00318108-2007-012.
Hitchcock, Christopher R. 2007b. “What’s Wrong with Neuron Diagrams?” in Causation and Explanation, edited by Joseph Keim Campbell, Michael O’Rourke, and Harry S. Silverstein, pp. 69–92. Topics in Contemporary Philosophy n. 3. Cambridge, Massachusetts: The MIT Press, doi:10.7551/mitpress/1753.003.0006.
Hitchcock, Christopher R. 2009. “Structural Equations and Causation: Six Counterexamples.”Philosophical Studies 144(3): 391–401, doi:10.1007/s11098-008-9216-2.
Lewis, David. 1973. “Causation.”The Journal of Philosophy 70(17): 556–567. Reprinted, with a postscript (Lewis 1986b), in Lewis (1986a, 159–213), doi:10.2307/2025310.
Lewis, David. 1979. “Counterfactual Dependence and Time’s Arrow.”Noûs 13(4): 455–476. Reprinted, with a postscript, in Lewis (1986a, 32–51), doi:10.2307/2215339.
Lewis, David. 1986b. “Postscript to Lewis (1973).” in Philosophical Papers, Volume 2, pp. 172–213. Oxford: Oxford University Press, doi:10.1093/0195036468.001.0001.
Lewis, David. 2000. “Causation as Influence.”The Journal of Philosophy 97(4): 181–197, doi:10.2307/2678389.
Paul, Laurie A. 1998. “Problems with Late Preemption.”Analysis 58(1): 48–53, doi:10.1093/analys/58.1.48.
Woodward, James F. 2003. Making Things Happen. A Theory of Causal Explanation. Oxford Studies in the Philosophy of Science. Oxford: Oxford University Press, doi:10.1093/0195155270.001.0001.
Yablo, Stephen. 2002. “De Facto Dependence.”The Journal of Philosophy 99(3): 130–148, doi:10.2307/3655640.