
CADDIS Volume 1: Stressor Identification

Causal Assessment Background

Concepts of Causation

Causation is an ambiguous and contentious concept that is important to philosophers and to pure and applied scientists. As a result, causation has its own terminology, including arcane terms as well as common terms that are used in uncommon ways. These terms convey important concepts that should be considered when developing a method for causal analysis. This document explains how we view these concepts and how we address them within the CADDIS methodology (“we” refers to the three developers of the inferential approach in CADDIS). We define each term, explain our position concerning the concept, and provide a historical background for the concept. Throughout this module, causes and effects, as logical propositions or statistical parameters, are represented as C and E, respectively.

The following causal concepts are described here:

Agent causation
Analogy
Associationist causation
Confounding
Counterfactual causation
Covering law
Criteria, causal
Deterministic causation
Diagnosis
Directionality
Event causation
General causation
Hypothesis testing
Interaction
INUS
Manipulationist causation
Mechanistic causation
Model based causation
Multiple causation
Network causation
Pluralistic causation
Predictive performance
Probabilistic causation
Process connection
Regularity
Rejectionist causation
Specific causation
Teleological causation
Temporality

Agent Causation

(see also Event causation)

Definition

Agent causation is the concept that only “things” have the power to change the world (i.e., to be causes). Agent causation requires the specification of a “thing” that caused the effect. For example, the brick broke the window. Modern versions of agent causation are largely limited to purposeful agents that are self-directed (teleology). The brick did not break the window; the person throwing it did. Agent causation has been largely supplanted by event causation.

Our Position

We believe that the agent/event dichotomy in causal philosophy is analogous to the structure/function dichotomy in biology or the particle/wave dichotomy in quantum physics. They are different ways of looking at a phenomenon. Causal hypotheses (candidate causes) are often defined as agents (e.g., cadmium), but an event is at least implicit and should be described as well (e.g., aqueous exposure to cadmium). This is because much of the logic of causality (e.g., time order and directionality) depends on describing causation as a relationship between events. In addition, the same agent may be the cause of opposite effects depending on whether it is removed (decreasing) or applied (increasing), which are different events. However, the agent/event distinction may blur. For example, we may say that a storm caused benthic invertebrates to be scoured from the stream. The “storm” may be interpreted as an agent (perhaps even a named tropical storm) or as an event (the occurrence of a certain rate of precipitation for a certain duration). This definition of a cause as both an agent and an event or process has been termed dualistic causation (Campaner and Galavotti 2007). CADDIS assessments may define causes either way, depending on which is clearest and most natural for the case. However, in general, the causes in CADDIS should be defined as agents that participated in defined events.

Background

Aristotle and earlier philosophers were concerned primarily with the nature of the agents that caused effects. The requirement that an agent be specified when defining a cause became an important issue in science when Newton proposed his theory of gravitation without specifying an agent that caused the force. Leibniz considered this a fatal flaw in the theory, but Newton famously refused to frame hypotheses. Currently, agent causation as a philosophy of science is largely limited to psychology. If humans have free will, they cause things to happen by acting as free agents—not as part of a sequence of causal events (Botham 2008, O'Connor 1995).

Analogy

Definition

Similar causes have similar effects. For example, if the impairment of concern involves a large and rapid decline in the abundance of aquatic insects, and if insecticides have been found to cause similar declines in other cases, then by analogy, that evidence supports an insecticide as the cause.

Our Position

Although analogy is potentially a useful method for generating evidence, it has seldom been used in our case studies. However, analogies can be used to identify candidate causes or to support a candidate cause. Analogies begin with a well-defined causal relationship that serves as a model (Cm caused Em in conditions Xm). If a similar effect Es occurs in a case with similar circumstances Xs, then, by analogy, anything similar to Cm is supported as the cause in that case.

Background

Literary analogies are at least as old as written literature, and some of these have been causal. The most famous use of analogy in scientific causation is Darwin's analogy between selection by animal breeders and the processes that have caused the evolution of life (i.e., artificial and natural selection). Analogy appears as one of Hill's (1965) criteria for causation in epidemiology. However, it has been sharply criticized. “Whatever insight might be derived from analogy is handicapped by the inventive imagination of scientists who can find analogies everywhere. At best, analogy provides a source of more elaborate hypotheses about the association under study; absence of analogies only reflects lack of imagination or lack of evidence” (Rothman and Greenland 1998). However, analogy has been formalized by various means in the field of artificial intelligence, where it is referred to as case-based reasoning. Case-based reasoning uses the following general process:

  1. Retrieve the most similar case(s) by comparing the new case to the library of past cases;
  2. Use the retrieved case to try to solve the current problem;
  3. Revise and adapt the proposed solution if necessary; and
  4. Retain the final solution in the library of cases.

Examples include diagnostic systems that retrieve past medical cases with similar symptoms and assessment systems that determine the values of variables by searching for similar implementations of a model.
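
For illustration, the retrieval step can be sketched in a few lines of code. The following Python fragment is a minimal sketch, not part of CADDIS; the case library, the numeric features, and the use of Euclidean distance as the similarity measure are all invented for the example.

    import math

    # Hypothetical case library: each past case pairs a feature vector
    # (e.g., % decline in insects, conductivity, % agricultural land use,
    # all scaled 0-1) with the cause that was eventually identified.
    case_library = [
        {"features": [0.9, 0.2, 0.8], "cause": "insecticide"},
        {"features": [0.3, 0.9, 0.1], "cause": "ionic strength"},
        {"features": [0.5, 0.4, 0.2], "cause": "low dissolved oxygen"},
    ]

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve(new_features, library):
        """Step 1 of case-based reasoning: return the most similar past case."""
        return min(library, key=lambda c: distance(c["features"], new_features))

    match = retrieve([0.85, 0.25, 0.7], case_library)
    print("Most similar past case implicates:", match["cause"])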

Associationist Causation

(see Regularity)

Definition

A cause and effect must be associated in space and time. If association does not occur, given allowance for time delays and action at a distance, causation can be rejected. Causation is inferred if the association is regular (i.e., occurs in all relevant cases). In practice, it is also inferred in single cases if the association creates a distinct impression of causality, particularly if a mechanism is apparent (e.g., one might infer that an anvil on someone's head is the cause of death, even if that association has not been witnessed regularly). Synonyms include conjunction and co-occurrence.

Our Position

In specific cases, CADDIS requires that the candidate cause and the effect be associated in space and time (allowing for time lags and movement during those lags). Lack of association can refute a cause. However, association is weak positive evidence, particularly in specific cases. Regular association in similar cases (e.g., other streams in the region) is used as evidence that the association is causal in general.

Background

Hume famously argued that we observe association and infer causation. That is, the association is real and open to the senses, but causation is only an inference that we draw from associations. Most of the subsequent literature on causation has been an elaboration of or a response to that argument.

Confounding

Definition

Confounding is a bias in the analysis of causal relationships due to the influence of extraneous factors (confounders). Confounding may result from a common cause of both the putative cause and the effect or of the putative cause and the true cause. A synonym is spurious correlation, but that term is broader.

Our Position

Confounding is a common problem in ecoepidemiological studies. For example, we may wish to determine the effect of flashy hydrology on stream communities, but, because flashy flow patterns are found in urban and suburban streams, flow is confounded by temperature, channel modification, lawn chemicals, and other factors. Assessors may treat confounders as background, attempt to correct for them, censor them from the data set, or use a multivariate model that treats them all as causes. All of these are options in CADDIS. However, all but the first option require that the confounders be identified and quantified, which may not be possible in typical data-limited cases.
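
The urban stream example can be made concrete with a small simulation. In the following Python sketch (all coefficients are invented), urbanization drives both flashiness and temperature, but only temperature harms the biota; flashiness is nonetheless strongly correlated with the effect, which is the signature of confounding.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    # Hypothetical common cause: degree of urbanization of each watershed.
    urbanization = rng.uniform(0.0, 1.0, n)

    # Both stressors are driven by urbanization (coefficients are made up).
    flashiness = 0.8 * urbanization + rng.normal(0.0, 0.1, n)
    temperature = 0.7 * urbanization + rng.normal(0.0, 0.1, n)

    # In this toy world, only temperature actually harms the biota.
    richness = 30.0 - 10.0 * temperature + rng.normal(0.0, 1.0, n)

    # Yet flashiness is strongly, and spuriously, correlated with the effect.
    print("corr(temperature, richness):", round(float(np.corrcoef(temperature, richness)[0, 1]), 2))
    print("corr(flashiness, richness): ", round(float(np.corrcoef(flashiness, richness)[0, 1]), 2))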

Confounding is reduced by random assignment of treatments in experiments. However, it can still occur due to unintended factors in experimental treatments or bias in the administration of treatments. This is a particular problem in field experiments. For example, if experimentalists spread shade cloth over streams to reduce temperature, the effects of reduced temperature would be confounded by effects of reduced light for photosynthesis and exclusion of avian predators. Many experimental studies of the effect of diversity on the productivity of ecosystems suffered from confounding of the manipulation of diversity (Huston 1997). Hence, assessors must be on the lookout for confounding in all sources of data.

Background

The first description of confounding in scientific studies occurs in Mill (1843). The solution of randomized experiments with controls was developed by Fisher (1937). Greenland et al. (1999b) defined confounding in the context of counterfactual theory and distinguished the usual definition from the related concepts of non-collapsibility and aliasing. Renton (1993) listed three ways to identify confounders: (1) other factors known to cause the effect may confound the cause of interest, (2) factors known to be frequently associated with the cause may be confounders, and (3) factors that are known to interact with the mechanism of the cause may be confounders.

Counterfactual Causation

Definition

Had C not occurred, E would not have occurred; therefore, C must be a cause of E. For example, if the water had not been anoxic, the fish kill would not have occurred. This is the opposite of regularity of association as a definition of causation. A synonym is contrary-to-fact conditional.

Our Position

Counterfactual arguments have little direct applicability to causal analysis for cases of ecological impairment, so they are not discussed in the CADDIS methods. Because counterfactual arguments refer to a hypothetical state, we do not have counterfactual evidence concerning the causes of events that have occurred. That is, we do not have evidence of what would not have caused the impairment if it had not been present. For example, if a stream community is impaired and temperature is a candidate cause due to the lack of shading, we have no observation of that community without elevated temperature with which to evaluate the counterfactual case. Removing a candidate cause and observing the response is a manipulationist approach that does not directly address the counterfactual status of candidate causes of the specific observed impairment. It can answer the related counterfactual question: “Without the candidate cause, will the impairment continue?” Hence, it is relevant but indirect evidence. Counterfactuals can be directly evaluated in experiments and in experiments on models, as in Lewis's (1973) neuron diagrams or Pearl's (2000) directed acyclic graphs (see Network Causation).

Although counterfactual arguments are difficult when identifying causes, the manipulationist variant is inevitable when planning remedial actions and assessing risks and benefits. In practice, this analysis is performed using an exposure-response model. For example, if the suspended sediment concentration will be reduced to x/2 (a future equivalent of a counterfactual condition) from x, will the estimated number of taxa rise above the threshold for impairment?
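
A minimal sketch of that calculation follows; the log-linear functional form, the coefficients, and the impairment threshold of the exposure-response model are all hypothetical, chosen only to illustrate the comparison of current and remediated conditions.

    import math

    def expected_taxa(tss_mg_per_l):
        """Hypothetical exposure-response model relating total suspended
        solids (TSS) to expected number of EPT taxa; the log-linear form
        and the coefficients are invented for illustration."""
        return max(0.0, 22.0 - 6.0 * math.log10(max(tss_mg_per_l, 1.0)))

    current_tss = 80.0                # x: current condition, mg/L
    remediated_tss = current_tss / 2  # x/2: proposed condition
    threshold = 12.0                  # taxa count separating impaired/unimpaired

    for label, tss in [("current", current_tss), ("remediated", remediated_tss)]:
        taxa = expected_taxa(tss)
        status = "above" if taxa > threshold else "below"
        print(f"{label}: TSS = {tss:.0f} mg/L -> {taxa:.1f} taxa ({status} threshold)")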

Background

In the Enquiries, Hume proposed a counterfactual definition of causation but did not develop it as he did the associationist/regularist theory: “One object followed by another… where, if the first had not existed, the second had never existed.” Counterfactual arguments are inherently problematical because they depend on characterizing events that did not occur. This concept was revived relatively recently by Lewis (1973). The counterfactual definition is popular with philosophers because it seems to have fewer logical problems than regularist accounts of causation (Collins et al. 2004). However, there are conceptual objections as well as practical ones. One problem with the original alternative worlds approach is that it requires hypothesizing possible worlds in which C did not occur and demonstrating that in every one E did not occur. Clearly, defining an appropriate set of possible worlds presents difficulties, because, in general, a world without C would differ in other ways that are necessary to bring about the absence of C, which would have other consequences. Hence, Lewis developed the concept of similarity of worlds and of the nearest possible world. Also, counterfactual accounts of causation can result in paradoxes involving loss of transitivity, preemption and overdetermination (Cartwright 2007). For example, if the water had not been green, the DO sag that killed the fish would not have occurred (a good counterfactual argument). However, the green color was an effect of the algal bloom that caused the DO sag and was not itself a cause. Also, if two chemicals—each at lethal concentrations—are spilled into a stream, neither one is the counterfactual cause of the subsequent fish kill because even if one was absent the other would still have killed the fish. Menzies (2004) pointed out that counterfactual theories suffer from “the problem of profligate causes.” Many conditions must hold for a particular effect to occur, so which should be left out of the alternative world? Finally, to determine what would have happened in the counterfactual condition, philosophers appeal to causal laws or knowledge, so “counterfactual theories seem to require the knowledge they were intended to provide” (Wolff 2007).

In statistics, the potential outcomes analysis developed in the 1920s by Jerzy Neyman provided a method to analyze the difference between outcomes with and without a potentially causal factor (Rubin 1990). Holland (1986) demonstrated in his review of statistical approaches to causality that statistics can determine only the effects of causes (the difference between treatments), not the causes of effects, and it can do that only if homogeneity and independence of units can be assumed. That limits counterfactual statistics to replicated and randomized experiments. He also argued that attributes cannot be causes in the counterfactual sense. That is, we cannot say “Cheryl is empathetic because she is a girl,” because her gender could not be otherwise. Counterfactual causes must be things that could be, in principle, experimental treatments.
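
The logic of potential outcomes in a randomized experiment can be illustrated with a short simulation. In this Python sketch (all values are invented), each unit has both potential outcomes, random assignment determines which one is observed, and the difference in group means recovers the average effect of the cause.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 200

    # Potential outcomes for each unit; in reality only one of the two
    # is ever observed. All values here are invented for illustration.
    y_without = rng.normal(20.0, 3.0, n)            # outcome if untreated
    true_effect = -5.0
    y_with = y_without + true_effect + rng.normal(0.0, 1.0, n)

    # Random assignment makes the difference in group means an unbiased
    # estimate of the average effect of the cause (Holland's "effects of causes").
    treated = rng.random(n) < 0.5
    observed = np.where(treated, y_with, y_without)
    ate_hat = observed[treated].mean() - observed[~treated].mean()
    print(f"true effect: {true_effect}, estimated: {ate_hat:.2f}")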

Susser (2001) stated that counterfactual analysis is “unattainable in practice” in epidemiology but may be approximated by Bayesian adjustments. Greenland (2005) considers Neyman's potential outcome model to be equivalent to the sufficient-component cause models that are used in epidemiology. However, he admits that there are problems (particularly confounding) with using a reference case as equivalent to the case of concern without the cause.

Pearl (2000) resolves many of the conceptual problems with counterfactual accounts of causality by employing structural graphs and demonstrates that potential outcomes analysis and structural equation modeling are consistent. However, Pearl's approach still has limitations (Cartwright 2007). In addition, although Pearl showed that one can cleanly create a counterfactual condition in a model by “surgery on variables,” that does not resolve the problem of identifying real-world counterfactual cases.

Covering Law

Definition

Causes are instances of the action of scientific laws in relevant circumstances. That is, a causal explanation consists of deduction from one or more laws and one or more facts concerning relevant conditions. Synonyms include the deductive-nomological model and the inductive-statistical model (for deterministic and probabilistic laws, respectively) as well as Hempel's model, the Hempel-Oppenheim model and the Popper-Hempel model.

Our Position

Laws are seldom available to causally explain events in the environment—except in trivial cases (e.g., the polluted water flowed between points A and B because of gravitation [the law] and the slope between the points [the fact]). CADDIS treats empirical generalizations derived from environmental data as “evidence,” rather than as “laws.”

Background

This model of scientific explanation is implicit in scientific practice dating back at least to Newton, who famously refused to frame a hypothesis of what gravity or any of the other physical variables in his laws might actually be. The law itself was sufficient. The formal development of the idea is attributed to Hempel (1965), who considered it a complete theory of causal explanation. It was popular in the mid-twentieth century with philosophers of science, but since then its limitations have been recognized (Woodward 2003). In particular, causation in biological and social systems is too complex to be defined by scientific laws.

Criteria, Causal

Definition

Criteria are considerations that are employed to assist judgment concerning causation. Synonyms include guidelines, postulates, and considerations.

Our Position

Evaluation of the evidence in terms of a set of considerations (commonly termed criteria) is the best available method for weighing multiple lines of evidence. However, CADDIS evaluates “types of evidence,” which we distinguish from sources of evidence, qualities of evidence, and characteristics of causation. We follow Susser and Fox in evaluating the degree to which evidence meets the criteria using a scoring system.

Background

Mill (1843) provided the first set of causal criteria. Koch's postulates (aka the Henle-Koch postulates) are a set of three or four criteria (depending on the version) that together constitute a standard of proof for infectious agents as causes of disease. The Surgeon General's Committee and Austin B. Hill developed criteria to demonstrate that the body of evidence supported cigarettes as a cause of lung cancer (Hill 1965, U.S. Department of Health, Education, and Welfare 1964). Susser (1986a) extended Hill's criteria and added a scoring system. Many other authors, particularly epidemiologists, have developed lists of criteria, but these are the most often cited. Criteria have been adopted and adapted by ecologists for ecoepidemiology (Suter 1990, Fox 1991, U.S. EPA 1998, Suter 1998, U.S. EPA 2000).

Deterministic Causation

(see also probabilistic causation)

Definition

(1) Natural determinism is the position that the state of a system can, in principle, be fully explained by its state at the prior instant and knowledge of natural laws. (2) Causal determinism holds that the cause always induces the effect in appropriate conditions. However, because causation may be complex and nonlinear, causal determinism does not necessarily imply predictability.

Our Position

CADDIS is based on pragmatic determinism. In our conceptual approach, the cause determines the effect in the given context. We hold this position despite quantum indeterminacy, chaos theory, and uncertainty.

Quantum indeterminacy is the only source of true randomness, and it is irrelevant to us. Phenomena at our level are buffered from quantum indeterminacy—apparently by the effect of large numbers. We can predict and manipulate causal events at macro levels, because they are determinate.

Chaotic systems are effectively unpredictable because of imperfect knowledge of initial conditions and the properties of nonlinear systems that amplify small differences in conditions. However, there is no indeterminacy in chaotic models.

Determinism does not mean that we are not uncertain—only that our uncertainty is not due to inherent randomness. Our uncertainty is due to lack of knowledge—not a property of the system. Hence, if the cause does not consistently induce the effect, it is because we have incompletely specified the cause or the set of conditions in which it is effective.

Background

In the Physics, Aristotle stated that whatever “we ascribe to chance has some definite cause.” However, his determinism was associated with his teleology. A cause must induce its effect to fulfill its purpose.

Galileo, in the Dialogo sopra i due massimi sistemi del mondo (1632), rejected Aristotle's teleology and presented the first scientific concept of causation: a mechanistic and apparently deterministic theory in which, when the necessary conditions occur, the effect necessarily follows.

Hume argued in the Treatise that lack of regular association was due to hidden or unknown factors rather than chance. “What the vulgar call chance is nothing but a secret and concealed cause.”

The most famous statement of determinism is found in Laplace's Philosophical Essay on Probabilities (1820):

We ought to regard the present state of the universe as an effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present in its eyes.

This predictability is purely hypothetical, and we are not interested in determining the state of everything in the universe—only that of a relatively small system. Nevertheless, Laplace made the issue of determinism explicit.

Science is now in the situation of believing that the universe is fundamentally both deterministic (relativity theory) and probabilistic (quantum mechanics). In addition, the development of chaos theory showed that even if the universe is driven by deterministic laws, it can appear to be probabilistic.

Diagnosis

Definition

Diagnosis is the identification of a cause by recognizing characteristic signs and symptoms. Differential diagnosis is identification of a disease by comparing all diseases that might plausibly account for the known symptoms.

Our Position

Diagnosis is one of the three types of inference in the Stressor Identification and CADDIS protocols (the others being rejection and weighing of evidence). It is used primarily to identify the cause of death in fish kills. Community diagnostics that would identify causes of impairment based on changes in community composition have been proposed in the literature, but so far they are best suited to suggesting candidate causes.

Background

The diagnosis of disease based on characteristic symptoms is as old as the practice of medicine. The first fully natural theory of disease and diagnosis comes from the Hippocratic treatises. The current practice of medicine is based on the approach developed by William Osler in the late 19th century. It focuses on developing a diagnosis based on an algorithmic analysis of symptoms and the generation of symptoms through testing. Archibald Garrod extended diagnosis to include individual biochemical and genetic differences. In the last few decades, a theory of diagnosis has been developed within the field of artificial intelligence that is used in diagnostic expert systems (Reiter 1987). In addition, new diagnostic symptoms are being developed based on genomics, metabolomics, and proteomics.

Diagnostic protocols for nonhuman animals and plants are available in the veterinary, wildlife, fishery, and plant pathology literatures. For example, diagnostic criteria for lead poisoning in waterfowl include a hepatic lead concentration of at least 38 ppm and four characteristic symptoms (Beyer et al. 1998). Examples for chemically induced fish kills, from Norberg-King et al. (2005), are:

White film on gills, skin, and mouth: acids, heavy metals, trinitrophenols
Sloughing of gill epithelium: copper, zinc, lead, ammonia, detergents, quinoline
Clogged gills: turbidity, ferric hydroxide
Bright red gills: cyanide
Dark gills: phenol, naphthalene, nitrite, hydrogen sulfide, low oxygen
Hemorrhagic gills: detergents
Distended opercules: phenol, cresols, ammonia, cyanide
Blue stomach: molybdenum
Pectoral fins in extreme forward position: organophosphates, carbamates
Gas bubbles (fins, eyes, skin, etc.): gas supersaturation

In some cases, diagnostic syndromes have been identified as a result of ecoepidemiological studies. Perhaps the best known case is the Great Lakes embryo mortality, edema, and deformity syndrome (GLEMEDS) (Gilbertson et al. 1991). This syndrome has been identified in multiple species of fish-eating birds and has been associated with dioxin-like compounds, but it is characterized by more symptoms than the laboratory-induced effect of dioxin—chick edema syndrome.

Munkittrick and Dixon (1989a, 1989b) proposed a system to diagnose the causes of declines in fish populations. The causes are defined as a set of standard causal mechanisms, and the diagnostic criteria are based on a set of metrics commonly obtained in fishery surveys. This method was subsequently refined and expanded (Gibbons and Munkittrick 1994), applied to assessments of Canadian rivers (Munkittrick et al. 2000), and incorporated into the causal analysis component of the Canadian Environmental Effects Monitoring Program (Hewitt et al. 2005). Numerous metrics contribute to the symptomology, but they are condensed to three responses: age distribution, energy expenditure, and energy storage. The most recent list of types of causes is exploitation, recruitment failure, multiple stressors, food limitation, niche shift, metabolic redistribution, chronic recruitment failure, and null response.

Many investigators have attempted to perform community diagnostics by identifying changes in the composition of biotic communities that are symptomatic of particular causal agents. Those efforts have been reasonably successful for the organic loading that characterizes poorly treated sewage (see Hilsenhoff 1987). However, efforts to devise more general systems for community diagnostics have been less successful (see Chessman and McEvoy 1998, Norton et al. 2000, Riva-Murray et al. 2002, Yoder and Rankin 1995).

Directionality

Definition

The relationship between cause and effect is asymmetrical. Cold temperatures cause people to put on more clothes, but putting on more clothes will not decrease the temperature.

Our Position

Directionality is a central concept in causal theories, but it is not useful in identifying ecological causes. That is, studies of a case do not generate evidence of directionality—only of the similar but distinct concept of temporal sequence (or time order). Directionality becomes a conceptual problem when mechanisms are ambiguous. For example, does having more friends make people happier, or do inherently happy people attract more friends? This ambiguity is seldom a problem for ecological causes. Feedbacks may be problematical for general causation but not for actual specific cases. Attracting a friend makes a person happier, which attracts more friends, but a particular friendship is formed based on a particular condition and does not influence its own formation. Similarly, low dissolved oxygen may kill fish and decomposition of fish may lower dissolved oxygen, but a fish kill may not cause the dissolved oxygen sag that caused it. Hence, in actual cases, directionality is entrained with temporal sequence. In general causation, the ambiguity can be avoided by defining the cause more specifically. For example, the lowering of dissolved oxygen by fish decomposition may be defined as a different causal process, thereby restoring directionality without appealing to temporal sequence.

Background

Directionality is an ancient and fundamental concept that appears in most theories of causation. For example, Salmon (1984) argued that causes explain their effects and not vice versa. Similarly, Woodward (2003) pointed out that causes can be used to control their effects, but effects cannot, in general, be used to control their causes. However, it has not been well explained or even accepted by philosophers of science. Russell's (1957) attack on causality denied that physical laws were asymmetrical and hence denied causal directionality. Reichenbach (1958) argued that directionality is a characteristic of nature resulting from the temporal asymmetry imposed by the second law of thermodynamics. Pearl (2000) argued that directionality may also have psycholinguistic roots. Pearl has made directionality a fundamental characteristic of his causal models, directed acyclic graphs, but the directionality must be supplied as an input. As with regression models, the correlational structure in a data set will equally support either causal direction.

Event Causation

(see also Agent Causation)

Definition

Event causation follows Hume in stating that causation is a connection between one event and another. For example, the striking of the window by the brick (causal event) caused the breaking of the window (effect event). Events include states or standing conditions as well as discrete happenings. An event may be defined as an object assuming a property at a time (Kim 1993).

Our Position

CADDIS is concerned with identifying causal connections between events. For example, a pre-dawn dissolved oxygen sag caused a fish kill. However, descriptions of the events (i.e., pre-dawn sag and kill) must include agents (i.e., dissolved oxygen and fish), and, in many circumstances, agent causation is easier to express and analyze. In a causal assessment, we consider both evidence of events (e.g., time order) and evidence concerning the agents and entities involved (e.g., the dissolved oxygen content of water).

Background

Hume's event causation has largely replaced agent causation in modern philosophy. For example, the philosopher of science, Mario Bunge (1979) wrote “the causal relation is a relation among events.”

General Causation

(see also Specific Causation)

Definition

Causation is a consistent relationship (either invariant or probabilistic) between instances of the specified cause and effect. Synonyms include type-level causation, property causation, and causal laws.

Our Position

CADDIS is concerned with general causal relationships only to the extent that they provide evidence for specific causal analyses. That is, if a general exposure-response relationship from laboratory tests or field data is consistent with site data, that evidence supports the candidate cause. However, site conditions may make laboratory or regional relationships irrelevant. This is particularly true of impaired systems, which are likely to have causal relationships that are not represented by regional data.

Background

(see Specific Causation)

Hypothesis Testing

Definition

Hypothesis testing is a statistical technique that uses experimental data to determine whether a hypothesis is incorrect. Most commonly, a hypothesis of no effect is tested by determining whether data as extreme as, or more extreme than, those obtained in an experiment would occur with a prescribed low probability, given that the null hypothesis is true.

Our Position

Hypothesis testing is applicable only to experimental studies in which independent replicate systems are randomly assigned to treatments. Observational data, such as those from environmental monitoring studies, are inappropriate for hypothesis testing. Replicate samples in such studies are pseudoreplicates. Pseudoexperimental designs such as BACI (before-after control-impact) can reduce—but not eliminate—the likelihood that the study will be confounded (Stewart-Oaten 1996).

Even when experiments are used as supporting evidence, hypothesis testing is undesirable in CADDIS or any other assessment of environmental causes. The null hypothesis is meaningless, because all environmental variables that would be considered in a causal assessment have some effect that would be “significant” if enough samples were taken. We are interested in determining the relationship between the cause and effect (e.g., estimating a concentration-response relationship from test data), not in rejecting the hypothesis that the cause had no effect.
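
The point that any real but trivial effect becomes “significant” with enough samples can be shown with a short simulation. In this Python sketch (the effect size and sample sizes are arbitrary), a biologically negligible shift of 0.1 standard deviation yields ever-smaller p-values as n grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    effect = 0.1   # a biologically trivial shift, in standard deviations (illustrative)

    for n in (25, 100, 1000, 10000):
        control = rng.normal(0.0, 1.0, n)
        exposed = rng.normal(effect, 1.0, n)
        t, p = stats.ttest_ind(control, exposed)
        print(f"n = {n:>5}: p = {p:.4f}")   # p tends toward 0 as n grows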

Background

Statistical hypothesis testing was developed by Fisher (1937) to test causal hypotheses (e.g., does fertilizing with sulfur cause increased alfalfa production?) by asking whether the noncausal hypothesis is credible given experimental results. Neyman and Pearson (1933) improved on Fisher's approach by testing both the noncausal and causal models. Fisher's probabilistic rejection of hypotheses became even more popular as Popper's rejectionist theory of science caught on in the scientific community (see Rejection). It came to be taught as the standard form of data analysis in biostatistics courses. As a result, it was applied to test causal hypotheses in inappropriate data sets, including those from environmental monitoring programs. A fundamental conceptual flaw in this practice was pointed out by Hurlbert (1984), who invented the term pseudoreplication to describe the practice of treating multiple samples from a system as if they were from replicate systems. More fundamentally, hypothesis testing does not allow one to accept a causal hypothesis, does not indicate the strength of evidence for a causal hypothesis, and is based on the probability of the data given a hypothesis rather than the probability of the hypothesis given the data. Numerous critiques of hypothesis testing have demonstrated its flaws, but they have had little impact on environmental scientists (Anderson et al. 2000, Bailar 2005, Germano 1999, Johnson 1999, Laskowski 1995, Richter and Laster 2005, Roosenburg 2000, Stewart-Oaten 1995, Suter 1996, Taper and Lele 2004).

Interaction

(see also Mechanism and Process)

Definition

The cause physically interacts with the affected entity in a way that induces a change (the effect).

Our Position

Causes induce their effects through a physical interaction with the affected entity. The interaction may be described as a process or set of mechanistic events. For example, metals bind to ligands, reducing nutrient element uptake by ion channels, and elevated temperatures denature enzymes, reducing reaction rates. The evidence for interactions is still inferred from associations, but evidence of a mechanism of interaction at a lower level of organization strengthens the inference. Hence, in CADDIS, Evidence of Exposure or Biological Mechanism may strongly support causation.

Background

Hume famously argued that all we know of causation is regular co-occurrence from which we infer an interaction. While association has been considered sufficient for many scientific purposes (e.g., using empirical correlations), much of the development of science can be described as attempts to provide causal explanations that go beyond association. Newtonian physics and Newton's successors seemed to promise explanations in terms of covering laws (e.g., the force with which the apple hit the ground is caused by laws that cover the fall of apples as a particular case). However, there are no laws to cover most causal relationships of interest—even fairly fundamental ones like protein synthesis. The most common alternative source of causal explanations is reductionism. That is, the causal relationship is explained by processes and events that are more fundamental than the relationship itself. For background on these approaches, see Mechanism and Process Connection.

INUS

Definition

A cause is an Insufficient but Necessary part of a condition which is, itself, Unnecessary but Sufficient to result in the effect. Synonyms include component causes model and sufficient component causes model.

Our Position

This is an important part of CADDIS’s concept of causality. For example, if a release of copper “causes” a fish kill, that copper is insufficient because other conditions such as low pH, low dissolved organic matter, the presence of fish, the susceptibility of the fish, etc. must also occur. However, copper is necessary because the kill would not occur with only the other conditions. Further, although that set of conditions (copper and the others) is sufficient, it is unnecessary, because other sets of conditions could result in a fish kill. The identified cause is distinguished from the other conditions in the set by being the last to occur, by being the least common, by being anthropogenic, by being of regulatory concern, or by some other criterion.
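
The copper example can be written out as a toy logical model. In the following Python sketch, the two minimal sufficient sets are invented; the assertions check each clause of the INUS definition: copper is insufficient alone, necessary within its set, and the set itself is sufficient but unnecessary.

    def fish_kill(copper, low_pH, fish_present, anoxia):
        """Toy INUS structure (the sufficient sets are invented): the
        effect occurs if either minimal sufficient set is complete."""
        set_1 = copper and low_pH and fish_present   # copper is an INUS condition here
        set_2 = anoxia and fish_present              # an alternative sufficient set
        return set_1 or set_2

    # Copper alone is insufficient...
    assert not fish_kill(copper=True, low_pH=False, fish_present=True, anoxia=False)
    # ...but necessary within its set: without it, the set no longer suffices.
    assert not fish_kill(copper=False, low_pH=True, fish_present=True, anoxia=False)
    assert fish_kill(copper=True, low_pH=True, fish_present=True, anoxia=False)
    # The set is unnecessary: another set produces the kill without copper.
    assert fish_kill(copper=False, low_pH=False, fish_present=True, anoxia=True)
    print("The toy example satisfies each clause of the INUS definition.")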

Background

The INUS concept was developed by Mackie (1965, 1974). It is a formalization of the concept that a cause results in an effect only under appropriate conditions, which dates back at least to Hume. However, prior authors tended to treat other conditions as an unchanging background. Others, like Mill (1843), considered all preceding and contributing events and conditions to be causal. Mackie treated some conditions as background but others as variables that must be analyzed along with the nominal cause. He called the background the causal field, and the cause and other modeled conditions are the INUS conditions. Like other formal definitions of causation, INUS fails to describe some sorts of causation and creates illogical results for some conditions (Cartwright 2007, Pearl 2000).

This concept, in less formal or complete terms, occurs in other writings on causation. For example, Rothman (1986) has argued that causes are components of alternative minimal sufficient sets (his sufficient component causes model of causation).

Olsen (2003) argued that the INUS/sufficient component causes concept reconciles determinism with the practice of expressing epidemiological causes as probabilities. That is, probabilities are due to unknown or unmeasured component causes. However, others have argued that the determinism of this concept is unjustified and that it is better to accept inherently probabilistic causes than to hypothesize unknown component causes (Parascandola and Weed 2001).

Manipulationist Causation

Definition

Manipulationist causation is the proposition that we know that a causal relationship exists when we have manipulated C and observed a response E. Further, in cases of a network of multiple factors that jointly affect E, a manipulationist says that the cause is the thing that we manipulate. Symbolically, we distinguish interventional probabilities P(E|do C) from the simple conditional probability P(E|C). Intervention is often a synonym for manipulation although some authors distinguish the two.
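
The distinction between P(E|C) and P(E|do C) can be illustrated with a simulation. In this Python sketch (all probabilities are invented), a hidden common cause makes C and E associated in observational data, but when C is set by intervention, the association disappears because C has no actual effect on E.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    def simulate(intervene):
        """Toy world (all probabilities invented): a hidden factor U
        drives both C and E; C itself has no effect on E."""
        u = rng.random(n) < 0.5
        if intervene:
            c = rng.random(n) < 0.5                    # do(C): assignment ignores U
        else:
            c = rng.random(n) < np.where(u, 0.8, 0.2)  # observed C tracks U
        e = rng.random(n) < np.where(u, 0.7, 0.1)      # E depends only on U
        return c, e

    for label, intervene in [("P(E|C), observational  ", False),
                             ("P(E|do C), intervention", True)]:
        c, e = simulate(intervene)
        print(f"{label}: {e[c].mean():.3f}   baseline P(E): {e.mean():.3f}")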

Our Position

The goal of CADDIS is to identify causes that may be manipulated to restore the environment, so our causes are at least potentially manipulationist causes. Further, manipulations (both experiments and uncontrolled interventions) can provide particularly good evidence of causation. However, we do not require evidence from manipulations to identify the most likely cause.

Background

Ducheyne (2006) argued that Galileo was the first manipulationist. However, because Galileo's writings on this are ambiguous, the evidence is based primarily on his experimental practices.

Hume believed that the concept of causation arose from people's experience in manipulating things (causes) to achieve ends (effects).

Experiments are controlled manipulations, and Mill is the first philosopher of science to clearly argue the priority of experiments over uncontrolled observations. In A System of Logic, Deductive and Inductive (1843), he wrote “... we have not yet proved that antecedent to be the cause until we have reversed the process and produced the effect by means of that antecedent artificially, and if, when we do, the effect follows, the induction is complete ...”

Since Mill (and particularly since Fisher), experimental science has become the most reliable means of identifying causal relationships. However, when we extrapolate from experimental results to the uncontrolled real world, we run into the same old problem of induction. That is, we have no reliable basis for assuming that the causal relationship that we see in an experiment will hold in a real-world case. In fact, the problem is worse because experimental systems are usually simplifications of the real world. In addition, because of the complexity of ecological systems, the manipulations themselves may be confounded. For example, some experiments to determine whether diversity causes stability have actually revealed effects of fertility levels, nonrandom species selection, or other “hidden treatments” (Huston 1997).

Contemporary philosophers who support manipulationist theories of causation have run into criticisms that the theories are circular, because they make manipulation more fundamental than causation, but manipulation is inherently causal. Further, the concept of manipulation seems anthropocentric. However, these criticisms may be avoided by treating manipulation as a sign or feature rather than a definition of causation and by allowing natural manipulations and even hypothetical manipulations (Woodward 2003).

Pearl (2000) models causal relationships as networks with nodes connected by equations. Manipulation of the networks is performed through “surgery on equations.” This mathematical version of manipulation allows analysts to estimate intervention probabilities from data concerning conditional probabilities from observations.

Mechanistic Causation

(see also Interaction and Process Connection)

Definition

The mechanism is the physical means by which a cause induces the effect. The physical mechanism can be thought of as a series of events at a lower level of organization than the cause and effect events. In other words, “effects are produced by mechanisms whose operations consist of ordered sequences of activities engaged in by their parts” (Bogen 2004).

Our Position

The term mechanism is used in CADDIS as it is in toxicology, pharmacology, and other biological fields to describe the events at a lower level of organization that connect the cause and effect (the mechanism of action). For example, salmon perceive the lack of gravel, which changes their brain state, resulting in a change in behavior, and eggs are not deposited. In sum, a mechanistic analysis of a causal relationship is reductionistic. Every causal relationship in an ecological system can be reduced to a set of events involving entities at a lower level of organization. Because causes have physical mechanisms, observations of the products of a mechanism (e.g., low blood cholinesterase levels) or even the plausibility of a mechanism can be important evidence. However, some interactions are more readily defined as processes rather than as a series of events (i.e., process causation).

Knowledge of mechanisms has at least three uses in CADDIS:

  1. Mechanisms provide a description of a causal relationship at a lower level of organization which, if it is consistent with established science, increases the credibility of the relationship (i.e., Mechanistically Plausible Cause). Actual measurements of components of the mechanism provide even stronger evidence.
  2. Mechanistic knowledge at the same level of organization as the hypothesized causal relationship can fill in missing steps in the causal pathway (e.g., increased algal production is a step in the sequence between nutrient releases and low dissolved oxygen, but not a step in the sequence from organic matter releases to low dissolved oxygen).
  3. Knowledge of mechanisms allows the prediction of previously unobserved effects of a hypothesized cause.

Background

Enlightenment philosophers such as Leibniz and Laplace were metaphysical mechanists in that they considered the universe to be driven by physical interactions between entities. Newton's theories, which involved action at a distance, supplanted that concept in physics, and the rise of quantum mechanics further diminished the concept of mechanism in physics.

A known plausible mechanism has become one of the criteria for judging an empirical association to be causal in statistics (Mosteller and Tukey 1977) and epidemiology (Hill 1965, Russo and Williamson 2007, Susser 1986a).

An alternative to the concept of mechanism presented here is the definition of mechanism as the chain or network of events that precede the effect (Pearl 2000, Simon and Rescher 1966). A conceptual problem with the concept of mechanisms of a cause is determining when an event is a mechanism for a cause and when it may be considered a cause itself. Simon and Rescher formally addressed this problem in terms of the causal ordering (i.e., causal directionality) of a series of equations that define the mechanism. If we have a series of variables Vi that are dependent variables, then the last variable in the series that can be solved without solving for any of its successors can be treated as the cause. This definition is effectively the opposite of the definition of mechanism used here. That is, the mechanism is the series of events that lead up to—and include—the cause. The events between the cause and effect (which are the mechanism in our sense) can be ignored because, once an action has been taken, that action determines the causal event (the occurrence of the effect).

Model Based Causation

Definition

The most likely cause is the one that, when mathematically modeled, best fits and therefore best explains the data.

Our Position

This is, in theory, a very useful method. However, it requires that all causes be understood sufficiently to determine models and that data be available to parameterize the models. “The fish kill was caused by an unknown pathogen” is a legitimate causal hypothesis, but it does not lend itself to model-based inference. In addition, to statistically compare these models and identify the most likely cause, the same data set should apply to all alternatives. Otherwise, the relative likelihoods may be due to differences in the data applied to the models rather than the models themselves. Finally, there must be enough data to allow the statistics to distinguish among the models. These conditions are often met for biological resource or pest management problems such as setting limits on fisheries but not for contaminated or disturbed sites. Hence, there are no examples of this method in CADDIS.

Background

This approach began with Peirce's concept of the weight of evidence, which was revived by Good (1950). The weight of evidence for a hypothesis expressed as a mathematical relationship is the log of its likelihood relative to the likelihood of alternative hypotheses. This Bayesian statistical approach for comparing models has been largely replaced by an information theoretic approach expressed as the relative magnitudes of Akaike's information criterion for each model (Anderson 2008).
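
A minimal sketch of the information-theoretic comparison follows; the simulated data, the two single-cause linear models, and the Gaussian-likelihood AIC computation are illustrative only, not a CADDIS procedure.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 60

    # Simulated data: the biological response actually declines with
    # stressor A; stressor B is irrelevant. All values are invented.
    a = rng.uniform(0.0, 1.0, n)
    b = rng.uniform(0.0, 1.0, n)
    y = 10.0 - 6.0 * a + rng.normal(0.0, 1.0, n)

    def aic_linear(x, y):
        """AIC for a linear model y ~ x with Gaussian errors
        (k = 3 parameters: slope, intercept, error variance)."""
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        sigma2 = np.mean(resid ** 2)
        log_lik = -0.5 * len(y) * (np.log(2.0 * np.pi * sigma2) + 1.0)
        return 2 * 3 - 2 * log_lik

    print(f"AIC, cause = stressor A: {aic_linear(a, y):.1f}")
    print(f"AIC, cause = stressor B: {aic_linear(b, y):.1f}  (lower is better supported)")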

Multiple Causation

Definition

The term multiple causation is applied to two distinct situations:

  1. Plural causation refers to situations in which multiple causes may induce a general effect. For example, many different causes induce the effect “impaired stream” in different streams. Plural causation results from broadly defined effects. If we define the effect as reduced brook trout abundance, the number of causes is reduced. If we further define it as reduced abundance in the first kilometer of Red Brook in 2002, there is only a single cause, but it may be complex. Hence, plural causation is an issue only when deriving general causal relationships, not for specific cases.
  2. Complex causation refers to situations in which the cause has multiple components. For example, a fish kill may be due to the interaction of high temperatures and low dissolved oxygen. This is a single but complex cause.

Our Position

CADDIS is concerned with causation in specific cases, so there is no plural causation. However, there are multiple candidate causes, which should be reduced by defining the effect as specifically as possible. Complex causation may be minimized by carefully defining the set of agents and events that must combine to induce the effect. All constituents of a complex cause that are necessary for the effect must be included—but not background conditions and trivial contributors (see INUS). The point of causal analyses in CADDIS is to determine a sufficient intervention to eliminate the effect, not to completely define the agents in a system and their interactions. See Listing Multiple Stressors as Candidate Causes.

Background

Galileo recognized that there may be multiple (i.e., complex) causes but argued that there will be a primary cause that should be distinguished. He seems to imply an additive model of combined effects. Mill (1843) argued that the real cause of an effect is all antecedent conditions. This extreme of complex causation is in a sense monist. That is, there is only one cause, which is everything that has happened. In a metaphysical sense, most philosophers seem to agree with Mill (Lewis 1973). For example, Salmon (1984) developed an account of objective causation based on the concept of “complete causal structure,” which includes the entire network of causal processes in a convex chunk of space-time such as the universe. However, other philosophers have developed various strategies for reducing this complexity to manageable but multicausal systems (Lewis 1973, Mackie 1974, Pearl 2000).

Network Causation

Definition

Network models represent causation graphically: nodes represent entities or states connected by arrows that represent models of individual causal processes or probabilities of the implied processes. The advantages of network models are that, unlike equations, they convey directionality and make explicit the structure of interactions in multivariate causal relationships. Empirical methods for analyzing causal networks include path analysis, structural equation models, and Bayesian network analysis. Alternatively, a network can be modeled mechanistically through mathematical simulation (e.g., systems of differential equations), but that is the old-fashioned field of systems analysis. Causal diagram theory, based on directed acyclic graphs, can be used to analyze complex causal relationships without parametric assumptions such as linearity that are required by structural equation modeling (Pearl 2000, Spirtes et al. 2000).
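
For illustration, a small network can be evaluated by propagating probabilities along the arrows. The three-node structure and the conditional probabilities in this Python sketch are invented; it simply marginalizes over the parent nodes to obtain the probability of the effect.

    from itertools import product

    # Toy causal network (structure and probabilities invented):
    # Urbanization -> Flashy hydrology -> Invertebrate impairment
    p_urban = 0.3
    p_flashy_given_urban = {True: 0.9, False: 0.2}
    p_impaired_given_flashy = {True: 0.7, False: 0.1}

    # Marginalize over the parent nodes to get P(impairment).
    p_impaired = 0.0
    for urban, flashy in product([True, False], repeat=2):
        p_u = p_urban if urban else 1.0 - p_urban
        p_f = p_flashy_given_urban[urban] if flashy else 1.0 - p_flashy_given_urban[urban]
        p_impaired += p_u * p_f * p_impaired_given_flashy[flashy]

    print(f"P(impairment) = {p_impaired:.3f}")   # 0.346 for these numbers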

Our Position

Network models require that causal relationships be known or at least hypothesized. Analysis may be used to test the plausibility of the network structure or to determine the relative strength of the contributions of nodes in the network to the effect of interest, given the assumed causal structure. In general, the causes of ecological impairments are not sufficiently well known to confidently define the network for a particular case, and data are insufficient to quantitatively analyze the network. In addition, the application of network models to specific cases is problematical (Pearl 2000). However, general network models might be used like other general models to support the credibility of specific causal hypotheses. The conceptual models in CADDIS could potentially be subject to quantitative analysis, and we have explored both Bayesian analysis and structural equation modeling. We will continue to consider their utility.

Background

Analysis of causal networks began with Wright (1920, 1921), who developed path analysis (basically, a combination of directed graphs and regression analysis) to analyze the effects of genes and environment on phenotypes. It was first applied broadly in economics and the social sciences (beginning with Herbert Simon), where data sets are often large and include quantification of multiple causal factors. However, the most important technical developments and the most influential texts on causal networks come from the field of artificial intelligence (Pearl 2000, Spirtes et al. 2000). Statistical analysis is now typically performed by an extension of path analysis called structural equation modeling. The techniques are now being applied in various fields, and their development is very active and controversial. However, even qualitative analyses of causal networks can help to identify potential confounders and aid in the design of studies (Greenland et al. 1999a).

Pluralism

Definition

(1) Conceptual pluralism holds that causation has multiple distinct definitions that are all potentially legitimate and useful given different questions, bodies of evidence and contexts. (2) Ontological pluralism holds that there are multiple types of causes and of causation.

Our Position

We agree that there are multiple legitimate definitions of causation and multiple methods of causal analysis. The CADDIS approach is conceptually pluralistic in that we use evidence corresponding to all potentially relevant theories and definitions of causation. For example, we use evidence of association of C and E in the case, regularity of association in the region, and counterfactual evidence from experiments. However, we assume that in any real-world case, an effect has one cause (which may be complex) and that the different definitions are different representations of that actual relationship, not ontological alternatives.

Background

Since the late 1980s, many philosophers, led by Nancy Cartwright (2003), have come to believe that none of the attempts to reduce causality to a particular definition (counterfactual, probability raising, etc.) could succeed. Whether we accept ontological pluralism or not, we can use evidence to investigate causal relationships by applying the most appropriate concept of causality. Russo and Williamson (2007) argue that epistemic pluralism (the application of conceptual pluralism to the development of causal information) applies to the health sciences in practice and that it subsumes ontological pluralism: “The epistemic theory of causality can account for this multifaceted epistemology, since it deems the relationship between various types of evidence and the ensuing causal claims to be constitutive of causality itself. Causality just is the result of this epistemology.” Causal pluralism has been reviewed, and types of causal pluralism have been defined, by Campaner and Galavotti (2007) and Hitchcock (2007).

Predictive Performance

Definition

A causal hypothesis displays predictive performance if a prediction deduced from the hypothesis is confirmed by subsequent novel observations. Good predictive performance is considered by some to be an essential characteristic of a good scientific hypothesis or theory.

Our Position

Prediction is not a characteristic of causation or a causal theory, but Verified Prediction is one of the SI and CADDIS types of evidence. We believe that predicted observations are more powerful evidence than ad hoc causal explanations of existing observational data, because predictions cannot be fudged. That is, one can invent an explanation for any observation after the fact to make it fit a preferred causal hypothesis, but, if a prediction is made before the observation, it cannot be changed afterward to fit the results.

Background

In the Philosophical Essays, Leibniz wrote that “It is the greatest commendation of a hypothesis (next to proven truth) if by its help predictions can be made even about phenomena or experiments not tried.” However, Mill (1843) argued that a consequence already known has the same power to support a hypothesis as one that was predicted. Schlick (1931) argued that the formation and verification of predictions provided a greater rigor for the regularity theory of causation. Susser (1986b) wrote that “When it clearly produces new knowledge, the a priori character of the prediction is strongly affirmative, the more so in that it provides little opportunity for post hoc reasoning and avoids many biases that lurk in situations of which the scientist has foreknowledge.” Lipton (2005) argued that there is no fundamental advantage to evidence from predictions; evidence is just as good if it was generated before as after the hypothesis. However, he identified two legitimate arguments for giving more weight to evidence that is predicted.

(1) The weak argument—Evidence from predictions tends to be of better quality because the study is designed to test the prediction. In particular, one cannot choose a good control or reference without knowing what to control for, and that is not known until the causal hypothesis has been framed.
(2) The strong argument—Even the same evidence is better if it was predicted, because fudging is precluded.

Probabilistic Causation

(see also Deterministic Causation)

Definitions

  1. Metaphysical probabilism—Because of the inherent unpredictability of the world, effects can be predicted only as probabilities.
  2. Epistemological probabilism—Because of incomplete knowledge, causes are not determinate, but C is a cause of E if the occurrence of C increases the probability of E. That is, P(E|C) > P(E). This concept of causation is also referred to as probability raising (a minimal sketch follows these definitions).
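
To make probability raising concrete, here is a minimal sketch in Python; the site counts are hypothetical, chosen only for illustration:

```python
# Hypothetical 2x2 counts: candidate cause C vs. effect E at stream sites.
c_and_e = 30        # C present, E present
c_not_e = 10        # C present, E absent
not_c_e = 15        # C absent, E present
not_c_not_e = 45    # C absent, E absent

total = c_and_e + c_not_e + not_c_e + not_c_not_e

p_e = (c_and_e + not_c_e) / total            # P(E)   = 0.45
p_e_given_c = c_and_e / (c_and_e + c_not_e)  # P(E|C) = 0.75

# Under probability raising, C counts as a cause of E because P(E|C) > P(E).
print(f"P(E) = {p_e:.2f}, P(E|C) = {p_e_given_c:.2f}")
print("C raises the probability of E:", p_e_given_c > p_e)
```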

Our Position

We are not metaphysical probabilists. We do not believe that the macrocosm (things bigger than atoms) is inherently random. Further, chaotic systems (e.g., those with nonlinear dynamics) are unpredictable but inherently deterministic. In practice, chaotic indeterminism is not distinguishable from other sources of noise in field data and does not significantly influence our ability to identify causes. Because this metaphysical position implies that additional data collection and modeling can decrease uncertainty and drive probabilities of causation toward zero or one, CADDIS recommends iterative assessment when results are unclear.

CADDIS does not suggest that probability raising constitutes a definition of causation, because the apparent cause Co that is correlated with the effect may actually be correlated with the true cause Ct, and methods to prevent confounding are unreliable (a sketch of this objection follows). However, correlations and other expressions of the probability of association do provide evidence that can be useful in causal analyses.
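
The confounding objection can be illustrated with a minimal simulation; all probabilities below are hypothetical, chosen only to show the pattern:

```python
# Sketch of confounding: the true cause Ct alone produces E, and the
# apparent cause Co is merely correlated with Ct, yet Co still "raises
# the probability" of E.
import random

random.seed(1)
n = 100_000
c_t = [random.random() < 0.5 for _ in range(n)]                # true cause
c_o = [ct if random.random() < 0.8 else not ct for ct in c_t]  # correlated, non-causal
e = [ct and random.random() < 0.9 for ct in c_t]               # E depends only on Ct

p_e = sum(e) / n
p_e_given_co = sum(ei for ei, co in zip(e, c_o) if co) / sum(c_o)

# P(E|Co) > P(E) even though Co never causes E, so probability raising
# alone cannot distinguish causal from confounded associations.
print(f"P(E) = {p_e:.2f}, P(E|Co) = {p_e_given_co:.2f}")
```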

Finally, epistemological probabilism is important in “population-level” causation if the members of the population (e.g., streams in a region) differ in ways that affect their susceptibility to the cause.

Background

Karl Pearson was the father of causal probabilism. In The Grammar of Science, 3rd edition (1911), Pearson argued that all we know of causation is the probability of association.

In his arguments against smoking as a cause of lung cancer, Fisher pointed out that, in observational studies, correlation does not prove causation. Confounding is always possible. A genetic trait may cause lung cancer and also make a person more susceptible to nicotine addiction, or nicotine craving may be a symptom of early-stage lung cancer.

Some modern philosophers have argued that determinism is untenable and, therefore, probability raising (with temporal sequence) is the best definition of causation (Eells 1991a, Eells 1991b, Good 1961, Good 1983, Reichenbach 1958, Suppes 1970).

The major objection to probability raising is that it does not distinguish causal from non-causal associations (Holland 1986). Cartwright (2007) argued that probability raising is a sort of symptom of causation rather than a definition.

Process Connection

Definition

Causal relationships result from interactions that are physical processes, such as the exchange of energy or another conserved quantity (e.g., angular momentum between a flying baseball and a window) (Dowe 2000, Salmon 1998). Synonyms include process model, physicalist model, and mechanism.

Our Position

Many causes act through a physical process that exchanges some conserved quantity as in the philosophers' process theories of causation. For example, the transfer of solar energy to the sediment of a shallow stream is followed by a transfer of thermal energy to the water and then to fish, causing the fish to leave in search of cooler water. However, many causal relationships are not easily expressed as such an exchange (e.g., effects of fine sediment on lithophilic stream invertebrates). In such cases, it is more natural to speak of causal mechanisms rather than processes. When evidence of a process connection is available, it is treated as a variant of Evidence of Exposure or Biological Mechanism.

Background

Although Russell famously opposed the idea of causation, he attempted to develop a scientifically defensible theory of causation (Russell 1948). He defined causation as a series of events constituting a “causal line” or process. However, he did not distinguish between causal processes and pseudo-processes (Reichenbach 1958, Salmon 1984). The modern process theory of causation was developed by Salmon (1984, 1994, 1998). Salmon's causation originally involved spatiotemporally continuous processes that transmit an abstract property termed a mark. In response to Dowe (2000), he changed it to an exchange of invariant or conserved quantities such as charge, mass, energy, and momentum. However, some types of causation (e.g., blocking an event) are not causes in this theory, and causation in many fields of science is not easily portrayed as an exchange of conserved quantities (Woodward 2003). Numerous philosophers have published variants and presumed improvements on Salmon's and Dowe's process theory. Some psychologists and psycholinguists have adopted a version of the physical process theory of causation and argue, based on experiments, that people inherently assume that a process connection (their terms are force dynamics or the dynamics model) is involved in causal relationships (Pinker 2008, Wolff 2007). This causal intuition includes Salmon's and Dowe's physics but also encompasses, by analogy, intrinsic tendencies, powers, and even intentions.

Regularity

(see Associationist Causation)

Definition

Where and when the cause occurs, the effect always occurs; that is, the cause is regularly associated with the effect. This definition is now usually modified by requiring that conditions be appropriate. This concept is also known as regularist causation or regularity theory. However, association is a property of causation in a case, while regular association is a property of general causation.

Our Position

The CADDIS approach implies metaphysical but not epistemic regularity of causation. That is, when the full set of causal conditions occurs, the effect must occur, but we cannot rely on regular association of causes and effects in nature because of the complexity of conditions in nature that may modify or obscure a causal relationship.

Background

The regularity theory of causation is associated with Hume (1748). He defined “a cause to be an object followed by another and where all the objects similar to the first are followed by objects similar to the second.” The regularity theory dominated philosophies of causation until the development of counterfactual theory (Lewis 1973).

Rejectionist Causation

Definition

The only defensible form of scientific inference is to frame and reject hypotheses—including causal hypotheses. No amount of positive evidence can prove a hypothesis (at least one in the form of a scientific law), but, if all but one possible hypothesis can be rejected, the remaining hypothesis may be accepted. To use Popper's famous example, no number of observations of white swans can prove that all swans are white, but one black swan disproves it. Synonyms include falsification and refutation.

Our Position

Rejection is possible in specific cases if a cause is not possible in that case. Elimination of impossible or at least incredible candidate causes is one of the three methods of inference in the Stressor Identification and CADDIS methodology. For example, a cause that requires that contaminated water flow uphill (i.e., the source is downstream of the impairment) or that events in the past be changed (e.g., the effects began before the cause was invented) can be eliminated. This method cannot identify a cause, but it can shorten the list of candidates (see the sketch below). After that, positive evidence must be used.
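
As a minimal illustration, elimination can be expressed as filtering a candidate list; the candidate names and refuting checks below are hypothetical, not CADDIS outputs:

```python
# Sketch: eliminate candidate causes that are impossible in this case.
# Each candidate carries the results of two refuting checks.
candidates = [
    {"name": "metals from mine drainage",  "source_upstream": True,  "preceded_effect": True},
    {"name": "effluent from new facility", "source_upstream": True,  "preceded_effect": False},
    {"name": "spill below the impairment", "source_upstream": False, "preceded_effect": True},
]

def possible(c):
    # Refuted if the source is downstream of the impairment (water would
    # have to flow uphill) or if the effect began before the cause existed.
    return c["source_upstream"] and c["preceded_effect"]

remaining = [c["name"] for c in candidates if possible(c)]
print(remaining)  # ['metals from mine drainage']
# Elimination shortens the list; positive evidence must identify the cause.
```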

However, rejection is seldom helpful in general causal analysis (the sort of inference proposed by Popper and Platt) for environmental effects, because most biological effects have plural causes. For example, a nonsmoker with lung cancer does not disprove the hypothesis that smoking causes lung cancer. Rejection can be useful for general causation in cases with very specific effects that have only one cause.

Background

Rejection is a relatively recent concept, but it has been extremely influential. Popper (1968) and Platt (1964) argued that induction is unreliable and that the only reliable inferences are deductive. That is, we frame a causal hypothesis, deduce an observable consequence of the hypothesis, and then test it in a way that makes it likely to fail. If the hypothesis is rejected by the test, another hypothesis is derived and tested in the same way. We cannot prove that a hypothesis is true; we can only tentatively accept the hypothesis that has withstood the strongest tests.

Popper's and Platt's arguments were popular in the 1970s, but they have less influence today. Most scientists and philosophers of science now accept the need to make inductions from positive evidence. Many scientists—but not Popper and Platt—have treated Fisher's tests of null hypotheses as an implementation of the rejectionist philosophy. However, if you allow probabilistic rejection, you may as well allow induction (Greenland 1988).

Specific Causation

(see also General Causation)

Definition

  1. The concept of causation applies to specific events, not to general categories. That is, the cause of each event is unique.
  2. The concept that causal analysis for specific events must be different from analysis of general causes. Synonyms for specific causation include singular causation, actual causation, case causation, single-event causation and token-level causation.

Our Position

CADDIS is concerned with causes of specific effects in specific instances, and every instance of causation in ecosystems is, at some level, different. Although fine sediments have been shown to cause reduced trout abundance, that finding does not mean that reduced trout abundance at a site is due to fine sediments. However, the concept of general causation is useful and can contribute to the identification of specific causes. That is because we treat general causal relationships as supporting evidence from elsewhere—not as proof of causation.

Background

Most of the writings on causation are concerned with general causation, leading to useful generalizations and even scientific laws. Hume (1739) and others have argued that the problem with specific causation is that, without repetition, there is no basis for determining what aspects of an event are causal (i.e., necessary or sufficient). Hence, many philosophers have argued that we must derive general causal relationships and assume that instances follow general causal laws (i.e., covering laws). Singularists argue to the contrary that specific causal events are all we really know and that general relationships are unreliable abstractions (Armstrong 2004). Pearl (2000) argues that the distinction between specific causation and general causation is a matter of degree of detail in the causal model. “Thus, the distinction between type and token claims is a matter of degree in the structural account. The more episode-specific evidence we gather, the closer we come to the ideals of token claims and actual causes.” Mackie's (1965) INUS theory and Lewis's (1973) counterfactual theory were both developed to address specific causation.

Teleological Causation

Definition

Teleological causation is the idea that causes are purposeful.

Our Position

Because CADDIS provides evidence of natural causes, we do not include teleological arguments.

Background

The earliest concepts of causation were teleological (Frankfort 1946). That is, events were believed to be caused by conscious agents (gods, humans, demons, spirits, etc.). Hence, causal explanation was a matter of assigning responsibility.

One of Aristotle's four types of cause is the final cause of a thing, which is its purpose (telos).

Teleological causation in science was dismissed by Galileo in the Dialogue Concerning the Two Chief World Systems (1632). Teleology is now primarily associated with explaining human actions in psychology and the social sciences, and with theology.

Temporality

Definition

Causes precede their effects. A particularly cogent explanation of the concept is provided by Renton (1993): “A cause is unable to produce its effect before it exists itself and is therefore, by definition, existentially prior to it.” Synonyms include time order and antecedence.

Our Position

Temporality is a necessary result of event causation. That is, an effect event cannot precede its causal event. There may be some confusion within agent causation. An affected entity may precede (i.e., be older than) its causal entity. For example, the coral may be older than the anchor that destroys it. However, when event temporality is violated, the candidate cause is refuted. For clarity, temporality is called Temporal Sequence in CADDIS.

Background

Temporality appears on virtually every list of the characteristics of causation, and most theories of causation include it. For example, Suppes' (1970) theory of probabilistic causality requires that a cause raise the probability of its effect and also that it precede the effect. However, in a process model of causation, events need not be invoked, and temporality may be expressed as the simultaneous involvement of the cause and effect in the physical process rather than as time order (Wolff 2007).
