The problem of common cause arises in a situation where, for example, lightning (L) strikes the wooden framing (W) and causes it to burn (E) while also causing a short in the circuitry (C). If lightning always causes a short in the circuitry, but the short never has anything to do with a fire in these situations because the lightning starts the fire directly through its heating of the wood, we will nevertheless always find that C and E are constantly conjoined through the action of the lightning, suggesting that the short circuit caused the fire even though the truth is that lightning is the common cause of both.
But in the situation with the lightning, the fact that short circuits have the capacity to cause fires makes it less likely that we will realize that lightning is the common cause of both the short circuits and the fires. We might be better off in the case where the lightning split some of the wood framing of the house instead of causing a short circuit.
In that case, for a Humean, the constant conjunction of split wood and fires would still suggest causation as much as the constant conjunction of short circuits and fires. Indeed, the constant conjunction of storks and babies would be treated as probative of a causal connection. Since it is not unconditionally true that splitting wood causes fires, the presumption is that some such conditions can be found to rule out this explanation. Unfortunately, no set of conditions seems to be successful.
Part of the problem is that there are many different types of causal laws and they do not fit any particular patterns. For example, one restriction that has been proposed to ensure lawfulness is that lawlike statements should either not refer to particular situations or should be derivable from laws that do not refer to particular situations. By this standard, however, almost all social science laws, and many natural science laws, would be ruled out because they refer to particular situations. In short, logical restrictions on the form of laws do not seem sufficient to characterize causality.
The regularity approach also fails because it does not provide an explanation for the asymmetry of causation. When the sun rises, the shadow is long, at midday it is short, and at sunset it is long again. Intuition about causality suggests that the length of the shadow is caused by the height of the flagpole and the elevation of the sun. But, using INUS conditions, we can just as well say that the elevation of the sun is caused by the height of the flagpole and the length of the shadow.
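The INUS idea (due to J. L. Mackie) can be written compactly. In the sketch below, X and Y stand in for the accompanying conditions and alternative sufficient conditions; the notation is a gloss added for clarity, not the chapter's own:

```latex
% C is an Insufficient but Non-redundant part of an
% Unnecessary but Sufficient condition for E:
\[
E \iff (C \wedge X) \vee Y
\]
% C alone is insufficient (it needs X); it is non-redundant
% (C \wedge X is no longer sufficient without C); and C \wedge X is
% sufficient but unnecessary (the disjunct Y can also produce E).
% Nothing in this biconditional is asymmetric: it can be solved for
% the sun's elevation just as easily as for the shadow's length,
% which is exactly the flagpole problem described above.
```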
There is simply nothing in the conditions that precludes this fantastic possibility. The only feature of the Humean approach that provides for asymmetry is temporal precedence. If changes in the elevation of the sun precede corresponding changes in the length of the shadow, then we can say that the elevation of the sun causes the length of the shadow. And if changes in the height of the flagpole precede corresponding changes in the length of the shadow, we can say that the height of the flagpole causes the length of the shadow. But many philosophers reject making temporal precedence the determinant of causal asymmetry because it precludes the possibility of explaining the direction of time by causal asymmetry and it precludes the possibility of backwards causation.
From a practical perspective, it also requires careful measures of timing that may be difficult in a particular situation. This discussion reveals two basic aspects of the causal relation. One is a symmetrical form of association between cause and effect and the other is an asymmetrical relation in which causes produce effects but not the reverse.
The Humean regularity approach, in the form of INUS conditions, provides a necessary condition for the existence of the symmetrical relationship, but it does not rule out situations such as common cause and accidental regularities where there is no causal relationship at all. A great deal of what passes for causal modeling suffers from these defects (Freedman). The Humean approach does even less well with the asymmetrical feature of the causal relationship because it provides no way to determine asymmetry except temporal precedence. Yet there are many other aspects of the causal relation that seem more fundamental than temporal precedence.
Causes not only typically precede their effects; effects also depend upon causes, while causes do not depend upon effects. Thus, if a cause does not occur, then the effect will not occur because effects depend on their causes. However, if the effect does not occur, then the cause might still occur because causes can happen without leading to a specific effect if other features of the situation are not propitious for the effect.
For example, where a short circuit causes a wooden frame building to burn down, if the short circuit does not occur, then the building will not burn down. But if the building does not burn down, it is still possible that the short circuit occurred but its capacity for causing fires was neutralized because the building was made of brick.
This dependence of effects on causes suggests that an alternative definition of causation might be based upon a proper understanding of counterfactuals. The philosopher David Lewis (1973b) has proposed the most elaborately worked out theory of how causality is related to counterfactuals. If, for example, Bismarck decided for war and, as some historians argue, German unification followed because of his decision, then we must ask what would have happened had he decided otherwise. The counterfactual approach deals directly with singular causal events such as this, and it does not require the examination of a large number of instances of X and Y.
The counterfactual approach, however, starts with singular events and proposes that causation can be established without an appeal to a set of similar events and general laws. It only makes sense to evaluate a counterfactual in a world in which its premise is true. Thus, if X is a hammer blow and Y is a glass breaking, then the closest possible world is one in which everything else is the same except that the hammer blow does not occur. If in this world the glass does not break, then the counterfactual is true, and the hammer blow X causes the glass to break Y. The obvious problem with this approach is identifying the closest possible world.
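Lewis's truth conditions can be stated in the standard box-arrow notation for counterfactuals; the notation is an addition for clarity, not the chapter's own:

```latex
% "C causes E" requires, for distinct actual events C and E:
%   (1) if C had occurred, E would have occurred;
%   (2) if C had not occurred, E would not have occurred.
\[
O(C) \mathrel{\Box\!\!\to} O(E)
\qquad \text{and} \qquad
\neg O(C) \mathrel{\Box\!\!\to} \neg O(E)
\]
% A counterfactual "A box-arrow B" is true iff B holds in the closest
% possible world(s) where A holds. Since C and E actually occurred,
% (1) is trivially satisfied, and the work falls on (2): go to the
% closest world without the hammer blow and ask whether the glass breaks.
```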
If X is the assassination of Archduke Ferdinand and Y is the First World War, is it true that the First World War would not have occurred in the closest possible world where the bullet shot by the terrorist Gavrilo Princip did not hit the Archduke? Or would some other incident have inevitably precipitated the First World War? To solve these problems, both approaches must be able to identify similar causes and similar effects.
The Humean approach must identify them across various situations in the real world. In addition to identifying similar causes and similar effects, the Humean approach must determine whether the conjunction of these similar causes and effects is accidental or lawlike. This task requires understanding what is happening in each situation and comparing the similarities and differences across situations. The counterfactual approach must instead identify similar causes and effects across possible worlds, and this undertaking likewise requires understanding the facts of the real world and the laws that are operating in it.
Consequently, assessing the similarity of a possible world to our own world requires understanding the lawlike regularities that govern our world. Lewis has substituted one difficult problem for another, but the reformulation of the problem has a number of benefits. The counterfactual approach provides new insights into what is required to establish causal connection between causes and effects.
The counterfactual approach makes it clear that establishing causation does not require observing the universal conjunction of a cause and an effect. It proposes that causation can be demonstrated by simply finding a most similar world in which the absence of the cause leads to the absence of the effect. Consider the butterfly ballot used in Palm Beach County, Florida, in the 2000 presidential election, which apparently led some voters who intended to vote for Al Gore to mistakenly vote for Pat Buchanan. The ballot can be said to be causally associated with these mistakes if, in the closest possible world in which voters did not face the butterfly ballot, the mistakes did not occur.
Ideally this closest possible world would be a parallel universe in which the same people received a different ballot, but this, of course, is impossible. The next-best thing is a situation where similar people employed a different ballot. In fact, the butterfly ballot was only used for election day voters in Palm Beach County; it was not used by absentee voters. Consequently, the results for the absentee voting can be considered a surrogate for the closest possible world in which the butterfly ballot was not used, and in this absentee voting world, voting for Buchanan was dramatically lower, suggesting that the number of people who preferred Gore but mistakenly voted for Buchanan on the butterfly ballot was more than enough to give the election to Gore.
The difficult question, of course, is whether the absentee voting world can be considered a good enough surrogate for the closest possible world in which the butterfly ballot was not used. To do this, we can ask whether election day voters are different in some significant ways from absentee voters, and this question can be answered by considering information on their characteristics and experiences. The difficulties with the counterfactual definition are identifying the characteristics of the closest possible world in which the putative cause does not occur and finding an empirical surrogate for this world.
For the butterfly ballot, sheer luck led a team of researchers to discover that the absentee ballot did not have the problematic features of the butterfly ballot. Experiments provide a more systematic route to such surrogates. If, in those cases where the cause C occurs, the effect E occurs, then the first requirement of the counterfactual definition is met: when C occurs, then E occurs. And if the situations which receive the control are not different in any significant ways from those that get the treatment, then they can be considered surrogates for the closest possible world in which the cause does not occur.
If in these situations where the cause C does not occur, the effect E does not occur either, then the second requirement of the counterfactual definition is confirmed: in the closest possible world where C does not occur, E does not occur. The crucial part of this argument is that the control situation, in which the cause does not occur, must be a good surrogate for the closest possible world to the treatment.
Two experimental methods have been devised for ensuring closeness between the treatment and control situations. One is classical experimentation in which as many circumstances as possible are physically controlled so that the only significant difference between the treatment and the control is the cause. In a chemical experiment, for example, one beaker holds two chemicals and a substance that might be a catalyst and another beaker of the same type, in the same location, at the same temperature, and so forth contains just the two chemicals in the same proportions without the suspected catalyst.
If the reaction occurs only in the first beaker, it is attributed to the catalyst. The second method is random assignment of treatments to situations so that there are no reasons to suspect that the entities that get the treatment are any different, on average, from those that do not.
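The logic of the second method can be sketched in a few lines of code. The numbers below (baseline outcome, effect size, sample size) are invented for illustration; the point is only that random assignment makes the difference in mean outcomes between treatment and control track the true effect:

```python
import random

random.seed(1)

def run_experiment(n=1000, true_effect=2.0):
    """Randomly assign n units to treatment or control and compare
    mean outcomes. Baseline outcomes vary across units, but
    randomization balances them on average."""
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(10.0, 3.0)   # unit-specific level
        if random.random() < 0.5:            # random assignment
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(run_experiment())  # close to 2.0, the true effect
```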
We discuss this approach in detail below. Although the counterfactual definition of causation leads to substantial insights about causation, it also leads to two significant problems. Using the counterfactual definition as it has been described so far, the direction of causation cannot be established, and two effects of a common cause can be mistaken for cause and effect. Consider, for example, an experiment as described above. In that case, in the treatment group, when C occurs, E occurs, and when E occurs, C occurs.
Similarly, in the control group, when C does not occur, then E does not occur, and when E does not occur, then C does not occur. In fact, there is perfect observational symmetry between cause and effect which means that the counterfactual definition of causation as described so far implies that C causes E and that E causes C. The same problem arises with two effects of a common cause because of the perfect symmetry in the situation.
Consider, for example, a rise in the mercury in a barometer and thunderstorms. Each is an effect of a common cause: a change in atmospheric pressure. These problems bedevil Humean and counterfactual approaches. If we accept these approaches in their simplest forms, we must live with a seriously incomplete theory of causation that cannot distinguish causes from effects and that cannot distinguish two effects of a common cause from real cause and effect. That is, although the counterfactual approach can tell whether two factors A and B are causally connected in some way, it cannot tell whether A causes B, B causes A, or A and B are the effects of a common cause (sometimes called spurious correlation).
The reason for this is that the truth of the two counterfactual conditions described so far amounts to a particular pattern in the cross-tabulation of the two factors A and B. In the simplest case, where the columns are the absence or presence of the first factor A and the rows are the absence or presence of the second factor B, the same diagonal pattern is observed whether A causes B, B causes A, or A and B are the effects of a common cause.
In all three cases, we either observe the presence of both factors or their absence. It is impossible from this kind of symmetrical information, which amounts to correlational data, to detect causal asymmetry or spurious correlation. The counterfactual approach as elucidated so far, like the Humean regularity approach, only describes a necessary condition, the existence of a causal connection between A and B, for us to say that A causes B.
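This observational equivalence is easy to demonstrate with simulated data. In the hypothetical sketch below, the three data-generating processes (A causes B, B causes A, a common cause Z drives both) are deterministic for simplicity, and all three produce the same diagonal cross-tab:

```python
import random

random.seed(2)

def crosstab(pairs):
    """Count the four (A, B) combinations over 0/1 values."""
    table = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in pairs:
        table[(a, b)] += 1
    return table

n = 10000
# A causes B: B copies A exactly.
a_causes_b = [(a, a) for a in (random.randint(0, 1) for _ in range(n))]
# B causes A: A copies B exactly.
b_causes_a = [(b, b) for b in (random.randint(0, 1) for _ in range(n))]
# Common cause: Z drives both A and B.
common_cause = []
for _ in range(n):
    z = random.randint(0, 1)
    common_cause.append((z, z))

for name, data in [("A->B", a_causes_b), ("B->A", b_causes_a),
                   ("Z->A, Z->B", common_cause)]:
    print(name, crosstab(data))  # same diagonal pattern in all three
```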
Requiring temporal precedence can solve the problem of causal direction by simply choosing the phenomenon that occurs first as the cause, but it cannot solve the problem of common cause because it would lead to the ridiculous conclusion that since the mercury rises in barometers before storms, this upward movement in the mercury must cause thunderstorms.
For this and other reasons, David Lewis rejects using temporal precedence to determine the direction of causality. His alternative condition amounts to finding situations in which C occurs but E does not, typically because there is some other condition that must occur for C to produce E. Rather than explore this strategy, we describe a much better way of establishing causal priority in the next section. In an experiment, there is a readily available piece of information that we have overlooked so far because it is not mentioned in the counterfactual approach.
The factor that has been manipulated can determine the direction of causality and help to rule out spurious correlation. The manipulated factor must be the cause. Although philosophers are uncomfortable with manipulation and agency approaches to causality because they put people, as the manipulators, at the center of our understanding of causality, there can be little doubt about the power of manipulation for determining causality. Agency and manipulation approaches to causation (Gasking; von Wright; Menzies and Price) elevate this insight into their definition of causation.
Causation exists when there is a recipe that regularly produces effects from causes. Perhaps our ontological definitions of causality should not employ the concept of agency because most of the causes and effects in the universe go their merry way without human intervention, and even our epistemological methods often discover causes, as with Newtonian mechanics or astrophysics, where human manipulation is impossible. Yet our epistemological methods cannot do without agency because human manipulation appears to be the best way to identify causes, and many researchers and methodologists have fastened upon experimental interventions as the way to pin down causation.
These authors typically eschew ontological aims and emphasize epistemological goals. When full experimental control is not possible, Thomas Cook and Donald T. Campbell recommend quasi-experimental designs that approximate experimental control as closely as possible. This account of causality is especially compelling if the manipulation approach and the counterfactual approach are conflated, as they often are, and viewed as one approach. Philosophers seldom combine them into one perspective, but all the methodological writers cited above (Simon, Cook and Campbell, Mill, Sobel, and Cox) conflate them because they draw upon controlled experiments, which combine intervention and control, for their understanding of causality.
Through interventions, experiments manipulate one or more factors, which simplifies the job of establishing causal priority by appeal to the manipulation approach to causation. Through laboratory controls or statistical randomization, experiments also create closest possible worlds that simplify the job of eliminating confounding explanations by appeal to the counterfactual approach to causation.
The combination of intervention and control in experiments makes them especially effective ways to identify causal relationships. If experiments only furnished closest possible worlds, then the direction of causation would be indeterminate without additional information. If experiments only manipulated factors, then accidental correlation would be a serious threat to valid inferences about causality. Both features of experiments do substantial work.
Any approach to determining causation in nonexperimental contexts that tries to achieve the same success as experiments must recognize both these features. The methodologists cited above conflate them, and the psychological literature on counterfactual thinking cited at the beginning of this chapter shows that our natural inclination as human beings is to conflate them. When considering alternative possibilities, people typically consider nearby worlds in which individual agency figures prominently.
When asked to consider what could have happened differently in a vignette involving a drunken driver and a new route home from work, subjects focus on having taken the new route home instead of on the factors that led to drunken driving. They choose a cause and a closest possible world in which their agency matters. But there is no reason why the counterfactual approach and the manipulation approach cannot be kept analytically distinct. The counterfactual approach to causation emphasizes possible worlds without considering human agency, and the manipulation approach to causation emphasizes human agency without saying anything about possible worlds.
Experiments derive their strength from combining both theoretical perspectives, but it is all too easy to overlook one of these two elements in generalizing from experimental to observational studies. As we shall see in a later section, the best-known statistical theory of causality emphasizes the counterfactual aspects of experiments without giving equal attention to their manipulative aspects. Consequently, when the requirements for causal inference are transferred from the experimental setting to the observational setting, those features of experiments that rest upon manipulation tend to get underplayed.
Yet, in addition to the practical problems of implementing the recipe correctly, the experimental approach does not deal well with two related problems. It does not solve the problem of causal pre-emption, which occurs when one cause acts just before and pre-empts another, and it does not so much explain the causes of events as demonstrate the effects of manipulated causes. The problem of pre-emption illustrates this point. The following example of pre-emption is often mentioned in the philosophical literature.
A man takes a trek across a desert. His enemy puts a hole in his water can. Another enemy, not knowing the action of the first, puts poison in his water. Manipulations have certainly occurred, and the man dies on the trip. The enemy who punctured the water can thinks that she killed the man, and so does the poisoner. In fact, the water dripping out of the can pre-empted the poisoning, so that the poisoner is wrong.
The pre-emption problem is a serious one, and it can lead to mistakes even in well-designed experiments. Presumably the closest possible world to the one in which the water can has been punctured is one in which the poison has been put in the water can as well. Therefore, even a carefully designed experiment will conclude that the puncturing of the can did not kill the man crossing the desert, because the unfortunate subject in the control condition would die from poisoning just as the subject in the treatment would die from the hole in the water can.
The experiment alone would not tell us how the man died. A similar problem could arise in medical experiments. Suppose, for instance, that arsenic is given to patients with venereal disease, and that while the arsenic arrests the disease, the doses involved poison the patients. If the experiment simply looked at the mortality rates of the patients, it would conclude that arsenic had no medicinal value because the same number of people died in the two conditions. In both these instances, the experimental method focuses on the effects of causes and not on explaining effects by adducing causes. The method concludes that the hole had no effect. Instead of asking what caused the death of the patients with venereal disease, the experimental method asks whether giving arsenic to those with venereal disease had any net impact on mortality rates.
It concludes that it did not. In short, experimental methods do not try to explain events in the world so much as they try to show what would happen if some cause were manipulated. This does not mean that experimental methods are not useful for explaining what happens in the world, but it does mean that they sometimes miss the mark. The pre-emption problem is a vivid example of a more general problem with the Humean account that requires a solution. Even if we know that holes in water cans generally spell trouble for desert travelers, we still have the problem of linking a particular hole in a water can with a particular death of a traveler.
Douglas Ehring discusses this pairing problem: even given a general regularity, we must still pair a particular instance of the cause with a particular instance of the effect (Ehring). The solution in both these cases seems obvious, but it does not follow from the neo-Humean, counterfactual, or manipulation definitions of causality. The solution is to inquire more deeply into what is happening in each situation in order to describe the capacities and mechanisms that are operating.
An autopsy of the desert traveler would show that the person died of thirst, and an examination of the water can would show that the water would have run out before the poisoned water could be imbibed. An autopsy of those given arsenic would show that the signs of venereal disease were arrested while other medical problems, associated with arsenic poisoning, were present.
Further work might even show that lower doses of arsenic cure the disease without causing death. In both these cases, deeper inquiries into the mechanism by which the causes and effects are linked would produce better causal stories. But what does it mean to explicate mechanisms and capacities? Mechanical devices such as clocks and engines come to mind first, but there are many mechanisms in the social sciences as well, including markets with their methods of transmitting price information and bringing buyers and sellers together, electoral systems with their routines for bringing candidates and voters together in a collective decision-making process, and the diffusion of innovation through social networks (Hedström and Swedberg). As these examples demonstrate, mechanisms are not exclusively mechanical, and their activating principles can range from physical and chemical processes to psychological and social processes. Mechanisms provide another way to think about causation. These mechanisms, in turn, are explained by causal laws, but there is nothing circular in this because these causal laws refer to how the parts of the mechanism are connected. The operation of these parts, in turn, can be explained by lower-level mechanisms.
Consider explaining social phenomena by examining their mechanisms. Take the regularity, often called Duverger's Law, that single-member plurality electoral systems tend to produce two-party competition. The mechanism behind it involves voters and political parties. These entities face a particular electoral rule (single-district plurality voting) which causes two activities. One is that voters often vote strategically by choosing a candidate other than the one they most like, because they want to avoid throwing their vote away on a candidate who has no chance of winning and because they want to forestall the election of their least wanted alternative. The other activity is that political parties often decide not to run candidates when there are already two parties in a district because they anticipate that voters will spurn their third-party effort.
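The strategic-voting activity alone can be given a toy demonstration. The electorate, the rankings, and the abandonment rule below are all invented for illustration; the sketch shows only how defection from a trailing party concentrates votes on two parties:

```python
import random

random.seed(3)

# A toy electorate: each voter ranks three parties in a random order.
voters = [random.sample(["Left", "Center", "Right"], 3) for _ in range(9999)]

def tally(votes):
    counts = {"Left": 0, "Center": 0, "Right": 0}
    for v in votes:
        counts[v] += 1
    return counts

# Sincere voting: everyone votes for their first choice.
votes = [ranking[0] for ranking in voters]
print("sincere:  ", tally(votes))

# Strategic voting: supporters of the trailing party switch to their
# most-preferred viable party to avoid wasting their vote.
counts = tally(votes)
trailing = min(counts, key=counts.get)
votes = [r[0] if r[0] != trailing else next(p for p in r if p != trailing)
         for r in voters]
print("strategic:", tally(votes))  # the trailing party's vote collapses
```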
Thunderstorms are not merely the result of cold fronts hitting warm air or being located near mountains; they are the results of parcels of air rising and falling in the atmosphere subject to thermodynamic processes which cause warm humid air to rise, to cool, and to produce condensed water vapor. Among other things, this mechanism helps to explain why thunderstorms are more frequent in areas, such as Denver, Colorado, near mountains because the mountains cause these processes to occur—without the need for a cold air front.
Consider, similarly, the mechanism relating the temperature of a gas in a closed container to its pressure: when the temperature increases, the molecules move faster and exert more force on the container walls. Mechanisms like these are midway between general laws on the one hand and specific descriptions on the other, and activities can be thought of as causes which are not related to lawlike generalities. Mechanisms, therefore, provide a way to solve the pairing problem, and they leave a multitude of traces that can be uncovered if a hypothesized causal relation really exists. Earlier in this chapter, the need to rule out common causes and to determine the direction of causation in the counterfactual approach led us towards a consideration of multiple causes.
In this section, the need to solve the problem of pre-emption and the pairing problem led to a consideration of mechanisms. Together, these approaches lead us to consider multiple causes and the mechanisms that tie these causes together. Many different authors have come to a similar conclusion about the need to identify mechanisms (Cox; Simon and Iwasaki; Freedman; Goldthorpe), and this approach seems commonplace in epidemiology (Hill), where debates over smoking and lung cancer or sexual behavior and AIDS have been resolved by the identification of biological mechanisms that link the behaviors with the diseases.
We are now at the end of our review of four causal approaches. We have described two fundamental features of causality. One is the symmetric association between causes and effects. The other is the asymmetric fact that causes produce effects, but not the reverse. Regularity and counterfactual approaches do better at capturing the symmetric aspect of causation than its asymmetric aspect. The regularity approach relies upon the constant conjunction of events and temporal precedence to identify causes and effects. The counterfactual approach suggests searching for surrogates for the closest possible worlds where the putative cause does not occur to see how they differ from the situation where the cause did occur.
This strategy leads naturally to experimental methods where the likelihood of the independence of assignment and outcome, which ensures one kind of closeness, can be increased by rigid control of conditions or by randomly assigning treatments to cases. None of these methods is foolproof because none solves the pairing problem or gets at the connections between events, but experimental methods typically offer the best chance of achieving closest possible worlds for comparisons.
Causal approaches that emphasize mechanisms and capacities provide guidance on how to solve the pairing problem and how to get at the connections between events. Observations of the steps linking putative causes to their effects can be thought of as elucidations and tests of possible mechanisms. The other major feature of causality, the asymmetry of causes and effects, is captured by temporal priority, manipulated events, and the independence of causes.
Each notion takes a somewhat different approach to distinguishing causes from effects once the unconditional association of two events or sets of events has been established. Temporal priority simply identifies causes with the events that came first. If growth in the money supply reliably precedes economic growth, then the growth in the money supply is taken to be responsible for the economic growth. The manipulation approach identifies the manipulated event as the causally prior one. If a social experiment manipulates work requirements and finds that greater stringency is associated with faster transitions off welfare, then the work requirements are presumed to cause these transitions.
Finally, one event is considered the cause of another if a third event can be found that satisfies the INUS conditions for a cause and that varies independently of the putative cause. If short circuits vary independently of wooden frame buildings, and both satisfy INUS conditions for burned-down buildings, then both must be causes of those conflagrations.
Or if education levels of voters vary independently of their getting the butterfly ballot, and both satisfy INUS conditions for mistakenly voting for Buchanan instead of Gore, then both must be causes of those mistaken votes.
Now that we know what causation is, what lessons can we draw for doing empirical research? Regularity and mechanism approaches tend to ask about the causes of effects while counterfactual and manipulation approaches ask about the effects of imagined or manipulated causes. The regularity approach is at home with observational data, and the mechanism approach thrives on analytical models and case studies. Which method, however, is the best method? Clearly the gold standard for establishing causality is experimental research, but even that is not without flaws.
When they are feasible, well-done experiments can help us construct closest possible worlds and explore counterfactual conditions. But we still have to assume that there is no pre-emption at work. If, for example, we are studying the impact of a skill training program on the tendency for welfare recipients to get jobs, we should be aware that a very strong economy might pre-empt the program itself and cause those in both the treatment and control groups to get jobs.
As a result, we might conclude that skills do not count for much in getting jobs even though they might matter a lot in a less robust economy. Or if we are studying electoral systems in a set of countries with a strong bimodal distribution of voters, we should know that the voter distribution might pre-empt any impact of the electoral system by fostering two strong parties. Consequently, we might conclude that single-member plurality systems and proportional representation systems both led to two parties, even though this is not generally true.

The discussion so far suggests a checklist of questions to ask about any causal claim:

- What is the exact causal statement of how C causes E? What is the corresponding counterfactual statement about what happens when C does not occur?
- What is the causal field? What is the context or universe of cases in which the cause operates?
- Is there a constant conjunction (i.e., a regular association) of C and E? Is there a constant conjunction after controls for other causes are introduced?
- Can you describe a closest possible (most similar) world to the one where C causes E, but in which C does not occur? How close are these worlds? Can you actually observe any cases of this world or something close to it, at least on average? Again, how close are these worlds? In this closest possible world, does E occur in the absence of C?
- Are there cases where E occurs but C does not occur? What factor intervenes, and what does this tell us about C causing E?
- What does it mean to manipulate your cause? How would you describe the cause? Do you have any cases where C was actually manipulated? What was the effect? Is this manipulation independent of other factors that influence E?
- Can you explain, at a lower level, the mechanism(s) by which C causes E? Can you identify some capacity that explains the way the cause leads to the effect? Can you observe this capacity when it is present, and measure it?

If we add an investigation of mechanisms to our experiments, we might be able to develop safeguards against these problems.
For the welfare recipients, we could find out more about their job search efforts, for the party systems we could find out about their relationship to the distribution of voters, and for the teachers we could find out about their adoption of new teaching methods. Once we go to observational studies, matters get much more complicated. Spurious correlation is a real danger.
There is no way to know whether those cases which get the treatment and those which do not differ from one another in other ways. It is very hard to be confident that the requirements for an experiment hold, which are outlined in the next section and in Campbell and Stanley and Cook and Campbell. Because nothing has been manipulated, there is no surefire way to determine the direction of causation. Temporal precedence provides some information about causal direction, but it is often hard to obtain and interpret it.
Among statisticians, the best-known theory of causality developed out of the experimental tradition. The roots of this perspective are in Fisher and especially Neyman, and it has been most fully articulated by Rubin and Holland. In this section, which is more technical than the rest of this chapter, we explain this perspective, and we evaluate it in terms of the four approaches to causality.
There are four aspects of the Neyman–Rubin–Holland (NRH) approach, which can be thought of as developing a recipe for solving the causal inference problem by comparing similar possible worlds, if certain assumptions hold. A Counterfactual Definition of Causal Effect—Causal relationships are defined using a counterfactual perspective which focuses on estimating causal effects.
This definition alone provides no guidance on how researchers can actually identify causes because it relies upon an unobservable counterfactual. To the extent that the NRH approach considers causal priority, it equates it with temporal priority. An Assumption for Creating Comparable Mini-possible Worlds: Noninterference of Units (SUTVA)—Even if we could observe the outcome for some unit (a person or a country) both in the world with the cause present and in the world without it, it is possible that the causal effect would depend upon whether other units received the treatment or did not receive it.
For example, the impact of a training program on a child in a family might be different when the child and her sibling received the treatment than when the child alone received the treatment. The NRH counterfactual possible worlds approach assumes that this kind of interference does not occur by making the Stable Unit Treatment Value Assumption (SUTVA), which treats cases as separate, isolated, closest possible worlds that do not interfere or communicate with one another. The Independence of Assignment and Outcome—The counterfactual possible worlds approach not only assumes that units do not interfere with one another, it also assumes that a world identical to our own, except for the existence of the putative cause, can be imagined.
The NRH approach goes on to formulate a set of epistemological assumptions, namely the independence of the assignment of treatment and the outcome, or the mean conditional independence of assignment and outcome, that make it possible to be sure that two sets of cases, treatments and controls, only differ on average in whether or not they got the treatment. The definition of a causal effect based upon unobserved counterfactuals was first described in a paper published in Polish by Jerzy Neyman. Rubin and Heckman were the first to stress the importance of independence of assignment and outcome.
A number of experimentalists have developed these ideas further. Random assignment as a method for estimating causal effects was first championed by R. A. Fisher, and Holland provides the best synthesis of the entire perspective. The counterfactual definition of causality rests on the notion of comparing a world with the treatment to a world without it. The fundamental problem of counterfactual definitions of causation is the tension between finding a suitable definition of causation that controls for confounding effects and finding a suitable way of detecting causation given the impossibility of getting perfect counterfactual worlds.
As we shall show, the problem is one of relating a theoretical definition of causality to an empirical one. Consider a single unit A, a dichotomous manipulation Z_A (Z_A = 1 for the treatment, Z_A = 0 for the control), and the outcomes Y_A(1) and Y_A(0) that A would exhibit under each. In this case, we can define the causal effect as the difference between the outcome under treatment and the outcome under control; if this difference E_A is not 0, then the treatment has a net effect. Then, based on the counterfactual approach of David Lewis, there is a causal connection between the treatment and the outcome if two conditions hold: first, the treatment must be associated with a net effect, and second, the absence of the treatment must be associated with no net effect.
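The definition and the two conditions can be written compactly; the display below uses the potential-outcomes notation just introduced:

```latex
% Causal effect of the dichotomous manipulation Z_A on unit A:
\begin{equation}
E_A \;=\; Y_A(1) - Y_A(0) \tag{1}
\end{equation}
% Lewis-style causal connection between treatment and outcome:
%   (i)  when the treatment occurs, the net effect is present;
%   (ii) in the closest world where the treatment does not occur,
%        the net effect is absent.
% Together these amount to E_A \neq 0. They establish only a causal
% *connection*; the direction of causation and the exclusion of a
% common cause require Lewis's third condition, discussed next.
```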
Although the satisfaction of these two conditions is enough to demonstrate a causal connection, it is not enough to determine the direction of causation or to rule out a common cause. If the two conditions for a causal connection hold, then the third Lewis condition, which establishes the direction of causation and which rules out common cause, cannot be verified or rejected with the available information.
The third Lewis condition requires determining whether or not the cause occurs in the closest possible world in which the net effect does not occur. But the only observed world in which the net effect does not occur in the NRH setup is the control condition, where the cause is absent by construction. There is no way to test the third Lewis condition and to show that the treatment causes the net effect.
Alternatively, the direction of causation can be determined (although common cause cannot be ruled out) if the treatment is manipulated to produce the effect. The idea that an effect might precede a cause in time is regarded as meaningless in the model, and apparently also by Hume. As with the Lewis counterfactual approach, the difficulty with the NRH definition of causal connections is that there is no way to observe both Y_A(1) and Y_A(0) for any particular case. The typical response to this problem is to find two units A and B which are as similar as possible and to consider various possible allocations of the control and the treatment to the two units.
We shall say more about how to ensure this similarity later; for the moment, simply assume that it can be accomplished. But, as we shall see, this leads to fundamental problems regarding the definition of causality. With two units there are four possible worlds, corresponding to the four allocations of treatment and control: in the first, both A and B are given the control; in the second, A gets the control and B gets the treatment; in the third, A gets the treatment and B gets the control; and in the fourth, both units get the treatment. Writing Y_A(Z_A, Z_B) for A's outcome under allocation (Z_A, Z_B), the four worlds yield the outcomes Y_A(0,0), Y_A(0,1), Y_A(1,0), and Y_A(1,1), and similarly for B.
For each unit, there are then four possible outcome quantities. For each unit, there are six possible ways to take these four quantities two at a time to define a difference that could be considered the causal impact of Z_A, but not all of them make sense as a definition of the causal impact of Z_A.
The six possible differences for unit A are: Y_A(1,0) − Y_A(0,0); Y_A(1,1) − Y_A(0,1); Y_A(1,0) − Y_A(0,1); Y_A(1,1) − Y_A(0,0); Y_A(0,1) − Y_A(0,0); and Y_A(1,1) − Y_A(1,0). The two in which both Z_A and Z_B change at once make little sense as definitions of the causal impact of Z_A. In the first of these, Y_A(1,0) − Y_A(0,1), we are comparing the outcome for A in the world in which A gets the treatment and B does not with the world in which A does not get the treatment and B gets it. Suppose, for example, that A and B are siblings, adjacent plots of land, two students in the same class, two people getting a welfare program in the same neighborhood, two nearby countries, or even two countries united by common language and traditions.
Then for treatments as diverse as new teaching methods, propaganda, farming techniques, new scientific or medical procedures, new ideas, or new forms of government, it might matter for the A member of the pair what happens to the B member because of causal links between them. For example, if a sibling B is given a special educational program designed to increase achievement, it seems possible that some of this impact will be communicated to the other sibling A, even when A does not get the treatment directly.
Or if a new religion or religious doctrine is introduced into one country, it seems possible that it will have an impact on the other country. In both cases, it seems foolish to try to compare the impact of different manipulations of A when different things have also been done to B, unless we can be sure that a manipulation of B has no impact on A or unless we define the manipulation of B as part of the manipulation of A. This second possibility deserves some comment. If the manipulation of B is part of the manipulation of A, then we really have not introduced a new unit when we decided to consider B as well as A.
There are two lessons to be learned from this discussion. First, it is not as easy as it might seem to define isolated units, and the definition of separate units partly depends upon how they will be affected by the manipulation. Second, we are left with the following pairs as plausible definitions of the causal effect for each unit, depending upon what happens to the other unit. For A:

E_A = Y_A(1,0) − Y_A(0,0) or E_A = Y_A(1,1) − Y_A(0,1),   (2)

and for B:

E_B = Y_B(0,1) − Y_B(0,0) or E_B = Y_B(1,1) − Y_B(1,0).   (3)

Consider the definitions for A in (2).
Both definitions seem sensible because each one takes the difference between the outcome when A is treated and the outcome when A is not treated, but they differ on what happens to B. In the first case, B is given the control manipulation and in the second case, B is given the treatment manipulation. From the preceding discussion, it should be clear that these might lead to different sizes of effects.
The impact of a pesticide on a plot A, for example, might vary dramatically depending upon whether or not the adjacent plot B got the pesticide. The effect of a propaganda campaign might vary dramatically depending upon whether or not a sibling got the propaganda message. The impact on A of a treatment might depend upon what happens to B. Ideally, then, we would measure both differences in (2). But how could that be done? Neither can be measured directly, because each requires that the unit A both get and not get the treatment, which is clearly impossible.
In terms of our notation, the problem is that each difference above involves different values for Z_A and Z_B, and both states of the world cannot occur. What we need is an observationally feasible definition of causality. With two units and a dichotomous treatment, four states of the world are possible: (Z_A, Z_B) = (0,0), (0,1), (1,0), and (1,1). In each state we observe one outcome for A and one for B, and the difference between these two observed quantities in each of the four states is a candidate to be considered as a measure of causal impact.
The differences for the states in which both units get the control or both get the treatment do not offer much opportunity for detecting the causal impact of Z because there is no variability in the treatment between the two units. That leaves the states in which one unit is treated and the other is not; in the state (1,0), for example, the observable difference is E*_A = Y_A(1,0) − Y_B(1,0). Note that we denote this empirical definition of causality by an asterisk.
This difference is computable, but does it represent a causal impact? Intuitively, the problem with using it as an estimate of causal impact is that A and B might be quite different to begin with. Suppose we are trying to estimate the impact of a new teaching method.
Person A might be an underachiever while person B might be an overachiever. Hence, even if the method works, person A might score lower on a test after treatment than person B, and the method will be deemed a failure. Or suppose we are trying to determine whether a new voting machine reduces voting mistakes. County A might be very competent at running elections while county B might not be. Consequently, even if the machine works badly, county A with the new system might perform better than county B without it—once again leading to the wrong inference.
One of the problems is that preexisting differences between the units can confound causal inference. Surveying the four definitions of causal impact in equations (2) and (3) above, the empirical definition E*_A = Y_A(1,0) − Y_B(1,0) seems most closely related to E_A = Y_A(1,0) − Y_A(0,0). Thus we require that Y_B(1,0) = Y_A(0,0). We shall make the transformation of Y_B(1,0) into Y_A(0,0) in two steps. The first step assumes identicality: if A and B are identical in the relevant respects, their labels can be interchanged, so that Y_B(1,0) = Y_A(0,1). This assumption is commonly made in laboratory work where identical specimens are tested or where the impacts of different manipulations are studied for the identical setup.
It obviously requires a great deal of knowledge about what makes things identical to one another and an ability to control these factors. It is typically not a very good assumption in the social sciences. The second step is to equate Y_A(0,1) with Y_A(0,0). This is not innocuous: the impact Y_A(0,1) on Amy when Beatrice gets the treatment might be substantial—perhaps because what is done to Beatrice spills over to Amy.
But to get to the theoretical definition, we must therefore suppose that Y_A(0,1) = Y_A(0,0). In effect, this requires that we believe that the causal impact of manipulation Z_A on A is not affected by whether or not B gets the treatment.
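The two steps can be strung together as a chain of equalities, each labeled by the assumption that licenses it; this display is a reconstruction from the surrounding prose:

```latex
% From the observable comparison to the theoretical causal effect:
\[
E_A^{*}
  \;=\; Y_A(1,0) - Y_B(1,0)
  \;\stackrel{\text{identicality}}{=}\; Y_A(1,0) - Y_A(0,1)
  \;\stackrel{\text{SUTVA}}{=}\; Y_A(1,0) - Y_A(0,0)
  \;=\; E_A
\]
% Identicality licenses swapping the labels A and B; noninterference
% (SUTVA) says that B's treatment status does not affect A's outcome.
```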
As we have already seen, this is a worrisome assumption, and we shall have a great deal to say about it later. In addition, we need to assume that the causal impact of manipulation Z_B on B is not affected by whether or not A gets the treatment. To summarize, to get a workable operational definition of causality, we need to assume that chains of equalities such as Y_B(1,0) = Y_A(0,1) = Y_A(0,0) hold, where the first equality in each such line holds true if we assume identicality and the second holds true if we assume noninterference (SUTVA).
Surveying the four theoretical definitions of causal impact in equations (2) and (3) above, the empirical definition for the state (0,1) is related to the theoretical definitions in exactly the same way, and to make it work we require analogous identicality and noninterference conditions. It is clear that the assumptions of noninterference (SUTVA) and identicality are sufficient to define causality unambiguously, but are they necessary?
They are very strong assumptions. Can we do without one or the other? Suppose first that we keep noninterference (SUTVA) but drop identicality. Under noninterference, what happens to the other unit does not matter, and we get the comforting result that the two theoretical definitions of causal impact for A in (2) above are identical, as are the two for B in (3) above: Y_A(1,0) − Y_A(0,0) = Y_A(1,1) − Y_A(0,1), and likewise for B. Since these equations hold, we can drop the second index and denote the common causal effects as simply E_A = Y_A(1) − Y_A(0) and E_B = Y_B(1) − Y_B(0). Can we get around identicality? Consider the following maneuver. Define the average causal effect as the average of the two unit effects: ACE = (E_A + E_B)/2. Unfortunately, we cannot observe ACE, and we do not want to assume identicality. We can, however, do the following. Randomly assign the treatment to one of the two units and the control to the other, and take the difference between the treated unit's observed outcome and the control unit's observed outcome as an estimate of ACE. Randomization in this way ensures that which unit gets the treatment is statistically independent of the units' potential outcomes. The virtue of this estimate is that it is a statistically unbiased estimate of the average impact of Z_A on A and Z_B on B. That is, in repeated trials of this experiment (assuming that repeated trials make sense), the expected value of the estimate will be equal to the true average causal effect.
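A small Monte Carlo makes the unbiasedness claim concrete. The potential outcomes below are invented, and noninterference is built in by construction, since each unit's outcome depends only on its own treatment:

```python
import random

random.seed(4)

# Fixed potential outcomes for two units (noninterference built in:
# each unit's outcome depends only on its own treatment status).
Y = {"A": {0: 3.0, 1: 7.0},   # unit effect E_A = 4
     "B": {0: 5.0, 1: 6.0}}   # unit effect E_B = 1
ACE = ((Y["A"][1] - Y["A"][0]) + (Y["B"][1] - Y["B"][0])) / 2  # 2.5

def one_trial():
    """Randomly treat one unit, use the other as control, and return
    the treated outcome minus the control outcome."""
    treated = random.choice(["A", "B"])
    control = "B" if treated == "A" else "A"
    return Y[treated][1] - Y[control][0]

trials = [one_trial() for _ in range(100_000)]
print(sum(trials) / len(trials))  # close to ACE = 2.5
```

Any single trial yields either 2 or 3, not 2.5; only the average over repeated random assignments recovers ACE, which is precisely the "repeated trials" caveat in the text.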
But the measure has two defects. First, it may be problematic to consider the average impact of Z_A on A and Z_B on B if they are not similar kinds of things. Once we drop identicality, it is quite possible that A and B could be quite different kinds of entities, say a sick person A and a well person B. Then one would be randomly chosen to get some medicine, and the subsequent health Y of each person would be recorded. If the sick person A got the medicine, then the causal effect E_A would be the difference between the health Y_A(1,0) of the sick person after taking the medicine and the health of the well person Y_B(1,0).
If the well person B got the medicine, then the causal effect E_B would be the difference between the health Y_B(0,1) of the well person after taking the medicine and the health of the sick person Y_A(0,1). If the medicine works all the time and makes people well, then E_A will be zero (giving the medicine to the sick person will make him like the well person) and E_B will be positive (giving the medicine to the well person will not change her, but not giving it to the sick person will leave him still sick)—hence the average effect will suggest that the medicine works half the time. In fact, the medicine works all the time—when the person is sick.
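The arithmetic, with stylized health scores (0 = sick, 1 = well; the numbers are illustrative, not the chapter's):

```latex
% Sick person A: Y_A = 0 untreated, 1 treated (the cure works).
% Well person B: Y_B = 1 regardless of treatment.
\[
E_A = Y_A(1,0) - Y_B(1,0) = 1 - 1 = 0,
\qquad
E_B = Y_B(0,1) - Y_A(0,1) = 1 - 0 = 1
\]
\[
\mathrm{ACE} = \tfrac{1}{2}\,(E_A + E_B) = \tfrac{1}{2}
\]
% The average says the medicine "works half the time," although it
% works every time it is given to someone who is sick.
```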
More generally, and somewhat ridiculously, A could be a person and B could be a tree, a dog, or anything. Thus, we need some assumption like the identicality of the units in order for our estimates of causal effect to make any sense. One possibility is to require that the units are randomly chosen from some well-defined population to whom the treatment might be applied in the future. The second defect of the measure is that it is only correct in repeated trials. In the medical experiment described above, if the sick person is randomly assigned the medicine, then the single experiment will conclude that the medicine does not work.
The usual response to this problem is to multiply the number of units so that random assignment can average out the differences among them. This strategy certainly can make it possible to make statistical statements about the likelihood that an observed difference between the treatment and control groups is due to chance or to some underlying true difference. But it relies heavily upon multiplying the number of units, and multiplying the number of units brings some risks with it.
We started this section with a very simple problem in what is called singular causation. Equation (1) provided a very simple definition of what we meant by the causal effect. This simple definition foundered because we cannot observe both Y_A(1) and Y_A(0). To solve this problem, we multiplied the number of units. Multiplying the number of units makes it possible to obtain an observable estimate of causal effect either by making the noninterference and identicality assumptions or by making the noninterference assumption and randomly assigning the treatment.
But these assumptions lead us into the difficulties of defining a population of similar things from which the units are chosen and the problem of believing the noninterference assumption. These problems are related because they suggest that ultimately researchers must rely upon some prior knowledge and information in order to be sure that units or cases can be compared. But how much knowledge is needed? Are these assumptions really problematic? Should we, for example, be worried about units affecting one another? Suppose people in a treatment condition are punished for poor behavior while those in a control condition are not.
In the Cal-Learn experiment in California, for example, teenage girls on welfare in the treatment group had their welfare check reduced if they failed to get passing grades in school. Those in the randomly selected control group were not subject to reductions, but many thought they were in the treatment group, probably because they knew people who were, and they appear to have worked to get passing grades to avoid cuts in welfare (Mauldon et al.).
The problem here is that there is interaction between the units. Researchers using human subjects have long worried about this possibility of interference. Cook and Campbell mention four fundamental threats to randomized experiments. Compensatory rivalry occurs when control units decide that even though they are not getting the treatment, they can do as well as those getting it. Resentful demoralization occurs when those not getting the treatment become demoralized because they are not getting the treatment. Compensatory equalization occurs when those in charge of control units decide to compensate for the perceived inequities between treatment and control units, and treatment diffusion occurs when those in charge of control units mimic the treatment because of its supposed beneficial effects.
SUTVA implies that each supposedly identical treatment really is identical and that each unit is a separate, isolated possible world that is unaffected by what happens to the other units. SUTVA is the master assumption that makes controlled or randomized experiments a suitable solution to the problem of making causal inferences.
SUTVA ensures that treatment and control units really do represent the closest possible worlds to one another except for the difference in treatment. In order to believe that SUTVA holds, we must have a very clear picture of the units, treatments, and outcomes in the situation at hand so that we can convince ourselves that experimental or observational comparisons really do involve similar worlds. Consider, for example, studying whether gender causes differences in wages. What kind of treatment would be required for females to be males? Are individuals or the firm the basic unit of analysis?
From what pool would these men be chosen? If men were randomly assigned to some jobs formerly held by women, would there be interactions across units that would violate SUTVA? Not surprisingly, if the SUTVA assumption fails, then it will be at best hard to generalize the results of an experiment and at worst impossible to even interpret its results. Generalization is hard if, for example, imposing a policy of welfare time-limits on a small group of welfare recipients has a much different impact than imposing it upon every recipient.
Perhaps the imposition of limits on the larger group generates a negative attitude toward welfare that encourages job seeking, an attitude which is not generated when only a small group is subject to the limits.
In both cases, the pattern of assignment to treatments seems to matter as much as the treatments themselves because of interactions among the units, and the interpretation of these experiments might be impossible because of the complex interactions among units. If SUTVA does not hold, then there are no ways such as randomization to construct closest possible worlds, and the difficulty of determining closest possible worlds must be faced directly. If SUTVA holds and if there is independence of assignment and outcome through randomization, then the degree of causal connection can be estimated.
Much of the art in experimentation goes into strategies that will increase the likelihood that they do hold. Cases can be isolated from one another to minimize interference, treatments can be made as uniform as possible, and the characteristics and circumstances of each case can be made as uniform as possible, but nothing can absolutely ensure that SUTVA and the independence of assignment and outcome hold.
If noninterference across units (SUTVA) holds and if independence of assignment and outcome holds, then mini-closest-possible worlds have been created which can be used to compare the effects in a treatment and a control condition. In observational studies, a third method is commonly used: controlling statistically for covariates in the hope that, conditional on them, assignment is independent of outcome. The mathematical conditions required for this third method to work follow easily from the Neyman–Rubin–Holland setup, but there is no method for identifying the proper covariates.
And outside of experimental studies, there is no way to be sure that conditional independence of assignment and outcome holds. Even if we know about something that may confound our results, we may not know about all things, and without knowing all of them, we cannot be sure that correcting for some of them p.