Studies in Causal Reasoning and Learning
University of California, Los Angeles, United States
Building intelligent systems that can learn about and reason with causes and effects is a fundamental task in artificial intelligence. This dissertation addresses various issues in causal reasoning and learning in the framework of causal Bayesian networks. We offer a complete characterization of the set of distributions that could be induced by local interventions on variables governed by a causal Bayesian network. The characterization provides a symbolic inferential tool for tasks in causal reasoning. We propose a new method of discovering causal structures, based on the detection of local, spontaneous changes in the underlying data-generating model. We show that the use of information about local changes increases our power of causal discovery beyond the limits set by the independence equivalence that governs Bayesian networks. In the presence of unmeasured variables, causal models may impose functional constraints that are not independence constraints, and no general criterion was previously available for finding those constraints. We offer a systematic method of identifying functional constraints, which facilitates the task of testing causal models. Causal effects permit us to predict how systems would respond to actions or policy decisions. We establish new graphical criteria for the identification of causal effects that generalize and simplify existing criteria in the literature, and we provide computational procedures for systematically identifying causal effects. Assessing the probability of causation, that is, the likelihood that one event was the cause of another, guides much of what we understand about the world and how we act in it. We show how useful information on the probabilities of causation can be extracted from empirical data, and how data from both experimental and nonexperimental studies can be combined to yield information that neither study alone can provide.
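To make the notion of a distribution induced by a local intervention concrete, the sketch below applies the standard truncated-factorization rule for causal Bayesian networks: under do(X=x), the factor P(x | pa(X)) is removed from the joint factorization and the remaining factors are summed out. The network (a confounder Z with Z → X, Z → Y, X → Y) and all CPT numbers are illustrative assumptions, not taken from the dissertation.

```python
# Toy causal Bayesian network over binary variables: Z -> X, Z -> Y, X -> Y.
# CPT values below are made up for illustration; any consistent CPTs work.
P_z = {0: 0.6, 1: 0.4}                 # P(Z=z)
P_x1_given_z = {0: 0.8, 1: 0.3}        # P(X=1 | Z=z)
P_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.4,
                 (1, 0): 0.5, (1, 1): 0.9}  # P(Y=1 | X=x, Z=z)

def p_joint(z, x, y):
    """Observational joint via the network's chain-rule factorization."""
    px = P_x1_given_z[z] if x == 1 else 1 - P_x1_given_z[z]
    py = P_y1_given_xz[(x, z)] if y == 1 else 1 - P_y1_given_xz[(x, z)]
    return P_z[z] * px * py

def p_y1_do_x(x):
    """P(Y=1 | do(X=x)) by truncated factorization:
    the factor P(x | z) is dropped and Z is summed out."""
    return sum(P_z[z] * P_y1_given_xz[(x, z)] for z in (0, 1))

def p_y1_given_x(x):
    """Observational P(Y=1 | X=x), for contrast with the interventional query."""
    num = sum(p_joint(z, x, 1) for z in (0, 1))
    den = sum(p_joint(z, x, y) for z in (0, 1) for y in (0, 1))
    return num / den
```

With these numbers, p_y1_do_x(1) and p_y1_given_x(1) differ (0.66 vs. 0.58), illustrating why an interventional distribution is generally not the corresponding conditional distribution when a confounder is present.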
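As one concrete instance of combining experimental and nonexperimental data, the sketch below computes bounds of the kind studied in this line of work on the probability of necessity PN = P(y'_{x'} | x, y), i.e., the probability that the outcome would not have occurred absent the exposure, given that both occurred. It assumes binary exposure and outcome; the input names and the test numbers are hypothetical.

```python
def pn_bounds(p_xy, p_y, p_y_do_xprime, p_xpyp):
    """Bounds on the probability of necessity PN from combined data.

    Nonexperimental inputs: p_xy = P(x, y), p_y = P(y), p_xpyp = P(x', y').
    Experimental input: p_y_do_xprime = P(y | do(x')).
    Returns (lower, upper), clipped to [0, 1].
    """
    lower = max(0.0, (p_y - p_y_do_xprime) / p_xy)
    upper = min(1.0, ((1.0 - p_y_do_xprime) - p_xpyp) / p_xy)
    return lower, upper
```

For example, with P(x, y) = 0.4, P(y) = 0.5, P(x', y') = 0.4, and an experiment giving P(y | do(x')) = 0.3, the bounds are [0.5, 0.75]: the nonexperimental data alone would not pin PN into this interval, which is the sense in which the combined data yield information neither study provides by itself.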