Associate Professor
Department of Philosophy
University of California, Davis
ika[AT]ucdavis.edu
I am a philosopher of science and an epistemologist at UC Davis, and I serve as an associate editor for the Harvard Data Science Review. My research focuses on the justification of various forms of scientific inference across diverse contexts and disciplines, including physics, epidemiology, economics, statistics, and machine learning. This requires me not only to develop philosophical ideas but also to prove theorems (using tools from statistical and machine learning theory). Here is my CV. The following is a selection of my papers.
Overview of My Approach
Convergence to the Truth, forthcoming in the Blackwell Companion to Epistemology, 3rd Edition, edited by Kurt Sylvan, Ernest Sosa, Jonathan Dancy, and Matthias Steup.
This short article outlines a general framework for justifying scientific inference—achievabilist convergentism—which I develop and apply in all of the papers below. My contribution belongs to a long tradition of ideas in machine learning theory, frequentist statistics, and the philosophy of C. S. Peirce. That said, I am still a Bayesian on weekends, albeit a frequentist Bayesian.
The Realism Debate and Ockham's Many Razors
Scientific Realism vs. Anti-Realism: Toward a Common Ground, under review.
While scientific realists and anti-realists have their favored versions of Ockham's razor, I argue that their justifications can be grounded in a shared, unifying framework: the one that underlies machine learning theory and frequentist statistics. Case studies include the AIC vs. BIC debate in statistical model selection from the 1970s to the 1990s, and Jean Perrin's early-twentieth-century experimental evidence for atomism.
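For reference, here is a standard formulation of the two criteria (my gloss, not the paper's): for a candidate model with \(k\) free parameters, maximized likelihood \(\hat{L}\), and sample size \(n\),
\[ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}, \]
where one selects the model that minimizes the chosen criterion. Since BIC's complexity penalty grows with the sample size while AIC's does not, the two razors can recommend different models, which is what made the debate substantive.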
Traditional Epistemology & Bayesian Epistemology
Frequentist Statistics as Internalist Reliabilism, in Integrating Philosophy of Science and Epistemology, Springer, edited by Yafeng Shan.
It is often presupposed in traditional epistemology that, where the justification of beliefs and inferences is concerned, reliabilism implies externalism. I present a counterexample, one that is needed to make better sense of the practice of frequentist statistics. In a follow-up, I extend the point to cover machine learning.
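To convey the relevant sense of reliability (my illustration, not an example taken from the paper): a 95% confidence procedure \(C\) is one that satisfies
\[ \Pr_\theta\big(\theta \in C(X)\big) \ge 0.95 \quad \text{for every value of the unknown parameter } \theta, \]
a performance guarantee that holds no matter what the truth is. This is the kind of reliability on which frequentist practice is built.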
The Problem of the Priors, or Posteriors?, under review.
I argue that Bayesians should address the problem of the priors in a "reverse" direction: first, identify norms that directly govern posterior degrees of belief, then let these norms induce (backward!) constraints on prior degrees of belief, through the diachronic principle of conditionalization. This approach is key to establishing a Bayesian foundation for scientific inference in general and for statistics and machine learning in particular.
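The diachronic principle in question is the standard one: upon learning evidence \(E\) (and nothing stronger), one's new credence in any hypothesis \(H\) should equal one's old credence in \(H\) conditional on \(E\),
\[ P_{\mathrm{new}}(H) = P_{\mathrm{old}}(H \mid E) = \frac{P_{\mathrm{old}}(H \wedge E)}{P_{\mathrm{old}}(E)}, \qquad \text{provided } P_{\mathrm{old}}(E) > 0. \]
Read in reverse, any norm on the posterior \(P_{\mathrm{new}}\) thereby constrains which priors \(P_{\mathrm{old}}\) are admissible.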
Causal Inference in Econometrics and Machine Learning
The Logic of Counterfactuals and the Epistemology of Causal Inference, under review.
The 2021 Nobel Prize in Economics recognized a theory of causal inference that is very influential in the health and social sciences. A longstanding concern is that this theory assumes a form of determinism and, with it, a controversial principle about the logic of counterfactuals. I respond by removing this assumption from the theory.
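The usual suspect for such a principle is Conditional Excluded Middle, which says that, for any \(A\) and \(B\), one of the two opposite counterfactuals must hold:
\[ (A \mathrel{\Box\!\!\to} B) \;\vee\; (A \mathrel{\Box\!\!\to} \neg B). \]
Determinate potential outcomes, of the kind the theory posits, are naturally read as instances of this schema.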
The Hard Problem of Theory Choice: A Case Study on Causal Inference and Its Faithfulness Assumption, in Philosophy of Science (2019).
Despite widespread skepticism from both philosophy and statistics, I argue that the use of Ockham's razor for learning causal structures, as pursued in machine learning, can be justified under extremely weak assumptions. A key premise of my argument is a theorem proved with Jiji Zhang in the next paper.
On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption, in the Proceedings of Machine Learning Research (2020), joint with Jiji Zhang.
This paper proves a key theorem that helps justify one of the major approaches to learning causal structures in machine learning, namely constraint-based methods, even without the Causal Faithfulness Assumption or any of its variants.
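For context, the assumption can be stated as follows (a standard formulation): a probability distribution \(P\) is faithful to a causal graph \(G\) just in case every conditional independence in \(P\) is entailed by the graph, i.e.,
\[ X \perp\!\!\!\perp Y \mid Z \ \text{in } P \;\Longrightarrow\; X \text{ and } Y \text{ are d-separated by } Z \text{ in } G. \]
Constraint-based methods standardly invoke this assumption (or a weakening of it) to read causal structure off of observed independencies; the theorem shows how to do without it.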
Foundations of Statistics and Machine Learning
To Be a Frequentist or Bayesian? Five Positions in a Spectrum, in the Harvard Data Science Review (2024).
The distinction between frequentism and Bayesianism in the foundations of statistics is not, in fact, a dichotomy. By examining the history of statistics, I develop a spectrum of five positions, with frequentist Bayes as the middle way.
Unified Inductive Logic: From Formal Learning to Statistical Inference to Supervised Learning, forthcoming in the Proceedings of the 2024 Asian Workshop on Philosophical Logic, Springer, edited by Katsuhiko Sano and Ryo Hatano.
Much of machine learning can be unified with classical statistics and formal learning theory within a general framework of inductive logic, inspired by C. S. Peirce. This paper outlines the central ideas and proves some supporting theorems. The details will be developed in a follow-up paper.
Enumerative Induction
Modes of Convergence to the Truth: Steps toward a Better Epistemology of Induction, in the Review of Symbolic Logic (2022).
Consider this type of inference: we have observed many ravens, all of them black, and we infer that all ravens are black. There has been little discussion of how such an inference might be justified under extremely weak assumptions. Here I develop a positive account, drawing on ideas from my work on causal inference in machine learning.
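In schematic form, with \(a_1, \dots, a_n\) the ravens observed so far:
\[ \mathrm{Raven}(a_1) \wedge \mathrm{Black}(a_1),\ \dots,\ \mathrm{Raven}(a_n) \wedge \mathrm{Black}(a_n) \ \leadsto\ \forall x\,\big(\mathrm{Raven}(x) \to \mathrm{Black}(x)\big). \]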