Giacomo Molinari's PhD thesis "Deference Principles for Imprecise Probabilities"
Posted on December 19, 2024 by Giacomo Molinari (edited by Arthur Van Camp)
On October 21st, 2024, I defended my PhD thesis: “Deference Principles for Imprecise Probabilities”. I worked on this thesis over the last four years, under the supervision of Jason Konek and Catrin Campbell–Moore. My research was part of the ERC-funded project “Epistemic Utility for Imprecise Probability” hosted by the Department of Philosophy at the University of Bristol. This meant I had the good fortune to work alongside Arthur Van Camp and Kevin Blackwell, who were postdocs on the project. I also benefited greatly from a research visit at MIT, hosted by Kevin Dorst. My PhD examiners were Jim Joyce and Richard Pettigrew.
Introduction
Some notion of deference is central to many philosophical projects. Here are some examples:
- Characterise learning People’s opinions can change in many ways. But only some of these changes count as genuine learning. For example, suppose I offer you a pill that will make you forget everything that happened over the last 24 hours. Taking the pill would change your opinions, but we would not count this as learning. One way to characterise genuine learning is in terms of deference: for some experience to count as genuine learning, your pre-experience self should defer to the opinions of your post-experience self (Skyrms, 1990; Huttegger, 2014).
- Characterise external expertise Considering someone an expert just means deferring to their opinions. I consider my doctor to be an expert because I defer to her opinions about which medicines I should take, and I consider the IPCC to be experts because I defer to their opinions about how average global temperatures will change over the next 100 years.
- Norms of Rationality Philosophers often appeal to some notion of deference when expressing norms of rationality. For example, they argue that you should defer to the objective chances (Lewis, 1980), to your future self’s opinions (Van Fraassen, 1984; Briggs, 2009), and to the opinions best supported by your evidence (Elga, 2013; Dorst, 2020).
Orthodox (i.e. precise) Bayesians provide various deference principles: formal characterisations of deference appropriate to these different contexts. A popular example is the Reflection Principle. It says that I defer to you about some event \(E\) if and only if my conditional probability for \(E\), conditional on you assigning probability \(x\) to \(E\), is exactly \(x\). For example, I defer to the objective chances about the outcome of a coin toss, because conditional on the chance of heads being \(x\), I assign probability \(x\) to heads.
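The chance example can be made concrete with a toy calculation. The prior over chance hypotheses below is hypothetical; the point is just that, given Reflection, my unconditional credence in heads is my expected value for the chance of heads:

```python
# A minimal numerical sketch of deference to chance via the Reflection
# Principle. The prior over chance hypotheses is made up for illustration.

# My uncertainty about the chance of heads: P(chance = x).
prior_over_chance = {0.3: 0.5, 0.7: 0.5}

# Reflection: conditional on the chance of heads being x, I assign
# probability x to heads. By the law of total probability, my
# unconditional credence in heads is the expected chance.
p_heads = sum(p * x for x, p in prior_over_chance.items())
print(p_heads)  # 0.5
```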
Friends of imprecise probabilities deny that (rational) opinions can always be represented by a single probability function. But what does it mean for an agent with imprecise probabilistic opinions to defer to another? My thesis aims to answer this question.
Contributions
There are several ways to characterise deference for imprecise probabilities. Each precise deference principle has several imprecise counterparts, i.e. different plausible ways to extend the principle to the imprecise case. This is not surprising: something analogous happens with independence, where a single notion in the precise case (probabilistic independence) corresponds to importantly different notions in the imprecise case. Some of these imprecise deference principles have been briefly mentioned in the philosophical literature, but many have not. A good portion of my thesis is devoted to formulating these different principles and mapping out how they relate to one another. I hope that this will make further study of these principles somewhat easier.
After introducing these imprecise deference principles, I argue that they really capture the notions of deference we are interested in. When we say that we defer to someone, we often mean that we value their opinions more than our own, in the sense that we think they lead to better decisions.
It’s fairly easy to formalise this in the precise case: I value my doctor’s opinions because, when choosing between different medicines, I expect (a) the option that maximises expected utility according to my doctor’s probability function, to be better than (b) the option that maximises expected utility according to my own probability function. It can be shown that, under some further assumptions, you value someone’s opinions in this way if and only if you obey the Reflection Principle with regard to the probability function representing their opinions (Huttegger, 2014).
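The comparison can be sketched with made-up numbers. Everything in the snippet is hypothetical (the two-medicine decision problem, the doctor's possible credences, and my prior); the point is that if I satisfy Reflection towards the doctor, then letting her choose has at least as high an expected utility, by my own lights, as choosing myself:

```python
# Toy decision problem: utilities of two medicines in two states of the world.
utility = {"A": {"s1": 1.0, "s2": 0.0},
           "B": {"s1": 0.0, "s2": 1.0}}

# The doctor will see a test result I won't see. Each entry gives
# (my probability that she ends up with this credence, her credence).
# By Reflection, her credence is also my credence conditional on her having it.
doctor_scenarios = [(0.5, {"s1": 0.9, "s2": 0.1}),
                    (0.5, {"s1": 0.1, "s2": 0.9})]

def expected_utility(option, credence):
    return sum(credence[s] * utility[option][s] for s in credence)

# (a) Value of letting the doctor choose: in each scenario she picks the
# option maximising expected utility by her credence, which (given
# Reflection) is also my conditional credence; then I average.
value_defer = sum(p * max(expected_utility(o, q) for o in utility)
                  for p, q in doctor_scenarios)

# (b) Value of choosing by my own prior, the average of her possible credences.
my_prior = {s: sum(p * q[s] for p, q in doctor_scenarios) for s in ("s1", "s2")}
value_self = max(expected_utility(o, my_prior) for o in utility)

print(value_defer, value_self)  # 0.9 0.5
```

By my own prior I am indifferent between the medicines, but I expect the doctor's informed choice to do strictly better, so I value her opinions over my own.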
The imprecise case is trickier. Each probability function in my doctor’s credal set may recommend a different option in a given decision problem. The same is true for my own credal set. So how can I say whether it’s better to choose according to my doctor’s opinions rather than my own? Both of our choices may be radically underdetermined by our respective opinions, so it’s hard to compare them. Still, by appropriately specifying this kind of decision-theoretic value in the imprecise case, and appealing to some further assumptions, I show that you value someone’s opinions in this way if and only if you obey a certain imprecise deference principle with regard to the credal set representing their opinions.
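The underdetermination is easy to see in a minimal sketch. The two-member credal set below is hypothetical; each member recommends a different option, so the credal set by itself does not settle what its holder will choose:

```python
# Why the imprecise case is trickier: each member of a (hypothetical)
# credal set can recommend a different option in the same decision problem.
utility = {"A": {"s1": 1.0, "s2": 0.0},
           "B": {"s1": 0.0, "s2": 1.0}}

credal_set = [{"s1": 0.8, "s2": 0.2},   # this member favours A
              {"s1": 0.3, "s2": 0.7}]   # this member favours B

def expected_utility(option, p):
    return sum(p[s] * utility[option][s] for s in p)

# Options that maximise expected utility according to at least one
# member of the credal set (often called E-admissible options).
admissible = {max(utility, key=lambda o: expected_utility(o, p))
              for p in credal_set}
print(sorted(admissible))  # ['A', 'B'] — choice is underdetermined
```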
My thesis also touches on the relationship between higher-order and imprecise probability. Higher-order probabilities are simply probabilities which encode uncertainty about the value of some probability function. For example, I may be uncertain about what my own beliefs are. If that’s the case, then the probability function representing my beliefs will be uncertain about its own values. More precisely: “my beliefs” is a random variable \(Z\) whose values are probability functions (i.e. real-valued vectors) over the possibility space. If \(\omega\) is the case, then my actual probability function is the vector \(Z(\omega)\), which encodes my uncertainty about the value of \(Z\).
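The random-variable picture can be written out directly. The worlds and numbers below are hypothetical; the snippet just shows \(Z\) mapping each world to a probability vector, and that vector encoding uncertainty about \(Z\) itself:

```python
# Higher-order probability as described above: "my beliefs" is a random
# variable Z mapping each world to a probability vector over the worlds.
# The numbers are made up for illustration.

# Z(w): the probability function I have if w is the case.
Z = {"w1": {"w1": 0.8, "w2": 0.2},
     "w2": {"w1": 0.1, "w2": 0.9}}

# If w1 is the case, my actual credences are Z["w1"]. They encode
# uncertainty about the value of Z itself: I assign probability 0.2 to
# w2, the world where my beliefs would be the other vector.
p_my_beliefs_are_Z_w2 = Z["w1"]["w2"]
print(p_my_beliefs_are_Z_w2)  # 0.2
```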
Higher-order probabilities are relevant to my thesis for two reasons. Firstly, many deference principles (both precise and imprecise) are incompatible with higher-order probabilities. Thus, if we are interested in higher-order probabilities, we need to tweak these principles.
Secondly, some philosophers have argued that higher-order probabilities can do the work of imprecise probabilities. I think they are wrong, but also that this argument is worth taking seriously. I remember having the same instinct when I first learned about imprecise probabilities, at the start of my PhD. I used to wonder whether having an “imprecise opinion” did not just mean being uncertain about which (precise) opinions I have, or maybe about which (precise) opinions my evidence supports.
In my thesis I argue that higher-order and imprecise probabilities capture different, orthogonal features of our opinions. I analyse the sorts of arguments put forward by those who propose substituting imprecision with higher-order uncertainty, and I point out where they go wrong. I hope that addressing these arguments will prevent both newcomers and critics of IP from conflating imprecision with higher-order uncertainty.
The thesis ends by showing how we can use the imprecise deference principles discussed above to give a theory of peer disagreement. Peer disagreement cases involve a disagreement between you and an epistemic peer: someone who has the same evidence as you, and whom you consider to be just as rational as you.
There has been a lot of discussion in recent philosophical literature about how one should respond to peer disagreement. One proposal, recently defended by Dorst (2020), is that learning your peer’s opinions gives you evidence about what opinions are best supported by your shared evidence. Since you should defer to the opinions supported by your shared evidence, as specified by some deference principle, your opinions should change accordingly. Thus, we can use deference principles to determine how you should respond to peer disagreement.
In my thesis, I use imprecise deference principles to extend Dorst’s treatment of peer disagreement. The expressive power of imprecise credences allows us to model scenarios where disagreement dilates your opinions. This is quite intuitive: sometimes, learning that your peer disagrees with you suggests that your shared evidence is less informative than you initially thought, in which case your opinions should become more imprecise. Indeed, some philosophers have independently argued that we should sometimes respond to peer disagreement by suspending judgement. Imprecise probabilities allow us to model this kind of response, and to specify the conditions under which it is appropriate. Because of this, I believe they have much to offer to the current philosophical debate around peer disagreement.
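Dilation itself, the phenomenon doing the work here, is easy to illustrate independently of the peer-disagreement application. In the toy credal set below (hypothetical numbers), every member agrees on the probability of \(H\) before learning \(E\), but the members disagree about how \(H\) and \(E\) are correlated, so conditioning on \(E\) spreads them out:

```python
# A sketch of dilation: before learning E, every member of the credal
# set assigns P(H) = 0.5; after conditioning on E, the members spread
# out, so the probability interval for H widens.

# Each member is a joint distribution over (H, E) given as
# {(h, e): probability}, with c = P(E | H) and P(E | not-H) = 1 - c.
members = []
for c in (0.1, 0.5, 0.9):
    members.append({(True, True): 0.5 * c,
                    (True, False): 0.5 * (1 - c),
                    (False, True): 0.5 * (1 - c),
                    (False, False): 0.5 * c})

def p_h_given_e(joint):
    p_e = joint[(True, True)] + joint[(False, True)]
    return joint[(True, True)] / p_e

priors = [sum(p for (h, _), p in m.items() if h) for m in members]
posteriors = [p_h_given_e(m) for m in members]
print(min(priors), max(priors))          # 0.5 0.5 — a precise prior
print(min(posteriors), max(posteriors))  # 0.1 0.9 — a dilated posterior
```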
References
Briggs, R. (2009). Distorted reflection, Philosophical Review, vol. 118(1), 59–85.
Dorst, K. (2020). Evidence: A guide for the uncertain, Philosophy and Phenomenological Research, vol. 100(3), 586–632.
Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle, Philosophical Studies, vol. 164, 127–139.
Huttegger, S. M. (2014). Learning experiences and the value of knowledge, Philosophical Studies, vol. 171, 279–288.
Lewis, D. (1980). A subjectivist’s guide to objective chance, In IFS: Conditionals, belief, decision, chance and time, 267–297, Dordrecht: Springer Netherlands.
Skyrms, B. (1990). The dynamics of rational deliberation, Harvard University Press.
Van Fraassen, B. C. (1984). Belief and the will, The Journal of Philosophy, vol. 81(5), 235–256.