Dominik Hose’s PhD thesis on Possibilistic Reasoning with Imprecise Probabilities
Posted on April 11, 2023 by Dominik Hose (edited by Henna Bains)
On May 20th, 2022, I successfully defended my PhD thesis entitled “Possibilistic Reasoning with Imprecise Probabilities: Statistical Inference and Dynamic Filtering”. This dissertation is the result of five wonderful years at the Institute of Engineering and Computational Mechanics at the University of Stuttgart under the enthusiastic supervision of my “Doktorvater” Michael Hanss. Apart from him, my committee was also composed of Scott Ferson and Ryan Martin, but we will get to that.
My thesis is about one of the simplest theories of imprecise probabilities, possibility theory, and the surprising powers and capabilities that come with it. In the approach I adopt, this theory revolves around what I call an elementary possibility function. Its values may be understood as the upper probabilities of elementary events, and the induced possibility measure is simply the supremum of this function over the set/event in question. That simple!
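On a finite universe, this definition really does fit in a few lines of code. Here is a minimal sketch (the function name and the toy numbers are mine, not the thesis’s):

```python
def possibility_measure(pi, event):
    """Possibility of an event: the supremum (here, max) of the
    elementary possibility function over the event's elements."""
    return max((pi[x] for x in event), default=0.0)

# An elementary possibility function on {a, b, c}: its values are
# upper probabilities of the elementary events, and at least one
# outcome is fully possible (value 1).
pi = {"a": 1.0, "b": 0.6, "c": 0.2}

print(possibility_measure(pi, {"a", "b"}))  # 1.0
print(possibility_measure(pi, {"b", "c"}))  # 0.6
print(possibility_measure(pi, set()))       # 0.0 (the impossible event)
```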
This definition is the entry point of my dissertation, which, in just one sentence, focuses on how such functions can be constructed from given information (or a lack thereof) and how they must be manipulated in order to account for new information in a statistical context. I explore the implications of this approach for various types of information, such as imprecise knowledge about moments or dependency/interaction, functional relationships between random variables, statistical models in combination with data, and finally dynamic filtering problems combining all of the above. Being an engineer, I am, of course, obliged to also provide the algorithms and numerical implementation strategies needed to make this theory come to life on a computer.
The fundamental tool that allows me to do most of this—and the result of probably the single flash of true inspiration I had in my five years as a researcher—is the Imprecise-Probability-to-Possibility Transformation.
The Imprecise-Probability-to-Possibility Transformation
Scott Ferson repeatedly scolded me for choosing the lengthy name “Imprecise-Probability-to-Possibility Transformation” for something so fundamental to all of my work, but it was the obvious choice, since it states precisely what it does: Inspired by the work of Didier Dubois et al. on transforming a single probability measure into a possibility measure, it tells us how to find an elementary possibility function that describes an arbitrary set of probability measures. That is, the set of probability measures dominated by the possibility measure induced by the former—its so-called credal set—is a minimal outer approximation of the latter. You will probably want to read this sentence two or three times. It makes sense. I promise!
OK, I will explain it: We have an initial set of probability measures. From this set, we construct an elementary possibility function via the Imprecise-Probability-to-Possibility Transformation. This elementary possibility function induces a possibility measure, which dominates certain probability measures. The collection of all these dominated probability measures is called the credal set of the elementary possibility function. This credal set is a superset of the initial set of probability measures.
Most readers who have studied possibility theory will know that the credal sets of possibility measures always adhere to a certain geometry. Thus, we cannot generally make the credal set look exactly like the original set, but we can find a ‘best’ possibilistic approximation via this transformation. The terms ‘best’ and ‘minimal’ are defined with respect to a given (plausibility) order of the elementary events, and specifying this order is the main difficulty when applying the Imprecise-Probability-to-Possibility Transformation. In fact, after studying the details, properties, and implications of this transformation, the remainder of my dissertation often reduces to evaluating it under various combinations of sets of probabilities and plausibility orders.
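For finitely many outcomes and a fixed plausibility order, the construction described above can be made concrete. The following is my own illustrative sketch, not code from the thesis: the transformed possibility of an outcome is the largest probability, over the initial set, of the event “at most as plausible as that outcome”.

```python
def ip_to_possibility(pmfs, order):
    """Sketch of a discrete Imprecise-Probability-to-Possibility
    Transformation.  pmfs: list of probability mass functions (dicts
    outcome -> probability) representing the initial set.  order:
    outcomes listed from most to least plausible."""
    pi = {}
    for i, x in enumerate(order):
        tail = order[i:]  # x and every outcome less plausible than x
        pi[x] = max(sum(p[y] for y in tail) for p in pmfs)
    return pi

# Two probability measures on {a, b, c}; plausibility order a > b > c.
pmfs = [{"a": 0.6, "b": 0.3, "c": 0.1},
        {"a": 0.5, "b": 0.2, "c": 0.3}]
pi = ip_to_possibility(pmfs, ["a", "b", "c"])
print(pi)  # {'a': 1.0, 'b': 0.5, 'c': 0.3}

# The induced possibility measure dominates both initial measures:
# e.g. P({b, c}) is 0.4 resp. 0.5, while Pi({b, c}) = max(0.5, 0.3) = 0.5.
```

Both initial measures lie in the credal set of the resulting elementary possibility function, which is exactly the outer-approximation property the transformation is built for.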
Possibility theory being a rather coarse theory of imprecise probabilities, this transformation is not very useful without a specific reason to restrict one’s discussion to possibilities. One (in my opinion very convincing) reason can be found in the context of statistical inference.
Possibilistic Inferential Models
The first part of my dissertation treats probabilities and possibilities as the (additive and maxitive) measures they are—an exercise that is as legitimate as it is dry: theory without much practical relevance. In the second part, I discuss the statistical meaning of possibility theory, which provides very straightforward access to the theory of inferential models put forth by Ryan Martin and his collaborators. To better understand it, I recommend checking out Ryan Martin’s SIPTA blog post about their book.
The story of Ryan and me deserves some explanation: It was only through the Risk Institute Online series of talks organized by Scott Ferson’s Institute for Risk and Uncertainty at the University of Liverpool, at which I was invited to present my work, that I got to know about Ryan Martin, who had coincidentally presented a couple of weeks before me, and about the theory of inferential models he had developed with his colleagues. I think both he and I immediately realized the close connections between his work and mine upon hearing each other’s talks, and what started off as genuine interest in and appreciation of the other’s work had grown into a profound scientific exchange and collaboration by the end of my PhD.
Without Ryan’s extensive groundwork, the second part of my thesis would have looked very different. By endowing me with his concept of ‘validity’ and uncovering that all you need to achieve it are, indeed, possibility measures, he had prepared the perfect setting for me to connect my measure-based view of possibilities, culminating in the Imprecise-Probability-to-Possibility Transformation, to his theory. By employing my transformation, I was able to find a more direct way of constructing inferential models, bypassing the previous and (to me) somewhat unsatisfactory approach based on the a-, p-, and c-steps.
A fundamental corollary of the validity criterion is that the level sets of the confidence distributions (elementary possibility functions of unknown parameters) resulting from a valid inferential model are confidence sets (hence the name) in the sense of Neyman and Pearson, and that the elementary values of a confidence distribution are special p-values. Even though I did not dare to state it so blatantly at the time, it is my fundamental conviction that most of frequentist inference is inherently possibilistic. Apart from the above observations, this claim is further substantiated by several properties I show, such as the fact that established rules for combining (independent and dependent) p-values can be used to construct multivariate possibility distributions under the aforementioned types of independence/interaction, and vice versa. Regardless of whether this fundamental claim of mine is actually true, I definitely support Ryan’s initial argument that possibility theory is deeply connected to frequentism. For instance, I show how to do arithmetic with confidence distributions—both in theory and on a computer—and thereby reinvent Scott Ferson’s old idea of building fuzzy sets (aka possibility distributions) by stacking nested confidence sets and manipulating them according to the extension principle (with a few tweaks).
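To make the p-value flavour of confidence distributions concrete, here is a small illustration for the textbook Gaussian-location model; this is a standard possibilistic inferential-model example, not code from the thesis. For a single observation x ~ N(θ, 1), the contour π_x(θ) = 1 − |2Φ(x − θ) − 1| is an elementary possibility function of θ whose values are two-sided p-values and whose α-level sets are the classical (1 − α) confidence intervals.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def contour(theta, x):
    """Confidence distribution (plausibility contour) of the mean
    theta given one observation x ~ N(theta, 1): a two-sided p-value
    whose alpha-cuts are (1 - alpha) confidence intervals."""
    return 1.0 - abs(2.0 * phi(x - theta) - 1.0)

x = 0.0
print(contour(0.0, x))             # 1.0: the observed value is fully possible
print(round(contour(1.96, x), 3))  # ~0.05: edge of the 95% confidence interval
```

Stacking the nested α-cuts of this contour recovers exactly the “fuzzy set from nested confidence sets” picture mentioned above.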
In the remainder of my dissertation, I derive possibilistic inferential models for filtering problems, where the goal is to infer the current state of a dynamical system. I then employ the resulting possibilistic filter in a localization problem, in which a robot must determine its position by observing some landmarks, to demonstrate the practical applicability of the filter and of possibilistic inference in general.
After reading my dissertation as part of my committee, Ryan gave me the biggest compliment that I have received in my entire (and, admittedly, brief) scientific career by following up on my ideas and developing a theory of statistical inference under partial prior information with the Imprecise-Probability-to-Possibility Transformation as the (in my opinion) fundamental tool to construct the corresponding valid inferential models.
To elaborate a little more on this, Ryan suggests viewing statistical inference in the context of a spectrum of prior information, with frequentist inference (no prior information) at one end and Bayesian inference (precise prior information) at the other. In my dissertation, I focus on the former case. When he sent me his draft of the paper, I was struck by the beauty of this new idea and began spamming his inbox with questions, suggestions, and—being more familiar with the details of the implementation of the Imprecise-Probability-to-Possibility Transformation—results of example computations, at a rate of approximately one mail every five minutes. I did not even give the poor guy time to respond to one mail before writing the next. Eventually, we joined forces and presented a joint paper at the BELIEF 2022 conference, which I consider to be among the best I have been a part of.
I would love to talk more about these ideas because I find them so very exciting but this blog post is not the place for it and I defer any future discussion to Ryan and you, dear readers, who I trust to give them the attention they deserve and make something great out of them.
During my time as an active researcher I was often asked why I was working on possibility theory instead of a more mature/powerful theory of imprecise probabilities.
What I like about possibility theory is its simplicity and elegance: it describes uncertainty by just a single (elementary possibility) function. In fact, the definition of possibility theory I adopt is not even its most general form, since we can easily find possibility measures that are not induced by any elementary possibility function! Of course, this simplicity comes at the expense of expressiveness and generality; many of the readers of this blog post will be working on theories that are far more advanced. In fact, presenting my work on possibility theory at ISIPTA conferences and the like sometimes felt like being a toddler with a toy truck next to a big construction site. It is a testament to the kindness of the SIPTA community that I was never treated as such.
My preference for possibility theory may best be explained by my introduction to the subject of imprecise probabilities, which was probably not the standard way—if there is such a thing. Being a student of Michael Hanss, who started his career as a researcher of fuzzy sets and fuzzy arithmetic, this was also the topic I started my PhD with. I got into imprecise probabilities—in particular, into possibility theory as a special case thereof—only by wanting to find objective criteria for choosing the fuzzy set membership function, a topic that never quite made sense to me in the fuzzy literature and was, in my opinion, carelessly neglected. After studying much, though by no means all, of the extensive work of Didier Dubois and his collaborators (Didier emphatically encouraged me to continue in this direction, as he was trying, with limited success, to convince the established fuzzy community to strike new and unfamiliar paths; I witnessed this at the EUSFLAT 2019 conference), I began to understand the connections between possibility theory and imprecise probabilities, and I suspected that there was a lot left to be discovered. In fact, I still do—just read the concluding chapter of my dissertation. By following that path, I found my playground.
Michael, whose role in my doctoral endeavour cannot be overestimated, was very enthusiastic about me pursuing these new ideas, and he gave me just the right amount of reinforcement and freedom to do so. Moreover, he never failed to challenge me with well-founded critical questions and to remind me that all the theory I was uncovering should be backed up by some practical applications. By providing solutions to example problems from statistics and engineering in my dissertation, I aimed to answer his questions and demonstrate the practical relevance of possibility theory.
In conclusion, I firmly believe that there are very good reasons for possibility theory to be pursued and I hope to have convinced some of you that it deserves a spot among the many theories of imprecise probabilities. I will be following its future development from outside academia with great interest.
Dominik Hose. Possibilistic Reasoning with Imprecise Probabilities: Statistical Inference and Dynamic Filtering. Shaker Verlag, 2022. https://dominikhose.github.io/dissertation/diss_dhose.pdf
Dominik Hose and Michael Hanss. A universal approach to imprecise probabilities in possibility theory. International Journal of Approximate Reasoning, 133:133–158, 2021.
Didier Dubois, Laurent Foulloy, Gilles Mauris, and Henri Prade. Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliable Computing, 10(4):273–297, 2004.
Ryan Martin and Chuanhai Liu. Inferential Models: Reasoning with Uncertainty. CRC Press, 2015.
Ryan Martin. An imprecise-probabilistic characterization of frequentist statistical inference. arXiv preprint arXiv:2112.10904, 2021.
Ryan Martin. Valid and efficient imprecise-probabilistic inference across a spectrum of partial prior information. arXiv preprint arXiv:2203.06703, 2022.
Dominik Hose, Michael Hanss, and Ryan Martin. A practical strategy for valid partial prior-dependent possibilistic inference. In International Conference on Belief Functions, pages 197–206. Springer, 2022.
Didier Dubois. Possibility theory and statistical reasoning. Computational Statistics & Data Analysis, 51(1):47–69, 2006.