An interview with Eyke Hüllermeier
Posted on April 11, 2024 by Michele Caprio
A few weeks ago, Michele had a conversation with Eyke Hüllermeier about his exceptional career, the status of the field of Imprecise Probabilistic Machine Learning, and more. Please enjoy the interview below.
Hello, Eyke.
Thank you very much for agreeing to take part in this interview.
My pleasure!
How did your research journey in Uncertainty Quantification in Machine Learning and Artificial Intelligence begin?
Actually quite a long time ago.
I’ve been interested in the phenomenon of uncertainty since the time I was a student of computer science, mathematics, and economics.
Statistics and econometrics was one of the fields I majored in.
This is how I first came into contact with standard probability theory.
However, the late 80s and early 90s of the last century saw a boom in fuzzy logic, and I used elements of the theory of fuzzy sets in my Ph.D. thesis on modeling uncertain dynamical systems. This broadened my view of uncertainty, and being active in the computational intelligence community, I became interested in generalisations of probability theory and alternative formalisms for capturing different facets of uncertainty. I used such formalisms to complement standard probability in my habilitation thesis on case-based approximate reasoning.
Around the beginning of this millennium, my interest shifted more towards machine learning, but looking at this field through the lens of uncertainty modeling was of course very natural for me. Therefore, uncertainty has always played a prominent role in my work on machine learning, even if it has not always been the main focus. It has come to the fore again in the last couple of years, due to the increased interest that uncertainty quantification is now attracting in general.
How do Imprecise Probabilistic techniques come into play in Uncertainty Quantification?
As I just said, the notion of uncertainty is receiving increasing attention in machine learning research these days, because many practical applications come with safety requirements.
In particular, there is a great interest in the distinction between two important types of uncertainty, often referred to as aleatoric and epistemic, and how to quantify these uncertainties in terms of appropriate numerical measures.
Roughly speaking, while aleatoric uncertainty is due to the randomness inherent in the data generating process, epistemic uncertainty is caused by the learner’s ignorance of the true or best model. In a paper that we published together with colleagues from the Faculty of Medicine in 2014, we emphasised the importance of this distinction for machine learning in medical diagnosis, and to the best of my knowledge, this was the first time these terms had been used in a machine learning context. Meanwhile, they have become part of the common machine learning jargon, especially after being popularised by the deep learning community.
Anyway, coming back to the question, I think that imprecise probability theory offers a natural foundation for capturing epistemic uncertainty. My current view is that aleatoric uncertainty is properly captured by standard probability distributions, whereas epistemic uncertainty is a kind of second-order uncertainty, namely, uncertainty about the “right” probability distribution. One way to model this second-order uncertainty is via credal sets, which, in this regard, can be considered as an alternative to second-order distributions in Bayesian machine learning.
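To make this picture concrete, here is a minimal sketch, assuming a binary label space; the notation is illustrative and not taken from the conversation:

```latex
% Aleatoric uncertainty: a single first-order distribution over outcomes,
% e.g. $p(y = 1 \mid x) = \theta$ for labels $y \in \{0, 1\}$.
% Epistemic uncertainty: a credal set, i.e. a set of plausible distributions,
% summarised by lower and upper probabilities.
\[
  Q(x) \;=\; \bigl\{\, p_\theta(\cdot \mid x) \,:\, \theta \in [\underline{\theta}, \overline{\theta}] \,\bigr\},
  \qquad
  \underline{P}(y = 1 \mid x) = \underline{\theta},
  \quad
  \overline{P}(y = 1 \mid x) = \overline{\theta}.
\]
% A wide interval $[\underline{\theta}, \overline{\theta}]$ signals high epistemic
% uncertainty; it shrinks towards a single distribution as knowledge accumulates.
```

In this reading, the credal set plays the role that a second-order distribution over the parameter plays in Bayesian machine learning.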
How do we reconcile the classical Imprecise Probabilistic approaches with Machine Learning problems?
In principle, there is nothing to reconcile, I would say, at least not conceptually, because imprecise probabilities are not at all in conflict with machine learning.
Instead, machine learning is as amenable to imprecise probabilistic generalisations as any other application of statistical inference, although it is true that standard probability theory still very much dominates the field of machine learning, arguably also because most machine learning researchers are trained in standard probability and less aware of extensions and alternative formalisms.
Anyway, uncertainty is of major concern in machine learning, that’s out of the question, but there is no reason why classical frequentist or Bayesian statistics should be the only way of handling this uncertainty. Of course, computational complexity might be an issue, because algorithmic efficiency and scalability are important aspects in machine learning, where training data is getting bigger and bigger. This is also a reason why a standard way of using imprecise probabilities, e.g. via an imprecise prior in Bayesian learning, may not be the most appealing one, considering that Bayesian learning itself is already costly and often infeasible in machine learning applications.
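For reference, the construction mentioned here can be sketched in its generic textbook form; this is an illustration of generalised Bayesian updating with an imprecise prior, not a description of any specific method from the conversation:

```latex
% Generalised Bayesian updating with an imprecise prior: Bayes' rule is applied
% element-wise to every prior in a set $\mathcal{P}_0$, yielding a set of posteriors.
\[
  \mathcal{P}_n \;=\; \Bigl\{\, p(\theta \mid x_{1:n}) \;\propto\; p(x_{1:n} \mid \theta)\, p_0(\theta)
  \;:\; p_0 \in \mathcal{P}_0 \,\Bigr\}.
\]
% Every element of $\mathcal{P}_0$ requires its own posterior computation (and
% normalising constant), so the cost comes on top of that of ordinary Bayesian
% inference, which is the scalability concern raised above.
```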
What are some interesting problems you’re working on right now?
One important and still open problem for me is how to generalise standard machine learning methods so as to produce credal predictors, that is, models producing credal sets as predictions.
By standard methods I mean methods based on induction principles such as empirical or structural risk minimisation, which are more rooted in classical frequentist than in Bayesian statistics.
A few attempts have been made at developing such methods, but arguably not in a very systematic or principled manner.
I would appreciate a framework in which credal predictions follow naturally from first principles, instead of strongly depending on more or less arbitrary choices of parameters.
There are similar problems in generalised Bayesian inference, by the way, where the imprecise posterior strongly depends on how non-informative the imprecise prior is made.
Another challenging problem is the quantification of the uncertainty associated with credal predictions, that is, numerical measures of total, aleatoric, and epistemic uncertainty associated with a credal set. This problem is not new. Independently of any applications in machine learning, scholars like George Klir and his collaborators have worked on it in the past, seeking measures such that total uncertainty decomposes additively into aleatoric and epistemic uncertainty. Interestingly, axiomatically justified measures with desirable mathematical properties can be defined for all three uncertainties, but they do not add up. One way out would be to define only two of them and derive the third one from the imposed additive relationship. It turns out, however, that measures derived in this way don’t have nice theoretical properties, and in a joint paper with Sébastien Destercke, we recently showed that this deficiency is also reflected in their empirical performance in machine learning tasks, which is not competitive. One problem might be that the two types of representations, distributions at the aleatoric level and sets at the epistemic level, are not easy to reconcile. They may require measures of a different nature that are not commensurate and cannot simply be added.
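One commonly studied instantiation of such a decomposition, based on upper and lower Shannon entropy, gives a flavour of the problem; this is an illustrative example, not necessarily the exact measures analysed in the paper mentioned above:

```latex
% For a credal set $Q$ over a finite label space, with $H$ the Shannon entropy:
\[
  \mathrm{TU}(Q) \;=\; \max_{p \in Q} H(p),
  \qquad
  \mathrm{AU}(Q) \;=\; \min_{p \in Q} H(p),
  \qquad
  \mathrm{EU}(Q) \;=\; \mathrm{TU}(Q) - \mathrm{AU}(Q).
\]
% Total and aleatoric uncertainty are defined directly, while the epistemic part
% is derived from the imposed additive relation TU = AU + EU; this is an example
% of defining two measures and deriving the third, as described above.
```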
What do you envision for the future?
What are the open (and toughest) questions that remain unanswered?
Given the ultra-fast development of AI that we are witnessing today, and the proliferation of AI systems in society, I think that such systems require real uncertainty awareness.
Currently, this awareness is definitely lacking, as shown for example by the problem of hallucination in Large Language Models like ChatGPT.
As humans, we have a natural feeling for when we are unsure. We then react by expressing ourselves cautiously, by abstaining completely from expressing an opinion, or by gathering additional information to improve our level of knowledge. As long as machines do not have such abilities, they will hardly be perceived as trustworthy partners or experts.