SIPTA Seminars
Are you a curious student who has just started to explore the topic of imprecise probabilities?
Or an experienced researcher who would like to keep in touch with the community?
Join us at the SIPTA Seminars, an online series of seminars on imprecise probabilities (IP).
The seminars are open to anyone interested in IP, and are followed by a Q&A and open discussion.
They take place roughly once per month, with a break over the summer.
Topics range from foundational IP theories to applications that can benefit from IP approaches.
Details about the individual seminars are available in the list below.
Close to the date of the next seminar, a freely accessible Zoom link will be provided there as well.
If you click it, you will first be taken to a waiting room; please be patient until the organizers let you in.
During the talk, questions should be put in the chat, and the audience is expected to mute their microphones.
After the talk, there will be time for Q&A and discussion, at which point you can turn on your microphone when you want to contribute.
The talk (but not the Q&A and discussion) will be recorded and afterwards made freely available on the SIPTA YouTube channel.
The seminars are organised by Sébastien Destercke, Enrique Miranda and Jasper De Bock.
If you have questions about the seminars, or suggestions for future speakers, you can get in touch with us at seminars@sipta.org.
Suggestions for prominent speakers outside the IP community, whose work is nevertheless related to IP, are especially welcome.
Upcoming Seminars
Falsification, Fisher's underworld of probability, and balancing behavioral & statistical reliability
Ryan Martin
18 October 2023, 15:00 CEST
Zoom link: https://utc-fr.zoom.us/j/82973185259
Statisticians develop methods to assist in building probability statements that will be used to make inference on relevant unknowns.
Popper argued that probability statements themselves can’t be falsified, but what about the statistical methods that use data to generate them?
Science today is largely empirical, so if statistical methods’ conversion of data into scientific judgments can’t be scrutinized, then it’s not fair to expect society to “trust the science.”
Fisher’s underworld of probability concerns layers below the textbook surface level, where knowledge is vague and imprecise.
Roughly, suppose that an agent quantifies his uncertainty about a relevant unknown via (imprecise) probability statements, which define his betting odds.
Now suppose that a second agent, who may not have her own probability statements about the relevant unknown, believes that the first agent’s assessments are wrong and can formulate odds at which she’d bet against the first agent’s wagers.
If the second agent wins in these side-bets, then she reveals a shortcoming in the first agent’s assessments.
I claim that the statistical method and “society” above are like the first and second agents here, respectively, and that scrutiny of a statistical method proceeds by giving “society” an opportunity to bet against its claims.
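As a rough sketch of this side-bet setup, in our own notation rather than the speaker's: suppose the first agent's assessments yield an upper probability $\overline{P}(A)$ for an event $A$, so that he is willing to sell a ticket paying 1 if $A$ occurs at any price $\alpha > \overline{P}(A)$.
If the second agent is convinced that the true chance of $A$ exceeds $\overline{P}(A)$, she can buy such tickets at some price $\alpha$ with $\overline{P}(A) < \alpha < P_{\mathrm{true}}(A)$; her expected gain per ticket is then
\[ \mathbb{E}\big[\mathbb{1}_A - \alpha\big] \;=\; P_{\mathrm{true}}(A) - \alpha \;>\; 0, \]
and a long run of such winnings exposes a shortcoming in the first agent's assessments.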
In this talk, I’ll carry out this scrutiny formally/mathematically and present some key take-aways.
No surprise, a statistical method that’s falsification-proof in this sense is the behaviorally most reliable and conservative generalized Bayes rule.
More surprising, however, is that a necessary condition for being falsification-proof is a statistical reliability property – called validity – that I’ve been advocating for recently.
It follows, then, from the false confidence theorem that statistical methods quantifying uncertainty via precise probabilities can typically be falsified in this sense.
More generally, since validity also implies certain behavioral reliability properties and needn’t be overly conservative, my new possibilistic inference framework (which I’ll describe and illustrate) is a promising way to balance the behavioral and statistical reliability properties.
There’s no paper yet on the exact contents of this talk, but some relevant material can be found at https://arxiv.org/abs/2203.06703 and https://arxiv.org/abs/2211.14567.
We present recent results on the intimate connections between causal inference and imprecise probabilities.
A structural causal model is made of endogenous (manifest) and exogenous (latent) variables.
We show that endogenous observations induce linear constraints on the probabilities of the exogenous variables.
This makes it possible to map causal models to credal networks.
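As a toy illustration (ours, not taken from the talk): for a single structural equation $X = f(U)$ with latent $U$ and an observed marginal $P(X)$, the induced constraints on $P(U)$ are linear,
\[ \sum_{u \,:\, f(u) = x} P(U = u) \;=\; P(X = x) \quad \text{for every value } x, \]
and the set of exogenous distributions satisfying them is a credal set, over which causal queries can then be bounded.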
Causal inferences, such as interventions and counterfactuals, can be obtained by credal network algorithms.
These natively return sharp values in the identifiable case, while intervals corresponding to the exact bounds are produced for unidentifiable queries.
Exact computation will be inefficient in general, given that, as we show, causal inference is NP-hard, even for simple topologies.
We then target approximate bounds via a causal EM scheme.
We evaluate their accuracy by providing credible intervals on the quality of the approximation; we show through a synthetic benchmark that the EM scheme delivers accurate results in a fair number of runs.
We also present an actual case study on palliative care to show how our algorithms can readily be used for practical purposes.
Optimal Transport (OT) seeks the most efficient way to morph one probability distribution into another one, and Distributionally Robust Optimization (DRO) studies worst-case risk minimization problems under distributional ambiguity.
It is well known that OT gives rise to a rich class of data-driven DRO models, where the decision-maker plays a zero-sum game against nature, who can adversarially reshape the empirical distribution of the uncertain problem parameters within a prescribed transportation budget.
Even though generic OT problems are computationally hard, the Nash strategies of the decision-maker and nature in OT-based DRO problems can often be computed efficiently.
In this talk we will uncover deep connections between robustification and regularization, and we will disclose striking properties of nature’s Nash strategy, which implicitly constructs an adversarial training dataset.
We will also show that OT-based DRO offers a principled approach to deal with distribution shifts and heterogeneous data sources, and we will highlight new applications of OT-based DRO in machine learning, statistics, risk management and control.
Finally, we will argue that, while OT is useful for DRO, ideas from DRO can also help us to solve challenging OT problems.
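A standard formulation of such OT-based (Wasserstein) DRO problems, written in our own notation, has the decision-maker solve
\[ \min_{x \in \mathcal{X}} \; \sup_{Q \,:\, W_c(Q, \widehat{P}_N) \le \rho} \; \mathbb{E}_{Q}\big[\ell(x, \xi)\big], \]
where $\widehat{P}_N$ is the empirical distribution, $W_c$ is an optimal-transport discrepancy with cost $c$, and $\rho$ is the transportation budget that nature may spend to reshape $\widehat{P}_N$.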
Past Seminars
The management of environmental hazards is largely based on the assessment of risk; i.e., risk for human health and/or for ecosystems (water, soil, air, ...).
Environmental risks pertain to real-world systems that are typically incompletely known; hence risk is affected by significant uncertainty.
Historically, the treatment of uncertainties in risk assessments has relied largely on single, often subjective, probability distributions.
For around 30 years, alternative uncertainty theories have been applied, including the general framework of imprecise probabilities with, e.g., possibility theory and belief functions.
This presentation will provide specific examples of application of such uncertainty theories to environmental hazards, with a special focus on hazards related to soil and water pollution.
It will show how possibility theory and belief functions are well suited for representing information typically available in these contexts.
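To recall the standard reading, in our own notation: a possibility distribution $\pi$ on a space $\Omega$ induces the probability bounds
\[ \underline{P}(A) = N(A) = 1 - \sup_{x \notin A} \pi(x), \qquad \overline{P}(A) = \Pi(A) = \sup_{x \in A} \pi(x), \]
so that an expert's interval-valued statement can be encoded as a possibility distribution $\pi$ whose necessity and possibility measures bound the unknown probability of any event $A$.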
The presentation will also address uncertainty propagation in risk assessment modelling, as well as the communication of risk and the associated uncertainties to third-party stakeholders.
The Poisson process is one of the more fundamental continuous-time uncertain processes.
Besides its appearance in many applications, it is also interesting because there are several equivalent ways to define or construct it.
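One textbook definition, summarised in our own notation: a counting process $(N_t)_{t \ge 0}$ with $N_0 = 0$ is a homogeneous Poisson process with rate $\lambda$ if it has independent increments and
\[ N_t - N_s \;\sim\; \mathrm{Poisson}\big(\lambda (t - s)\big) \quad \text{for all } 0 \le s < t; \]
equivalently, it can be constructed from i.i.d. exponentially distributed waiting times with parameter $\lambda$ between consecutive events.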
In this talk I will treat a couple of these definitions, will explain how I succeeded in generalising one of them to allow for imprecision, and will discuss how one could go about generalising the others.
Furthermore, I will explain how this work on the Poisson process fits in my more general push to advance the theory of Markovian imprecise (continuous-time) processes, and will touch on some (un)solved problems.
The theory of belief functions is a powerful formalism for uncertain reasoning, with many successful applications to knowledge representation, information fusion, and machine learning.
Until now, however, most applications have been limited to problems (such as classification) in which the variables of interest take values in finite domains.
Although belief functions can, in theory, be defined in infinite spaces, we lacked practical representations allowing us to manipulate and combine such belief functions.
In this talk, I show that the theory of epistemic random fuzzy sets, an extension of Possibility and Dempster-Shafer theories, provides an appropriate framework for evidential reasoning in general spaces.
In particular, I introduce Gaussian random fuzzy numbers and vectors, which generalize both Gaussian random variables and Gaussian possibility distributions.
I then describe an application of this new formalism to nonlinear regression.
Knightian Uncertainty in Finance and Economics
Frank Riedel
30 March 2023, 15:00 CET
Watch on YouTube
The talk discusses the foundations of decision making under Knightian uncertainty, i.e. in situations when the relevant probability distributions are unknown, or only partially known.
After reviewing the basic concepts and models that have been developed in decision theory on the one hand, and mathematical finance on the other hand, we put special emphasis on markets under uncertainty in so-called identified models where recently some substantial progress has been made.
Imprecise Probability’s roots grow in de Finetti’s fertile foundations of coherent decision making.
A continuing theme in de Finetti’s work is that coherence does not require expectations to be countably additive.
In this presentation (involving joint work with Jay Kadane, Mark Schervish, and Rafael Stern) I review two contexts, the first about probabilities on (logical) Boolean algebras and the second about admissibility in statistical decision theory, both of which require merely finitely additive (not countably additive) expectations.
The common perspective for viewing the two contexts is to regard the requirement of countable additivity – as presented in Kolmogorov’s theory – as a continuity principle.
In each of the two contexts such continuity is precluded, but in very different ways.
Engineers design components, structures and systems, and plan activities to extend their service lives, despite a limited understanding of the underlying physics and/or a lack of sufficiently informative data.
A big challenge is to deal with unknown and uncontrollable variables such as changes in environmental conditions, deliberate threats, changes of intended use, etc.
As a result, large safety factors are usually adopted to compensate for the use of approximate methods and to deal with uncertainty.
Often the methods for dealing with uncertainty assume a complete knowledge of the underlying stochastic process.
This wide availability of information is however rarely the case in practice.
Although imprecise probability offers the tools to cope with a lack of knowledge and data, it is not widely adopted in practice.
One of the main reasons is the lack of accessible and efficient tools, both analytical and numerical, for uncertainty quantification.
On top of this, there is still a lack of awareness of the potential capabilities of imprecise probability theory and its applications.
In this seminar, we present the challenges in applying imprecise probability to practical engineering problems.
These challenges have been the driver for several novel algorithms and approaches, which will also be presented.
Argumentation techniques have received significant attention in Artificial Intelligence, particularly since 1995, when Dung proposed his "argumentation frameworks" and showed that they unify many branches of knowledge representation.
Argumentation frameworks that deal with uncertainty have been explored since then; often, these frameworks rely on imprecise or indeterminate probabilities.
Indeed, probabilistic argumentation frameworks may be one of the most promising applications of imprecise probabilities in Artificial Intelligence.
This talk will review the main ideas behind argumentation frameworks and how they are often connected with imprecise probabilities.
Lower probabilities, defined as normalised and monotone set functions, constitute one of the basic models within Imprecise Probability theory.
One of their interpretations allows building a bridge with coalitional game theory: the possibility space is regarded as a set of players who must share a reward, events represent coalitions of players who collaborate in order to obtain a greater reward, and the lower probability of a coalition represents the minimum reward that this collaboration can guarantee.
This correspondence makes lower probabilities and coalitional games formally equivalent, with notation, terminology and interpretation being the only differences.
For example, coherent lower probabilities are the same as exact games, the credal set of a lower probability is referred to as the core of the game, and so on.
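In symbols (our notation): for a coherent lower probability $\underline{P}$ on a finite possibility space $\Omega$, viewed as the exact game $v = \underline{P}$, the two objects coincide:
\[ \mathcal{M}(\underline{P}) \;=\; \{\, p \text{ a probability measure on } \Omega : p(A) \ge \underline{P}(A) \text{ for all } A \subseteq \Omega \,\} \;=\; \operatorname{core}(v). \]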
In this presentation I dig into this connection, paying special attention to game solutions and their interpretation as centroids of the credal set.
In addition, I show that if we move to the more general setting of lower previsions, it is possible to represent information about the coalitions and their rewards that cannot be captured by the standard coalitional game theory.
This shows that lower previsions constitute a more general framework than the classical theory of coalitional games.
We develop a representation of a decision maker's uncertainty based on e-values, a recently proposed alternative to the p-value.
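For background (a standard definition, not part of the abstract itself): an e-variable for a null hypothesis $P$ is a non-negative random variable $E$ with $\mathbb{E}_P[E] \le 1$, so that by Markov's inequality
\[ P\big(E \ge 1/\alpha\big) \;\le\; \alpha \quad \text{for every } \alpha \in (0, 1], \]
and large observed e-values are evidence against $P$.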
Like the Bayesian posterior, this e-posterior allows for making predictions against arbitrary loss functions that do not have to be specified ex ante.
Unlike the Bayesian posterior, it provides risk bounds that have frequentist validity irrespective of prior adequacy: if the e-collection (which plays a role analogous to the Bayesian prior) is chosen badly, bounds get loose rather than wrong.
As a consequence, e-posterior minimax decision rules are safer than Bayesian ones.
The resulting quasi-conditional paradigm addresses foundational issues in statistical inference.
If the losses under consideration have a special property which we call Condition Zero, risk bounds based on the standard e-posterior are equivalent to risk bounds based on a 'capped' version of it.
We conjecture that this capped version can be interpreted in terms of possibility measures and Martin-Liu inferential models.
Imprecise probabilities (IP) capture structural uncertainty intrinsic to statistical models.
They offer a richer vocabulary with which the modeler may articulate specifications without concocting unwarranted assumptions.
While IP promises a principled approach to data-driven decision making, its use in practice has so far been limited.
Two challenges to its popularization are 1) IP reasoning may defy the intuition we derive from precise probability models, and 2) IP models may be difficult to compute.
On the other hand, recent developments in formal privacy present a unique opportunity for IP to contribute to responsible data dissemination.
A case in point is differential privacy (DP), a cryptographically motivated framework endorsed by corporations and official statistical agencies, including the U.S. Census Bureau.
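For reference, the standard definition in our notation: a randomised mechanism $M$ is $\varepsilon$-differentially private if, for all datasets $D$ and $D'$ differing in one record and all measurable sets $S$,
\[ P\big(M(D) \in S\big) \;\le\; e^{\varepsilon}\, P\big(M(D') \in S\big), \]
a multiplicative band on output probabilities that lends itself naturally to an imprecise-probability reading.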
I discuss how IP offers the correct language for DP, both descriptive and inferential, particularly when the privacy mechanism lacks transparency.
These challenges and opportunities highlight the urgency to adapt IP research to meet the demands of modern data science.
Imprecision in probability theory is often considered to be unfortunate, something to be tolerated, and then only if there is no other way out.
In this talk, I will argue that imprecision also has strongly positive sides, and that it can allow us to look at, approach and deal with existing problems in novel ways.
I will provide a number of examples to corroborate this thesis, based on my research experience in a variety of fields: inference and decision making, stochastic processes, algorithmic randomness, game-theoretic probability, functional analysis, ...