Statistical Evidence in AI Systems
Working group of Prof. Björn-Hergen Laabs
The application of AI in everyday clinical practice has the potential to revolutionize medical care by enabling treatments to be tailored to individual patients.
However, medicine in particular places high demands on the quality, reliability, and traceability of AI models. Our working group “Statistical Evidence in AI Systems” aims to develop and apply new methods to ensure that these high requirements are met. Key aspects include:
- Quantification of uncertainty
- Interpretability (explainable AI)
- Causal inference
- Data security

Quantifying prediction uncertainty involves determining how reliable an AI prediction is. To do this, we use statistical methods that estimate how stable a prediction is under changing initial conditions.
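The specific methods used in the group are not detailed on this page; as an illustration, the following minimal Python sketch shows one common way to assess prediction stability: refitting a model on bootstrap resamples of simulated patient data and reporting the spread of the resulting predictions. The data, model, and interval are purely hypothetical.

```python
# Sketch: bootstrap-based uncertainty for a single prediction.
# Simulated data and a generic model; not the working group's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated patients: 200 cases, 5 clinical features, binary outcome.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

x_new = rng.normal(size=(1, 5))  # a new patient to predict for

# Refit the model on bootstrap resamples and collect the predictions.
preds = []
for _ in range(200):
    idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    preds.append(model.predict_proba(x_new)[0, 1])

preds = np.array(preds)
# The spread of the bootstrap predictions quantifies how stable the
# prediction is when the training sample changes.
print(f"predicted risk: {preds.mean():.2f}")
print(f"95% bootstrap interval: "
      f"[{np.quantile(preds, 0.025):.2f}, {np.quantile(preds, 0.975):.2f}]")
```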
Most AI systems are black-box models: an input goes in and the AI returns a result, but it remains unclear how the model arrived at this decision. Explainable AI aims to make the complex relationships behind AI decisions understandable to the user.
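One widely used, model-agnostic explanation technique is permutation feature importance: each feature is shuffled in turn, and the resulting drop in accuracy shows how strongly the black-box model relies on it. The sketch below illustrates this on simulated data with invented feature names; it is an example of the general idea, not the working group's specific approach.

```python
# Sketch: explaining a black-box model via permutation feature importance.
# Simulated data; feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["age", "blood_pressure", "bmi", "biomarker_a", "biomarker_b"]

# Simulated patients: the outcome depends mainly on "age" and "biomarker_a".
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 2 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the accuracy drop:
# a large drop means the black-box model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:15s} importance: {importance:.3f}")
```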
AI systems learn correlations from existing data. The focus is usually on making the best possible prediction, not on whether these correlations are also causal. However, a clinical treatment recommendation based on such a prediction requires causal relationships, which are established through causal inference.
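The following sketch illustrates the difference on simulated data in which sicker patients are more likely to receive a treatment: a naive comparison of treated and untreated patients is heavily biased by confounding, while a standard causal-inference technique, inverse probability of treatment weighting, approximately recovers the true effect. The data and the choice of estimator are purely illustrative.

```python
# Sketch: correlation vs. causation, and a simple causal adjustment
# (inverse probability of treatment weighting, IPTW) on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

severity = rng.normal(size=n)  # confounder: disease severity
# Sicker patients are more likely to be treated ...
treat = rng.binomial(1, 1 / (1 + np.exp(-2 * severity)))
# ... and sicker patients have worse outcomes; the true treatment effect is +1.
outcome = 1.0 * treat - 2.0 * severity + rng.normal(size=n)

# Naive correlational estimate: compare treated and untreated means.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# IPTW: weight each patient by the inverse probability of the treatment
# actually received, which balances the confounder across the two groups.
ps = LogisticRegression().fit(severity.reshape(-1, 1), treat).predict_proba(
    severity.reshape(-1, 1))[:, 1]
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
iptw = (np.average(outcome[treat == 1], weights=w[treat == 1])
        - np.average(outcome[treat == 0], weights=w[treat == 0]))

print(f"naive estimate: {naive:.2f}  (biased by confounding)")
print(f"IPTW estimate:  {iptw:.2f}  (close to the true effect of 1.0)")
```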
Finally, the data hunger of AI systems poses a risk to individual data security. Federated learning addresses this by keeping patient data at the center where they were collected while the AI model is trained in a decentralized manner. Statistical methods can then be used, for example, to ensure that the decentralized data sets are comparable.
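As an illustration of the decentralized principle, the following sketch simulates federated averaging for a simple linear model: three hypothetical centers train locally on their own data and share only model parameters with a coordinating server. This is a toy example under invented assumptions, not the working group's implementation.

```python
# Sketch: federated averaging for a linear model, simulated with NumPy.
# Three "centers" keep their patient data locally and share only model
# parameters with the coordinating server. Data and setup are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([1.5, -2.0, 0.5])

# Local data sets that never leave their center.
centers = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    centers.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=10):
    """One round of local gradient-descent training on a center's own data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: in each round, the centers train locally and the
# server aggregates the parameters, weighted by local sample size.
w_global = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in centers]
    sizes = np.array([len(y) for _, y in centers])
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 2))
print("true coefficients: ", true_w)
```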
Contact

- Telephone: +49 551 3964064
- Fax: +49 551 3965605
- E-mail: bjoern-hergen.laabs(at)med.uni-goettingen.de
- Location: Humboldtallee 32, room EG 141 (ground floor)