FWO Fellow at Ghent University

Dr. Lorin Werthen-Brabants

I build machine learning systems that are not only accurate but also interpretable, calibrated, and safe — making AI a trusted partner in clinical decision-making.

Trustworthy ML · Uncertainty Quantification · Healthcare AI · Explainability · Medical Signals
Featured Publications
Machine-learning-based prediction of disability progression in multiple sclerosis: An observational, international, multi-center study
PLOS Digital Health, 2024
Edward De Brouwer, Thijs Becker, L Werthen-Brabants, ...
Split BiRNN for real-time activity recognition using radar and deep learning
Scientific Reports, 2022
L Werthen-Brabants, Geethika Bhavanasi, Ivo Couckuyt,...
Patient activity recognition using radar sensors and machine learning
Neural Computing and Applications, 2022
Geethika Bhavanasi, L Werthen-Brabants, Tom Dhaene, I...
All publications →

Machine Learning Researcher in Health

I am a Postdoctoral Fellow at Ghent University working at the intersection of machine learning, trustworthiness, and healthcare. My research focuses on building models that are not only accurate but also interpretable, reliable, and safe for clinical deployment.

I develop methods for uncertainty quantification, explainability, and robustness in deep learning systems applied to medical signals, wearable sensor data, and clinical outcome prediction, with the goal of making AI a trusted partner in healthcare decision-making.

I earned my PhD in Computer Science from Ghent University (2023), where my dissertation centered on uncertainty quantification and robust deep learning for time-series problems.

Trustworthy ML · Uncertainty Quantification · Explainability · Radar Sensing · Activity Recognition · Clinical AI · Deep Learning · Multiple Sclerosis

What I Work On

Uncertainty Quantification

Developing methods that allow ML models to express calibrated confidence — so that it is clear when to trust a prediction and when to defer to a clinician.

Medical Signal Analysis

Applying deep learning to radar sensors, wearables, and physiological signals for patient activity recognition, monitoring, and real-time classification.

Explainable AI in Healthcare

Building model transparency tools that help clinicians understand, verify, and appropriately trust or question AI recommendations in clinical workflows.

Clinical Outcome Prediction

Predicting disease progression (e.g. multiple sclerosis disability) from longitudinal clinical data, with rigorous validation across international multi-center cohorts.

Robust & Fair ML

Investigating distribution shift, fairness across patient subgroups, and robustness to noise — essential properties for real-world clinical deployment of AI systems.

Real-Time Deep Learning

Designing efficient neural architectures (e.g. Split BiRNN) capable of low-latency inference on streaming sensor data for real-time clinical monitoring applications.

Latest Work

2026
Data-driven hypothesis discovery from disease trajectories in multiple sclerosis
Frontiers in Immunology
2026
Combining Magnetic Resonance Imaging and Evoked Potentials Enhances Machine Learning Prediction of Multiple Sclerosis Disability Worsening
Frontiers in Immunology
2025
Ising Machines for Model Predictive Path Integral-Based Optimal Control
NeurIPS Workshop: 2nd edition of Frontiers in Probabilistic Inference: Learning meets Sampling
2025
Leveraging Hand-Crafted Radiomics on Multicenter FLAIR MRI for Predicting Disability Progression in People with Multiple Sclerosis
Frontiers in Neuroscience
2025
The Role of Trustworthy and Reliable AI for Multiple Sclerosis
Frontiers in Digital Health
All publications →

Speaking

Jun 2025
Trustworthy and Reliable (Deep) Machine Learning for Healthcare (Invited)
IBEC · Barcelona, Spain
Jun 2024
Trustworthy ML for Healthcare: Challenges and Developments (Invited)
Winkelhaak · Antwerp, Belgium

View my full CV

Education, positions, publications, grants, and teaching — all in one place.
