I build machine learning systems that are not only accurate but also interpretable, calibrated, and safe, making AI a trusted partner in clinical decision-making.
I am a Postdoctoral Fellow at Ghent University working at the intersection of machine learning, trustworthy AI, and healthcare. My research focuses on building models that are not only accurate but also interpretable, reliable, and safe for clinical deployment.
I develop methods for uncertainty quantification, explainability, and robustness in deep learning systems applied to medical signals, wearable sensor data, and clinical outcome prediction, with the goal of making AI a trusted partner in healthcare decision-making.
I earned my PhD in Computer Science from Ghent University (2023), where my dissertation centred on uncertainty quantification and robust deep learning for time-series problems.
Developing methods that allow ML models to express calibrated confidence — making it possible to know when to trust a prediction and when to defer to a clinician.
Applying deep learning to radar sensors, wearables, and physiological signals for patient activity recognition, monitoring, and real-time classification.
Building model transparency tools that help clinicians understand, verify, and appropriately trust or question AI recommendations in clinical workflows.
Predicting disease progression (e.g. multiple sclerosis disability) from longitudinal clinical data, with rigorous validation across international multi-centre cohorts.
Investigating distribution shift, fairness across patient subgroups, and robustness to noise — essential properties for real-world clinical deployment of AI systems.
Designing efficient neural architectures (e.g. Split BiRNN) capable of low-latency inference on streaming sensor data for real-time clinical monitoring applications.
Education, positions, publications, grants, and teaching — all in one place.