Multimodal Sleep State Estimation
Explainable deep learning for EEG, ECG, and wearable fusion
Abstract
This project builds an explainable sleep-state estimation stack that fuses hospital-grade polysomnography (EEG, EOG, EMG) with wearable ECG and actigraphy data. By combining convolutional encoders with cross-modal Transformers, the system produces consistent hypnogram predictions even when some modalities are missing during home monitoring.
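The missing-modality behavior described above can be sketched as masked attention pooling over per-modality embedding tokens. This is a minimal illustrative sketch, not the project's actual implementation: the function name `fuse_modalities`, the mean-pooled query (standing in for a learned query), and the token dimensions are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_modalities(tokens, available, d=16):
    """Attention-pool one embedding token per modality into a fused vector.

    tokens:    (M, d) array, one embedding per modality (e.g. EEG, ECG, actigraphy)
    available: (M,) boolean mask; missing modalities are excluded from attention
    """
    query = tokens[available].mean(axis=0)   # crude stand-in for a learned query
    scores = tokens @ query / np.sqrt(d)     # scaled dot-product attention scores
    scores[~available] = -np.inf             # masked modalities get zero weight
    weights = softmax(scores)
    return weights @ tokens, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 16))            # hypothetical EEG, ECG, actigraphy tokens
fused, w = fuse_modalities(tokens, np.array([True, True, False]))
```

Because masked scores are set to negative infinity before the softmax, the fused vector is a convex combination of only the available modalities, so inference degrades gracefully when a wearable stream drops out.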
Key Features
- Cross-modal fusion of EEG, ECG, and PPG streams for resilient inference
- Attention-based explanations that highlight waveform segments driving each sleep-stage decision
- Domain adaptation bridging in-lab PSG datasets and at-home wearable signals
- Edge deployment through model compression and on-device calibration workflows
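To make the attention-based explanation feature concrete, the sketch below maps per-segment attention weights back to time ranges inside a 30-second scoring epoch, so a clinician can see which waveform segments drove a stage decision. The helper name `top_segments`, the segment count, and the example weights are hypothetical, not taken from the project.

```python
import numpy as np

def top_segments(attn, epoch_sec=30.0, k=2):
    """Map per-segment attention weights back to time ranges within one epoch.

    attn: (S,) attention weights over S equal-length segments of the epoch
    Returns the k highest-weighted (start_s, end_s, weight) tuples, in time order.
    """
    seg = epoch_sec / len(attn)
    order = np.argsort(attn)[::-1][:k]       # indices of the k largest weights
    return [(round(i * seg, 2), round((i + 1) * seg, 2), float(attn[i]))
            for i in sorted(order)]

# Example: 8 segments of 3.75 s each; segments 2 and 4 dominate.
attn = np.array([0.05, 0.10, 0.40, 0.05, 0.25, 0.05, 0.05, 0.05])
highlights = top_segments(attn)              # [(7.5, 11.25, 0.4), (15.0, 18.75, 0.25)]
```

Returning time ranges rather than raw indices keeps the explanation directly overlayable on the waveform viewer regardless of the model's internal segmentation.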
Research Outputs
- Sleep staging datasets curated from hospital studies and remote at-home wearable recordings
- Visualization dashboards for clinicians and participants
- Transfer-learning recipes to personalize models with only a few calibration nights
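One common recipe for few-night personalization is to freeze the shared encoder and refit only a lightweight per-user readout on calibration epochs. The sketch below illustrates that pattern with a ridge-regression head over frozen features; the function name `personalize_head`, the feature dimension, and the five-class stage set are assumptions, not the project's published recipe.

```python
import numpy as np

def personalize_head(features, labels, n_classes=5, ridge=1.0):
    """Fit a per-user linear readout on frozen encoder features.

    features: (N, D) embeddings from the shared encoder (kept frozen)
    labels:   (N,) integer sleep stages from a few calibration nights
    Returns a (D, n_classes) ridge-regression weight matrix.
    """
    Y = np.eye(n_classes)[labels]            # one-hot stage targets
    D = features.shape[1]
    # Closed-form ridge solution: (X^T X + lambda I)^-1 X^T Y
    W = np.linalg.solve(features.T @ features + ridge * np.eye(D), features.T @ Y)
    return W

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 32))           # hypothetical frozen-encoder embeddings
labels = rng.integers(0, 5, size=200)        # hypothetical calibration-night stages
W = personalize_head(feats, labels)
pred = np.argmax(feats @ W, axis=1)          # per-epoch stage predictions
```

Because only a D-by-5 matrix is fit per user, a handful of calibration nights suffices and the update is cheap enough to run on-device.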