Multimodal Sleep State Estimation

Explainable deep learning for EEG, ECG, and wearable fusion

Abstract

This project builds an explainable sleep-state estimation stack that fuses hospital-grade polysomnography (EEG, EOG, EMG) with wearable ECG and actigraphy data. By combining convolutional encoders with cross-modal Transformers, we produce consistent hypnogram predictions even when certain modalities are missing during home monitoring.
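The fusion idea above can be sketched as attention pooling over per-modality embeddings, with absent streams masked out of the attention scores. This is a minimal NumPy sketch, not the project's convolutional/Transformer stack; `fuse_modalities` and the mean-pooled query are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_modalities(tokens, present, d_k=None):
    """Attention-pool per-modality embeddings into one fused vector.

    tokens:  (M, D) array, one embedding per modality (EEG, ECG, actigraphy, ...).
    present: (M,) boolean mask; False marks a modality missing at inference time.
    Returns the (D,) fused representation and the attention weights.
    """
    M, D = tokens.shape
    d_k = d_k or D
    query = tokens[present].mean(axis=0)      # stand-in for a learned query
    scores = tokens @ query / np.sqrt(d_k)    # (M,) similarity scores
    scores[~present] = -np.inf                # missing modalities get zero weight
    weights = softmax(scores)
    return weights @ tokens, weights

# Example: ECG dropped during home monitoring
tokens = np.random.default_rng(0).normal(size=(3, 4))
fused, w = fuse_modalities(tokens, np.array([True, False, True]))
```

Because the mask is applied before the softmax, the fused vector is a convex combination of only the modalities actually recorded, which is what makes the prediction degrade gracefully rather than fail when a sensor is absent.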

Key Features

  • Cross-modal fusion of EEG, ECG, and PPG streams for resilient inference
  • Attention-based explanations that highlight waveform segments driving each sleep-stage decision
  • Domain adaptation bridging in-lab PSG datasets and at-home wearable signals
  • Edge deployment through model compression and on-device calibration workflows
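The edge-deployment feature above depends on model compression. One common building block is symmetric int8 post-training quantization of the weights; the sketch below is a generic illustration under that assumption, not the project's actual compression pipeline, and the helper names are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight array.

    Maps the largest-magnitude weight to +/-127 and returns the int8
    weights plus the scale needed to dequantize on-device.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 storage."""
    return q.astype(np.float32) * scale

# A 4x reduction in weight storage (float32 -> int8), at the cost of
# a reconstruction error bounded by half the quantization step.
w = np.random.default_rng(0).normal(size=(8, 8)).astype(np.float32)
q, scale = quantize_int8(w)
```

Per-channel scales and quantization-aware fine-tuning would typically follow in a real deployment, but the storage/accuracy trade-off is already visible at this level.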

Research Outputs

  • Sleep staging datasets curated from hospital studies and remote home-monitoring sessions
  • Visualization dashboards for clinicians and participants
  • Transfer-learning recipes to personalize models with only a few calibration nights
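One standard way to personalize with only a few calibration nights is to freeze the shared encoder and refit a small linear read-out on its features. The sketch below uses ridge regression on one-hot stage targets as a cheap stand-in for fine-tuning; `personalize_head` and the encoder interface are assumptions for illustration, not the project's published recipe.

```python
import numpy as np

def personalize_head(encoder, X_cal, y_cal, n_classes=5, lam=1e-2):
    """Fit only a linear read-out on frozen encoder features.

    encoder: callable mapping raw epochs (N, T) -> features (N, D); kept frozen.
    X_cal, y_cal: a few calibration nights of labeled 30-second epochs.
    Returns a scorer that maps new epochs to per-stage scores.
    """
    F = encoder(X_cal)                      # frozen features, (N, D)
    Y = np.eye(n_classes)[y_cal]            # one-hot sleep stages, (N, C)
    # Closed-form ridge solution: (F'F + lam*I) W = F'Y
    W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)
    return lambda X: encoder(X) @ W         # personalized scorer
```

Because only the `(D, n_classes)` read-out is refit, a night or two of labeled epochs is enough to adapt the model without disturbing the encoder trained on large in-lab PSG corpora.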

Team

Jaewoo Baek
Researcher, Hanwha Systems

His research interests include biological signal processing, machine learning, deep learning, and reinforcement learning.

Suwhan Baek
Researcher, Posco Holdings

His research interests include medical AI, AutoML, reinforcement learning, generative models, and spiking neural networks (SNNs).

Hyunsoo Yu
Researcher, LG Innotek

His research interests include experimental design, signal processing, machine learning, and artificial intelligence.