Research Projects

– Neuromorphic Control of Robotic Manipulators using Spiking Neural Networks with Reinforcement Learning
Our research at the Human Brain Neurocomputing Platform Research Center focuses on developing energy-efficient, biologically inspired robotic control systems using spiking neural networks (SNNs). Unlike traditional artificial neural networks (ANNs), which suffer from high energy consumption and real-time processing limitations, SNNs mimic the spike-based information processing of biological neurons, offering superior energy efficiency and temporal dynamics suitable for real-time applications. We are currently developing a neuromorphic hardware-friendly reward-modulated spike-timing-dependent plasticity (R-STDP) framework integrated with the twin delayed deep deterministic policy gradient (TD3) reinforcement learning algorithm for 3-degree-of-freedom robotic arm control. This approach simplifies complex neuromorphic learning schemes while enabling on-chip online learning with significantly reduced computational overhead compared to traditional backpropagation methods. Our work aims to bridge the gap between biological neural computation and practical robotic applications, demonstrating that SNN-based systems can achieve robust adaptive control while maintaining the ultra-low power consumption essential for next-generation autonomous systems and edge computing applications.
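As an illustration of the three-factor learning rule behind this framework, the sketch below shows a reward-modulated STDP update in NumPy: an STDP term accumulates in an eligibility trace, and a scalar reward from the RL loop gates the actual weight change. The trace formulation, time constants, and learning rate are illustrative assumptions, not the parameters used on our neuromorphic hardware.

```python
# Minimal sketch of reward-modulated STDP (R-STDP) for one dense layer.
# All constants below are illustrative assumptions.
import numpy as np

def r_stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace, elig, reward,
                a_plus=0.01, a_minus=0.012, tau_trace=0.02, tau_e=0.2,
                lr=0.05, dt=0.001):
    """One simulation step. w, elig: (n_pre, n_post); spike vectors are binary."""
    # Low-pass spike traces detect pre-before-post / post-before-pre timing.
    pre_trace = pre_trace * np.exp(-dt / tau_trace) + pre_spikes
    post_trace = post_trace * np.exp(-dt / tau_trace) + post_spikes

    # STDP term: LTP when a post spike follows recent pre activity,
    # LTD when a pre spike follows recent post activity.
    stdp = (a_plus * np.outer(pre_trace, post_spikes)
            - a_minus * np.outer(pre_spikes, post_trace))

    # Eligibility trace accumulates the (unrewarded) STDP term.
    elig = elig * np.exp(-dt / tau_e) + stdp

    # The reward signal gates the actual weight update (three-factor rule).
    w = np.clip(w + lr * reward * elig, 0.0, 1.0)
    return w, pre_trace, post_trace, elig
```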

– DA-Capnet: Dual Attention Deep Learning Based on U-Net for Nailfold Capillary Segmentation
This work proposes DA-Capnet, a dual-attention deep learning architecture based on U-Net for segmenting nailfold capillaries, the micro-vessels located under the nail that we analyze for non-invasive blood monitoring.
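As a rough illustration of what a dual-attention block on a U-Net skip connection can look like, the PyTorch sketch below applies channel attention followed by spatial attention. This CBAM-style formulation and the layer sizes are assumptions made for illustration, not the published DA-Capnet design.

```python
# Hypothetical dual-attention block (channel + spatial) for U-Net features.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight feature maps.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                                 # x: (N, C, H, W)
        x = x * self.channel_mlp(x)                       # channel re-weighting
        avg_map = x.mean(dim=1, keepdim=True)             # (N, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)           # (N, 1, H, W)
        attn = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * attn                                    # spatial re-weighting
```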

– Deep-ACTINet: End-to-End Deep Learning Architecture for Automatic Sleep-Wake Detection Using Wrist Actigraphy
Sleep scoring is the first step in diagnosing sleep disorders, and a variety of chronic diseases related to sleep disorders can be identified through sleep-state estimation. This paper presents an end-to-end deep learning architecture using wrist actigraphy, called Deep-ACTINet, for automatic sleep-wake detection using only noise-canceled raw activity signals recorded during sleep, without any feature engineering.
As a benchmark, the proposed Deep-ACTINet is compared with two conventional fixed-model-based sleep-wake scoring algorithms and four feature-engineering-based machine learning algorithms. The datasets were recorded from 10 subjects using three-axis accelerometer wristband sensors for eight hours in bed. The sleep recordings were analyzed using Deep-ACTINet and the conventional approaches, and the proposed end-to-end deep learning model achieved the highest performance, with an average accuracy of 89.65%, recall of 92.99%, and precision of 92.09%. These values were approximately 4.74% and 4.05% higher than those of the traditional model-based and feature-based machine learning algorithms, respectively.
In addition, the neuron outputs of Deep-ACTINet contained the most significant information for separating the asleep and awake states, which was demonstrated by their high correlations with conventionally significant features. Deep-ACTINet was designed as a general model and thus has the potential to replace the actigraphy algorithms currently equipped in wristband wearable devices.
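The sketch below illustrates the general shape of such an end-to-end model: a 1-D convolutional feature extractor over raw three-axis actigraphy windows, a recurrent layer, and a sleep/wake classifier. The layer sizes, window length, and sampling rate are illustrative assumptions, not the published Deep-ACTINet architecture.

```python
# Minimal end-to-end sleep/wake classifier on raw 3-axis actigraphy windows.
import torch
import torch.nn as nn

class ActigraphySleepWakeNet(nn.Module):
    def __init__(self, in_channels=3, window_len=3000):   # e.g. 1 min at 50 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(                     # temporal feature extractor
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.rnn = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, 2)             # sleep vs. wake logits

    def forward(self, x):                                  # x: (N, 3, window_len)
        h = self.features(x).transpose(1, 2)               # (N, T', 64)
        h, _ = self.rnn(h)
        return self.classifier(h[:, -1, :])                # per-window prediction
```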

– Deep ECGNet: An Optimal Deep Learning Framework for Monitoring Mental Stress Using Ultra Short-Term ECG Signals
Stress recognition using electrocardiogram (ECG) signals normally requires an intractable long-term heart rate variability (HRV) parameter extraction process.
This study proposes a novel deep learning framework, the Deep ECGNet, to recognize stressful states using ultra-short-term raw ECG signals without any feature engineering.
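The sketch below illustrates the data preparation this implies: the raw ECG is simply sliced into ultra-short-term windows and normalized, with no HRV feature extraction. The 10-second window length and 250 Hz sampling rate are assumed values for illustration.

```python
# Prepare ultra-short-term raw ECG windows for an end-to-end stress classifier.
import numpy as np

def make_windows(ecg, labels, fs=250, win_sec=10):
    """Slice a continuous ECG record into non-overlapping raw windows.

    ecg    : (n_samples,) raw ECG signal
    labels : (n_samples,) 0 = baseline, 1 = stress, aligned with ecg
    Returns (n_windows, fs*win_sec) windows and one label per window.
    """
    win = fs * win_sec
    n = len(ecg) // win
    X = ecg[:n * win].reshape(n, win)
    # Per-window z-score normalization instead of hand-crafted HRV features.
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    # Majority label within each window.
    y = (labels[:n * win].reshape(n, win).mean(axis=1) > 0.5).astype(int)
    return X, y
```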

– Bio-signal analysis using deep learning
Bio-signals are very important in the healthcare domain, so we analyze various bio-signal data such as ECG (electrocardiography) and PPG (photoplethysmography). We study and apply deep learning to process bio-signal data effectively. Furthermore, we are working to develop real-time health monitoring technology.
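As a small example of the kind of preprocessing applied before deep models, the sketch below band-pass filters a raw ECG or PPG trace with a zero-phase Butterworth filter; the cut-off frequencies are typical choices, not fixed project settings.

```python
# Zero-phase band-pass filtering of a 1-D bio-signal (e.g. ECG at 0.5-40 Hz).
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low_hz=0.5, high_hz=40.0, order=4):
    nyq = 0.5 * fs
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion of the waveform
```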

– Automatic image detection & data analysis
Recognizing and extracting images of interest and their captions from documents is a difficult but worthwhile task. We are studying how to extract and analyze such data using deep learning algorithms and then store it automatically.

– Reinforcement learning for feature selection
Analysis of EMG signals has been a topic of interest in recent years for classifying surface myoelectric signal patterns. Myoelectric control is an unconventional method for controlling upper-limb prostheses, human-assisting robots, and rehabilitation devices.

– Drowsiness detection: EEG and ECG analysis
Studies have investigated various physiological associations with fatigue to try to identify fatigue indicators. The current study assessed four electroencephalography (EEG) activities: delta (δ), theta (θ), alpha (α), and beta (β).
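The band powers behind such indicators can be computed as sketched below, using a Welch power spectral density estimate integrated over each band; the band edges are conventional values, not study-specific settings.

```python
# Absolute delta/theta/alpha/beta band power from a single EEG channel.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def eeg_band_powers(eeg, fs):
    """eeg: (n_samples,) single-channel EEG. Returns a dict of band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))  # 2-second segments
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() * df               # integrate PSD over the band
    return powers
```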

– Reinforcement learning research using games
Reinforcement learning is an area of machine learning inspired by behavioral psychology. The algorithm mimics the process by which humans learn from their mistakes. Because it resembles the way humans think, it is likely to be used in the field of strong AI.
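A minimal example of this trial-and-error loop is tabular Q-learning, sketched below; the environment interface (reset/step) is a generic assumption in the style of common RL toolkits.

```python
# Tabular Q-learning: learn action values from rewarded trial and error.
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration: occasionally try a possible "mistake".
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)   # assumed env interface
            # Temporal-difference update toward reward + discounted future value.
            target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q
```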

– Activation in the motor cortex
Motor imagery is a mental process by which an individual rehearses or simulates a given action. From a computer science perspective, we are studying how to estimate the intended action from the brain's electrical signals (EEG).
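A classical baseline for this kind of decoding is sketched below: log band power in the mu/beta range per channel, classified with linear discriminant analysis. This is an illustrative baseline, not the specific decoder used in our study.

```python
# Simple motor-imagery baseline: band-power features + LDA classifier.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power_features(trials, fs, band=(8, 30)):
    """trials: (n_trials, n_channels, n_samples) EEG epochs -> (n_trials, n_channels)."""
    freqs, psd = welch(trials, fs=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., mask].mean(axis=-1))  # log mu/beta band power per channel

def train_decoder(trials, labels, fs):
    X = band_power_features(trials, fs)
    return LinearDiscriminantAnalysis().fit(X, labels)
```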

– Medical Image/Video Processing
White blood cells (WBCs), also called leukocytes, are an important part of the immune system. Hidden infections and undiagnosed medical conditions can be estimated by counting WBCs. Nailfold capillaries are micro-vessels located under the nail. Our research aims to build a non-invasive WBC counting system by analyzing the blood flow within nailfold capillaries using machine learning.

– Arrhythmia classification
Arrhythmia is one of the major cardiovascular diseases (CVDs) and can cause sudden death. We are studying and applying deep learning to classify irregular heartbeats.
Furthermore, we are working on hyperparameter optimization to improve deep learning performance.
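The sketch below illustrates the hyperparameter optimization idea with a simple random search; the search space and the `train_and_evaluate` function are hypothetical placeholders standing in for a full training run of an arrhythmia classifier.

```python
# Random search over a small hyperparameter space for an arrhythmia classifier.
import random

SEARCH_SPACE = {                       # illustrative ranges, not project settings
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "n_filters": [16, 32, 64],
    "kernel_size": [3, 5, 7],
    "dropout": [0.1, 0.3, 0.5],
}

def random_search(train_and_evaluate, n_trials=20, seed=0):
    """train_and_evaluate(config) -> validation score (hypothetical callback)."""
    rng = random.Random(seed)
    best_score, best_config = -1.0, None
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_evaluate(config)      # e.g. validation F1 on heartbeat classes
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```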

– Reinforcement Learning Personal Authentication System Using ECG Features
Electrocardiogram (ECG) data changes on a daily basis as the measurement point and environment change. Since ECG data has characteristics unique to each individual, we measured and tested a dataset for personal authentication. After measurement, we reduced noise using a Finite Impulse Response (FIR) filter. Additionally, we extracted 31 features such as amplitude, interval, slope, and angle. These 31 features were then fed into a reinforcement learning network, which returns the most valuable (high-reward) features as its output.
The selected high-value features and the original 31 features are then plugged into a Support Vector Machine (SVM) and a Random Forest (RF) to obtain the final classification based on amplitude, interval, and angle features. As a result, the accuracy of the combined-feature classification varies; however, the accuracy obtained with the features selected by reinforcement learning is considerably high.
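The final classification stage described above can be sketched as follows: the 31 extracted ECG features, or the subset selected by the reinforcement learning step, are evaluated with an SVM and a Random Forest. The selection indices and the cross-validation setup are illustrative assumptions.

```python
# Evaluate a selected ECG feature subset with SVM and Random Forest classifiers.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate_feature_subset(X, y, selected_idx):
    """X: (n_beats, 31) ECG features, y: subject IDs, selected_idx: RL-chosen columns."""
    X_sel = X[:, selected_idx]
    svm_acc = cross_val_score(SVC(kernel="rbf", C=1.0), X_sel, y, cv=5).mean()
    rf_acc = cross_val_score(RandomForestClassifier(n_estimators=200), X_sel, y, cv=5).mean()
    return {"svm": svm_acc, "rf": rf_acc}
```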