This study aims to extract physiological information, particularly heart rate, a critical indicator of various medical events, from facial images using remote photoplethysmography (rPPG). Existing signal-processing-based rPPG algorithms and deep learning architectures have limitations in effectively processing facial video inputs under real-world conditions. To overcome these shortcomings, we propose a quaternion-valued convolutional neural network model to estimate rPPG signals. The quaternion-valued architecture enables simultaneous processing of four-channel inputs, which is well suited for modeling signals with varying phases and capturing complex spatial-color interactions. Color formats such as RGB, YUV, and HSL are explored to enrich the input representation, incorporating chrominance, luminance, and brightness components into the quaternion domain. This approach enhances feature extraction for rPPG and improves model robustness under challenging conditions. The effectiveness of the proposed model is demonstrated through extensive experiments under diverse illumination settings. In addition, motion-artifact conditions are simulated in realistic scenarios to evaluate the model's resilience. The results show that the proposed model achieves consistently high accuracy in heart rate estimation across various environments, highlighting its suitability for remote and non-contact healthcare applications. Overall, this study contributes to the development of reliable and efficient telemedicine systems, enabling continuous and non-invasive vital sign monitoring in future eHealth.
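As background for the quaternion-valued processing described above, the sketch below illustrates one common convention for encoding a color pixel as a pure quaternion (0 + R*i + G*j + B*k) and applying the Hamilton product, the core operation underlying quaternion convolution layers. This is a minimal illustration of the general technique; the paper's exact channel encoding and layer design may differ.

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of two quaternions given as (real, i, j, k) arrays."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k component
    ])

# Encode a color pixel as a pure quaternion: 0 + R*i + G*j + B*k.
# (A fourth component, e.g. luminance, could occupy the real part instead.)
pixel = np.array([0.0, 0.8, 0.5, 0.2])   # (real, R, G, B)

# A quaternion-valued weight, as would appear in a quaternion convolution kernel.
weight = np.array([0.5, 0.1, -0.2, 0.3])  # illustrative values

# One "multiply" step of a quaternion convolution: the weight mixes all
# four input channels jointly, unlike channel-wise real-valued convolution.
out = hamilton_product(weight, pixel)
```

Because the Hamilton product couples all four components of the input, a single quaternion weight captures cross-channel (spatial-color) interactions that a real-valued kernel would need four independent filters to approximate.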