Biomedical Engineering Students Share Novel Research Findings


November 16, 2023


In the Assistive Robotics & Tele-Medicine (ART-Med) Lab, Biomedical Engineering Professor Chung Hyuk Park and his students study collaborative innovation between human intelligence and robotic technology, integrating machine learning, computer vision, haptics, and telepresence robotics. Two Ph.D. candidates whom Park advises, Mariia Sidulova and Baijun Xie, recently published their novel research in prominent journals and conferences in the biomedical engineering field. Learn more about their papers below.

Improving Functional Connectivity Analysis

Generative models, such as Variational Autoencoders (VAEs), are increasingly employed for atypical pattern detection in brain imaging. In the paper “Conditional Variational Autoencoder for Functional Connectivity Analysis of Autism Spectrum Disorder Functional Magnetic Resonance Imaging Data: A Comparative Study,” Sidulova leverages VAEs to conduct functional connectivity (FC) analysis of functional magnetic resonance imaging (fMRI) scans of autistic individuals.
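For readers curious about the underlying data representation, the sketch below shows one standard way an FC matrix is derived from fMRI region-of-interest time series; the atlas size, the random data, and the flattening step are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: turning fMRI ROI time series into a functional
# connectivity (FC) matrix, the kind of input a VAE could operate on.
import numpy as np

def fc_matrix(roi_timeseries: np.ndarray) -> np.ndarray:
    """Pearson-correlation FC matrix from ROI time series.

    roi_timeseries: array of shape (n_rois, n_timepoints), one BOLD
    time course per brain region.
    Returns an (n_rois, n_rois) symmetric correlation matrix.
    """
    return np.corrcoef(roi_timeseries)

# Toy example: 116 regions (an AAL-style atlas, assumed), 200 timepoints.
rng = np.random.default_rng(0)
ts = rng.standard_normal((116, 200))
fc = fc_matrix(ts)

# Models often consume the flattened upper triangle (no redundant entries).
iu = np.triu_indices_from(fc, k=1)
fc_vector = fc[iu]            # shape: (116 * 115 / 2,) = (6670,)
print(fc.shape, fc_vector.shape)
```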

Figure from Mariia's research

This novel study, published in Bioengineering in October 2023, compares multiple VAE architectures – a Conditional VAE, a Recurrent VAE, and a hybrid VAE that pairs a CNN in parallel with an RNN – to establish the effectiveness of VAEs in FC analysis. Their main finding was that the CNN-based model was the most effective architecture for FC analysis, showing superior reconstruction performance both with and without conditional information.
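As a rough illustration of the evaluation criterion, the sketch below implements a generic fully connected VAE and the reconstruction-plus-KL objective by which such architectures are typically compared; all layer sizes and the MSE reconstruction term are placeholder assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's architecture): a fully connected VAE over
# flattened FC vectors, scored by reconstruction error plus a KL penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=6670, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term + KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

model = VAE()
x = torch.randn(8, 6670)          # a batch of flattened FC vectors
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```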

Autism exhibits a higher prevalence among males than females. Sidulova and Park therefore also investigated whether introducing phenotypic data could improve the performance of VAEs and, consequently, FC analysis. Their research demonstrated that introducing phenotypic data to the model generally improves reconstruction performance and reduces bias in FC analysis.
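One common way to inject such conditional information into a VAE, shown below as a hedged sketch, is to concatenate a phenotypic condition vector (here, a one-hot sex label) to both the encoder input and the latent code; the exact conditioning scheme in the paper may differ.

```python
# Illustrative conditioning scheme, not the paper's exact architecture:
# the phenotypic vector c is appended to the encoder input and to z.
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, in_dim=6670, cond_dim=2, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim + cond_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))        # condition the encoder
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar  # and the decoder

x = torch.randn(8, 6670)                     # flattened FC vectors
c = torch.eye(2)[torch.randint(0, 2, (8,))]  # one-hot phenotypic label (assumed)
recon, mu, logvar = ConditionalVAE()(x, c)
print(recon.shape)
```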

Developing Artificial Social Intelligence

Figure from Baijun's research

Social intelligence has monumental utility in the daily lives of humans, allowing them to engage with others through complex signals like facial expressions, body motion, and speech. Humans acquire this ability readily through study and experience, but enabling machines to develop it remains challenging. In the paper “Multi-Modal Correlated Network with Emotional Reasoning Knowledge for Social Intelligence Question-Answering,” Xie worked to solve this pressing problem and presented his findings at the Artificial Social Intelligence Workshop of the 2023 International Conference on Computer Vision (ICCV) in Paris, France.

The authors’ proposed solution is a Multi-Modal Temporal Correlated Network with Emotional Social Cues (MMTC-ESC), which employs an attention-based mechanism for modeling cross-modal correlations and utilizes emotional social cues for contrastive learning. Their results suggest that combining multimodal inputs and contrastive loss is beneficial for improving social intelligence learning performance.
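The sketch below illustrates, with toy tensors, the two generic ingredients named here – attention across modalities and an InfoNCE-style contrastive loss. It is not the MMTC-ESC implementation; the embedding size, modality shapes, and the pairing of clip embeddings with emotional-cue embeddings are all assumptions.

```python
# Generic construction of cross-modal attention + contrastive learning,
# illustrative only; not the MMTC-ESC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 128                                   # shared embedding size (assumed)
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

# Toy features: video and audio frames already projected to dimension d.
video = torch.randn(8, 30, d)
audio = torch.randn(8, 50, d)

# Video queries attend over audio: one cross-modal correlation step.
fused, _ = cross_attn(query=video, key=audio, value=audio)
clip_emb = fused.mean(dim=1)              # pooled clip-level embedding

def contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each anchor should match its own positive,
    e.g., an embedding of the same clip's emotional social cues."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature
    targets = torch.arange(len(a))
    return F.cross_entropy(logits, targets)

cue_emb = torch.randn(8, d)               # stand-in for emotional-cue embeddings
print(contrastive_loss(clip_emb, cue_emb).item())
```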