
As a Ph.D. researcher in human-computer interaction, I build computational models to decode the complexities of human social behavior. My approach combines multimodal machine learning with causal statistical inference to create AI-driven tools that support and enhance human interactions in social-good domains such as healthcare and education.
I completed my Ph.D. and M.S. degrees at Carnegie Mellon University, where I had the privilege of being advised by Dr. Louis-Philippe Morency. My dissertation focused on modeling intrapersonal and interpersonal, verbal and nonverbal behavior during social interactions between healthcare professionals and their clients, covering a wide range of social behaviors, from linguistic entrainment to nonverbal gaze aversion. This research was highly interdisciplinary and involved close collaboration with clinical partners at the University of Pittsburgh Medical Center and Harvard's McLean Hospital.
My research interests include:
Social Signal Processing: advanced modeling of social interaction, cognitive-affective modeling frameworks, and multimodal communication analysis for both retrospective and real-time applications.
Multimodal Machine Learning: integration of verbal and non-verbal modalities, development and implementation of novel probabilistic and causal modeling techniques, and temporal sequence modeling for dynamic data interpretation.
Affective Computing: emotion recognition systems, predictive modeling of affective states, and enhancement of human-computer interaction through emotion-adaptive interfaces.
Artificial Intelligence: personalized user modeling and adaptation, comprehensive multimodal behavior analysis, and development of explainable and transparent AI systems.
Healthcare & Learning Technologies: AI-driven diagnostic support tools, innovations in computational psychiatry, adaptive intelligent tutoring systems, and the development of personalized learning environments.