As a Ph.D. researcher in human-computer interaction, I build computational models to decode the complexities of human social behavior. My approach combines multimodal machine learning with causal statistical inference to create AI-driven tools that support and enhance human interactions in social-good domains such as healthcare and education.
I completed my Ph.D. and M.S. degrees at Carnegie Mellon University, where I had the privilege of being advised by Dr. Louis-Philippe Morency. My dissertation focused on modeling intrapersonal and interpersonal, verbal and nonverbal behavior during social interactions between healthcare professionals and their clients, spanning a wide range of social behaviors, from linguistic entrainment to nonverbal gaze aversion. This research was highly interdisciplinary and involved close collaboration with clinical partners at the University of Pittsburgh Medical Center and Harvard's McLean Hospital.
Most recently, as a postdoctoral fellow at CMU's Human Sensing Lab with Dr. Fernando De la Torre and Dr. Lori Holt, I developed computational models of engagement and rapport. This work drew on a uniquely broad and diverse dataset of over 1,600 Zoom conversations between individuals from across the US, giving me the opportunity to develop and implement new analytical strategies for complex, real-world social data.
I'm currently seeking to apply my expertise in social signal processing, affective computing, and transparent AI as a research scientist in an innovative, academically focused environment. I am especially eager to continue applying my computational skillset to meaningful applications in psychology and the social sciences.
My research interests include:
Social Signal Processing: advanced modeling of social interaction, cognitive-affective modeling frameworks, and multimodal communication analysis for both retrospective and real-time applications.
Multimodal Machine Learning: integration of verbal and non-verbal modalities, development and implementation of novel probabilistic and causal modeling techniques, and temporal sequence modeling for dynamic data interpretation.
Affective Computing: emotion recognition systems, predictive modeling of affective states, and enhancement of human-computer interaction through emotion-adaptive interfaces.
Artificial Intelligence: personalized user modeling and adaptation, comprehensive multimodal behavior analysis, and development of explainable and transparent AI systems.
Healthcare & Learning Technologies: AI-driven diagnostic support tools, innovations in computational psychiatry, adaptive intelligent tutoring systems, and the development of personalized learning environments.