Opportunities

Welcome to the Computationally-Enhanced Health Care Laboratory!

Our lab is at the forefront of integrating advanced computational techniques with healthcare practices to improve patient outcomes and clinical interactions. We are excited to share our diverse range of ongoing and planned projects that explore innovative approaches in machine learning, artificial intelligence, and multimodal data analysis. We encourage collaborators and students from all disciplines to explore the potential applications of these technologies in healthcare.

If any projects pique your interest or you have any questions, please do not hesitate to contact us. We look forward to collaborating with you to advance the field of computationally-enhanced health care.

Ongoing Projects

This project aims to develop automated methods and systems for transforming patient-clinician interactions: a system that captures, processes, and interprets the nuances of communication and nonverbal cues between patients and providers during medical consultations. Currently, we are exploring video language models combined with neurosymbolic approaches to enhance clinical interaction analysis.

This project aims to leverage Amazon Mechanical Turk to obtain ground-truth labels for patient-provider visits, with a vision of developing supervised machine learning algorithms. More than 1,000 one-minute clips hosted on Cloudflare Stream will be labeled for emotion by Mechanical Turk workers via the REDCap survey platform, using AWS Lambda functions as a survey-generation backend.
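The labeling pipeline above could be wired together in many ways; the sketch below shows one illustrative shape for the Lambda survey-generation step, emitting one REDCap-style emotion-rating item per Cloudflare Stream clip. The field names, the emotion list, and the URL pattern are assumptions for illustration, not the project's actual schema.

```python
import json

# Hypothetical emotion scale for the rating task (illustrative only).
EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise", "neutral"]

def build_survey_items(clip_ids):
    """Return one REDCap-style radio-button item per video clip."""
    items = []
    for clip_id in clip_ids:
        items.append({
            "field_name": f"emotion_{clip_id}",
            "field_label": (
                "Which emotion best describes this clip? "
                f"https://customer.cloudflarestream.com/{clip_id}/watch"
            ),
            "field_type": "radio",
            # REDCap choice syntax: "code, label | code, label | ..."
            "choices": " | ".join(f"{i}, {e}" for i, e in enumerate(EMOTIONS)),
        })
    return items

def lambda_handler(event, context):
    """AWS Lambda entry point: expects {"clip_ids": [...]} in the request body."""
    clip_ids = json.loads(event["body"])["clip_ids"]
    return {"statusCode": 200, "body": json.dumps(build_survey_items(clip_ids))}
```

Keeping the item-building logic separate from the handler makes it easy to unit-test without deploying to AWS.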

This project aims to automate the Roter Interaction Analysis System (RIAS), a method for coding medical dialogue, using machine learning approaches. RIAS, a widely adopted modification of the Bales system, is used in the US and Europe, as well as for medical exchanges in Asia, Africa, and Latin America.

This project studies the contribution of multimodal data to improving the detection and analysis of non-verbal cues in patient-clinician communication. In addition, it aims to develop automated methods for enhancing non-verbal clinical interaction analysis.

This project aims to have a small but diverse group of people (MDs, APPs, PhDs, patients, anthropologists/sociologists, etc.) view full clinic encounters and their encounter summaries, then report back what they did or did not observe, keeping in mind the goal of transforming the clinical encounter.

This project is focused on developing a pipeline to de-identify multimodal Protected Health Information (PHI) data, such as audio, video, transcripts, and annotations, within clinical settings. The pipeline aims to safeguard patient data confidentiality for the REDUCE project by leveraging advanced computer vision models and specialized de-identification scripts.
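One representative step in such a pipeline is obscuring detected faces in video frames. The sketch below pixelates assumed face bounding boxes with NumPy; in practice the boxes would come from the computer vision models the project leverages (the function name and block size are illustrative assumptions).

```python
import numpy as np

def pixelate_regions(frame, boxes, block=16):
    """Replace each (x, y, w, h) region of the frame with coarse pixel blocks.

    `frame` is an H x W x C image array; `boxes` are face-detection outputs
    assumed to be produced upstream by a separate model.
    """
    out = frame.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = roi[by:by + block, bx:bx + block]
                # Collapse the patch to its mean color, destroying identity cues.
                patch[...] = patch.mean(axis=(0, 1), keepdims=True).astype(out.dtype)
    return out
```

Pixelation (rather than blurring) is shown here because it is simple and not trivially invertible; a production pipeline would pair it with audio and transcript de-identification.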

This project aims to create an infrastructure that enables the storage, management, and access control of multimodal data gathered during clinical encounters. The project features a dashboard, among other tools, which provides insights into the data, thereby enhancing both analysis and accessibility.

This research initiative utilizes a multimodal approach to identify and categorize nonverbal communication events between patients and providers. It investigates the potential of large language models (LLMs) to classify nonverbal cues in a zero-shot setting, with the objective of improving the understanding of nonverbal communication within the healthcare sector.
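Zero-shot classification of this kind typically reduces to prompt construction plus mapping the model's free-text reply onto a closed label set. The sketch below shows that shape; the cue labels and prompt wording are illustrative assumptions, and `call_llm` stands in for whichever chat-completion client the project actually uses.

```python
# Hypothetical closed label set for nonverbal cues (illustrative only).
CUE_LABELS = ["nod", "smile", "eye contact", "lean forward", "frown", "none"]

def build_prompt(description):
    """Compose a zero-shot prompt asking for exactly one label."""
    return (
        "You are annotating nonverbal communication in a clinical encounter.\n"
        f"Observation: {description}\n"
        f"Answer with exactly one label from: {', '.join(CUE_LABELS)}."
    )

def parse_label(reply):
    """Map a free-text model reply onto the closed label set."""
    reply = reply.lower()
    for label in CUE_LABELS:
        if label in reply:
            return label
    return "none"

def classify(description, call_llm):
    """Classify one observation; `call_llm` maps a prompt string to a reply string."""
    return parse_label(call_llm(build_prompt(description)))
```

Constraining the model to a fixed label list and parsing defensively keeps the zero-shot output usable even when the model adds extra wording around its answer.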

This project aims to utilize large language models, such as GPT, to generate high-quality responses to patient in-basket messages, enhancing the efficiency and quality of patient-provider communication and significantly reducing healthcare providers' documentation burden.  
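A minimal sketch of this drafting step, assuming an OpenAI-style chat client, is shown below. The system prompt, clinician name, and model name are illustrative assumptions; the essential design point is that the model produces a draft for the provider to review and edit, not a message sent directly to the patient.

```python
def build_messages(patient_message, clinician_name="Dr. Example"):
    """Assemble a chat request asking the model to draft a reply for review."""
    return [
        {"role": "system",
         "content": ("Draft a concise, empathetic reply to the patient's "
                     f"message for {clinician_name} to review and edit. "
                     "Do not give a diagnosis; recommend a visit when unsure.")},
        {"role": "user", "content": patient_message},
    ]

def draft_reply(client, patient_message, model="gpt-4o"):
    """Request a draft via an OpenAI-style chat-completions client."""
    resp = client.chat.completions.create(
        model=model, messages=build_messages(patient_message))
    return resp.choices[0].message.content
```

Separating message assembly from the API call makes the prompt easy to audit and test offline.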

A preceptorship with The Computationally Enhanced Health Care Laboratory offers students a unique opportunity to dive into the interdisciplinary study of healthcare and technology, specifically through the lens of machine learning and biomedical informatics. The core project, REDUCE, seeks to lessen the burden of provider documentation and redefine the learning outcomes from clinical encounters. In addition to REDUCE, the preceptorship encompasses a variety of projects aimed at integrating artificial intelligence (AI) into healthcare. These include leveraging Amazon Mechanical Turk to obtain ground-truth insights on patient-provider interactions, automating the Roter Interaction Analysis System (RIAS) for coding medical dialogue using transcripts pre-coded by experts, and a book club that gathers perspectives on clinical encounters from a diverse mix of professionals for use in supervised machine learning.

Penn has long championed undergraduate research, exemplified by the Penn Undergraduate Research Mentoring Program (PURM). This initiative empowers students in their first or second undergraduate year to delve into groundbreaking research during the summer, mentored by esteemed Penn faculty. PURM offers a unique hands-on exploration and learning opportunity, fostering invaluable skills and insights. Under the guidance of experienced mentors, students engage in cutting-edge projects across diverse disciplines, contributing to scholarly advancements and personal growth. Participants expand their academic horizons and develop the critical thinking, problem-solving, and collaboration abilities essential for future success. By investing in undergraduate research, Penn cultivates a culture of innovation and inquiry, preparing students to tackle real-world challenges and make meaningful contributions to their fields, and reinforces its commitment to academic excellence and the development of the next generation of scholars and leaders.

Planned Projects

This project aims to develop robust computer vision techniques for gait abnormality detection in a clinical setting. The goal is to advance state-of-the-art gait analysis by improving the robustness of pose estimation, enabling real-time deployment in diverse environments.
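Downstream of pose estimation, gait abnormality detection often reduces to computing interpretable features from keypoint trajectories. As one illustrative example (not the project's actual method), the sketch below estimates step-length asymmetry from per-frame ankle positions, assuming keypoints are supplied by an upstream pose estimator.

```python
import numpy as np

def step_length_asymmetry(left_ankle_x, right_ankle_x):
    """Return a ratio in [0, 1]; 0 means perfectly symmetric steps.

    Step length is approximated by the peak horizontal ankle separation
    when each foot, in turn, is furthest ahead during the stride.
    """
    sep = np.asarray(left_ankle_x, dtype=float) - np.asarray(right_ankle_x, dtype=float)
    left_step = sep.max()    # left ankle furthest ahead of the right
    right_step = -sep.min()  # right ankle furthest ahead of the left
    longer = max(left_step, right_step)
    return abs(left_step - right_step) / longer if longer > 0 else 0.0
```

Simple symmetry ratios like this are easy to validate clinically; robustness to noisy keypoints (the stated research challenge) would come from better pose estimation and temporal smoothing upstream.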