Research

The lab's active research focuses on the central questions below. We invite students to join these teams or to propose new projects aligned with the lab's mission and vision.

What high-risk, high-reward opportunities are there for AI in primary care?

The Book Club

The Book Club is a national, multidisciplinary focus group that reviews actual clinical videos and EHR artifacts using a goal-free methodology. Meeting monthly, the group observes visits from its members' distinct disciplinary perspectives, questioning the status quo and proposing alternatives that either integrate technology into existing workflows or change the workflow itself by leveraging technology.

Team: Kuk Jang, Basam Alasaly


CLIPS Project Pipeline

CLIPS (Clinical Insights from Patient-Provider Snippets) uses crowdsourcing to identify opportunities for AI and advanced technology to improve ambulatory care, and to collect broadly interesting observations from viewers of short snippets of clinic visits. These data both create opportunities for computational research and generate metadata about the respective roles audio and video play in identifying clinical insights.

Team: Kuk Jang, Basam Alasaly


Designing a "Smart Clinic Room" Real-Time Visit Feedback Appliance

The Healthcare Room of the Future

We want to better understand what a "Smart Clinic Room" might contain to give AI an effective way to communicate with patients and health care providers. This work may leverage Wizard-of-Oz experiments, cognitive walkthroughs, iterative design, or other novel methods.


Reassessing Multimodal Integration in Video Question Answering Datasets

This study analyzes multimodal integration in Video Question Answering (VidQA) datasets using a novel Modality Importance Score derived from Multi-modal Large Language Models (MLLMs). Our critical analysis of popular VidQA benchmarks reveals an unexpected prevalence of questions biased toward a single modality, lacking the complexity that would require genuine multimodal integration. This exposes limitations in current dataset designs, suggesting many existing questions may not effectively test models' ability to integrate cross-modal information. Our work guides the creation of more balanced datasets and improves assessment of models' multimodal reasoning capabilities, advancing the field of multimodal AI.
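As a rough illustration of the idea (not the paper's exact formulation), the sketch below scores a question by ablating down to a single modality and checking whether a hypothetical MLLM wrapper, `mllm_answer`, still answers correctly; a question that no single modality can answer is one that genuinely requires multimodal integration.

```python
# A rough sketch of a single-modality ablation score, assuming a hypothetical
# mllm_answer(question, video=None, transcript=None) wrapper around an MLLM.
# The published Modality Importance Score may be computed differently; the point
# is that a modality-biased question is answerable from one modality alone.

def modality_scores(question, gold_answer, video, transcript, mllm_answer):
    """Return per-modality scores in [0, 1]; 1 means that modality alone suffices."""
    return {
        "visual": float(mllm_answer(question, video=video) == gold_answer),
        "textual": float(mllm_answer(question, transcript=transcript) == gold_answer),
    }

def needs_multimodal_integration(scores):
    # A question genuinely tests integration only if no single modality suffices.
    return all(score == 0.0 for score in scores.values())
```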


The intersection of human-AI interaction and healthcare informatics

By working at the intersection of human-AI interaction and healthcare informatics, we can help patients better understand their health. Tools such as interactive AI-generated summaries and real-time feedback systems could be crucial to a future in which patients and providers receive AI-supported feedback on health status during the visit itself, leading to more informed decision-making.


The Observer Project: How can we create a FAIR repository from raw clinic visit data?

[Image: a de-identified visit between a patient and their provider]

We are developing a pipeline (product TBN later!) that supports automated video de-identification using a combination of NLP, computer vision, and obfuscation algorithms to hide private information. We are interested in understanding the tradeoff between various privacy-preserving techniques and the utility of the video for AI model development and computer vision projects.
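As one illustrative piece of such a pipeline, here is a minimal sketch of blurring faces frame by frame with OpenCV's stock face detector; the real pipeline combines NLP, computer vision, and other obfuscation steps, and the file names and blur strength below are assumptions for illustration.

```python
# Minimal face-blurring sketch with OpenCV; one step of a fuller de-identification
# pipeline. Input/output file names and blur parameters are illustrative assumptions.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("visit_raw.mp4")  # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("visit_deid.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.1, 5):
        # Heavy blur removes identity while preserving coarse motion for CV research.
        face = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(face, (51, 51), 0)
    out.write(frame)

cap.release()
out.release()
```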


AI Brain Swap with One Clinic Visit Picture

To make the Observer Project data maximally useful to non-medical researchers, it will be important to make the data searchable using an ontology, knowledge graph, or other approach that enables high recall and precision. Work is underway using graph databases, but more ideas are welcome on this team!
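For a flavor of the graph-database direction, here is a minimal sketch assuming a Neo4j instance and its official Python driver: visits are linked to ontology concepts so a single concept query retrieves every matching visit. The node labels, relationship name, connection details, and concept code are illustrative assumptions, not the project's actual schema.

```python
# A minimal sketch assuming a Neo4j graph database. Schema details below
# (Visit, Concept, MENTIONS, the concept code) are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def index_visit(tx, visit_id, concept_codes):
    # Link a de-identified visit node to each ontology concept observed in it.
    for code in concept_codes:
        tx.run(
            "MERGE (v:Visit {id: $visit_id}) "
            "MERGE (c:Concept {code: $code}) "
            "MERGE (v)-[:MENTIONS]->(c)",
            visit_id=visit_id, code=code,
        )

def find_visits(tx, code):
    # High-recall retrieval: every visit linked to the queried concept.
    result = tx.run(
        "MATCH (v:Visit)-[:MENTIONS]->(:Concept {code: $code}) RETURN v.id AS id",
        code=code,
    )
    return [record["id"] for record in result]

with driver.session() as session:
    session.execute_write(index_visit, "visit-001", ["gait_abnormality"])  # hypothetical code
    print(session.execute_read(find_visits, "gait_abnormality"))
driver.close()
```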

Can AI help detect patients at risk for dementia?

Frailty Detection with Compositional Reasoning on Multimodal Models

We are developing a system to capture, process, and interpret the nuances of communication and non-verbal cues between patients and providers during medical consultations. These cues will be used to assess cognitive impairment and diagnose age-related frailty. The project has two aims: first, to identify the relevant events of interest (EOI) for frailty in patient-provider interactions; second, to develop a system that robustly identifies EOI in an interpretable, easily extensible manner adaptable to varying clinical contexts.
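To make the compositional idea concrete, here is a minimal sketch in which hypothetical per-modality primitives are combined into readable EOI rules; every detector name and threshold below is an illustrative assumption, but the structure shows how new clinical contexts could add or recombine primitives without rebuilding the system.

```python
# A minimal sketch of compositional EOI detection with hypothetical primitives.
# Every detector name and threshold is an illustrative assumption; the structure
# shows how EOI stay interpretable and extensible across clinical contexts.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class EOI:
    name: str
    rule: Callable[[Dict[str, bool]], bool]  # composes primitive detector outputs

# Hypothetical per-modality primitives over a time-aligned segment of visit features.
primitives: Dict[str, Callable[[dict], bool]] = {
    "slow_rise_from_chair": lambda seg: seg["rise_time_s"] > 2.0,       # video
    "word_finding_pause": lambda seg: seg["max_pause_s"] > 1.5,         # audio
    "provider_repeats_question": lambda seg: seg["repeated_question"],  # transcript
}

# Each EOI rule reads like a clinical description, keeping detections interpretable.
eois = [
    EOI("possible_frailty_marker", lambda p: p["slow_rise_from_chair"]),
    EOI("possible_cognitive_marker",
        lambda p: p["word_finding_pause"] and p["provider_repeats_question"]),
]

def detect(segment: dict) -> list:
    fired = {name: fn(segment) for name, fn in primitives.items()}
    return [e.name for e in eois if e.rule(fired)]
```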

One sign of an impending dementia diagnosis is alteration in the patient's gait. Although sensors can be used to measure gait, they are cumbersome to set up and take up valuable space in a clinic. We are looking for unobtrusive ways to measure and quantify change in gait over time, using a combination of video, computer vision, and 3D reconstruction.
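One unobtrusive starting point is video-only pose tracking. The sketch below, assuming MediaPipe Pose for 2D keypoints, tracks an ankle across frames and counts zero-crossings as a crude step proxy; the 3D reconstruction the project envisions would replace these normalized pixel coordinates with metric ones.

```python
# A minimal sketch assuming MediaPipe Pose for 2D keypoints; the clip name and
# the step-count heuristic are illustrative assumptions.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def ankle_x_series(video_path):
    """Track the horizontal left-ankle position across the frames of a clip."""
    xs = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose() as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                ankle = result.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_ANKLE]
                xs.append(ankle.x)
    cap.release()
    return xs

def step_count(xs):
    # Zero-crossings of the detrended ankle trajectory as a crude step proxy;
    # change in cadence across visits is the quantity of clinical interest.
    if not xs:
        return 0
    mean_x = sum(xs) / len(xs)
    centered = [x - mean_x for x in xs]
    return sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
```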


These projects are supported by funding from Dr. Johnson's NIH Pioneer Award, called REDUCE.

Thank you to our funders