Project number
25041
Organization
Kidney ADVANCE Project - NIH/ACABI
Offering
ENGR498-F2024-S2025
Project Background/Scope: The health care encounter – whether in the medical office, clinic, hospital, home, or field – is critical in obtaining relevant information to guide the delivery and accuracy of care. Studies have shown that more than 70% of diagnoses and advances in care stem from the physician or health worker carefully questioning and observing the patient. Sadly, patient encounters today have grown shorter, with the physician hampered during the exam by the burden of electronic health record (EHR) data entry and computer use. Studies have also shown that many correct diagnoses are made by the doctor using information such as: what the patient says and how they say it, how the patient looks and acts, how the patient behaves, how the patient sits, how the patient walks, and other information gained through focused, attentive, one-on-one patient encounters and consultations. In routine doctor–patient interactions today, much of this information is not recorded and is lost.
This project will develop tools to be used in a “Smart Patient Exam Room” to capture information that the physician often misses, AS WELL AS DEVELOP TOOLS THAT GO BEYOND – TO EXTRACT NEW INFORMATION – creating VERBAL AND DIGITAL BIOMARKERS OF DISEASE. This project benefits from and builds upon the work of two previous Sr. Design teams, who built a basic system to capture sound and image, and from a dedicated room in COM-T allocated for this purpose by the medical school.
Requirements: I. Hardware – refresh the kit for sound and visual capture (building on the prior Sr. Design teams' work). The kit should support both portable use (in any space) and fixed use (in the dedicated exam room in the medical school), and include high-fidelity microphones and cameras. II. Software/Computational Tools/AI – the team will build three sets of tools:
1. Voice-to-Text Symptom Frequency Index and related common-word/keyword index analysis tools.
Step one: Develop a dictionary of diagnostic terms from the medical “review of systems” (symptoms and signs) and from short recordings of patients with specific diseases; e.g., a patient with dyspnea might say, "I have been having a hard time catching my breath recently. I can't walk around the grocery store without stopping to catch my breath..." All of this forms the analytic lexicon.
Step two: Voice-to-text and word frequency analysis. Using a speech-to-text system such as Whisper, transcribe the recorded audio to text. Analyze all words spoken and record their frequency, then compare the patient’s speech to the dictionary to identify all diagnostic terms/keywords. Then create a ranked symptom and sign frequency index, including such endpoints as: the number of times a word was used over an entire conversation; % diagnostic term used = (# times keyword used) / (all words); inter-word frequency of specific terms; and other endpoints TBD.
Step three: Speech semantic and sentiment analysis tools. Develop semantic and sentiment analysis tools to extract meaning and emotional content, yielding additional quantitative verbal biomarkers.
Step four: Develop a standardized report to display data, upload it to the EHR or a secure cloud, and print it. Keywords and endpoints can be graphed and compared across multiple visits to see how often the patient uses the diagnostic words; e.g., a patient with dyspnea reports being “short of breath” 15 times at the first visit and 9, 3, 1, and 0 times at subsequent visits, which would indicate improvement.
Step five: AI assessment of diagnosis and therapy. Determine suggested diagnosis and next steps by querying open and closed generative AI models (e.g., ChatGPT-4o and Claude vs. Llama).
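The Step-two endpoints can be sketched as follows, assuming the audio has already been transcribed (e.g., by Whisper) to a plain-text string. The lexicon of diagnostic phrases below is illustrative only, not the project's actual dictionary:

```python
# Minimal sketch of the keyword-frequency endpoints from Step two.
# Assumes transcription has already produced a text string; the lexicon
# here is a placeholder for the Step-one diagnostic dictionary.
import re

LEXICON = {"short of breath", "catching my breath", "dizzy", "chest pain"}

def tokenize(text: str) -> list[str]:
    """Lowercase and split into word tokens, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def frequency_index(transcript: str, lexicon=LEXICON) -> dict:
    words = tokenize(transcript)
    total = len(words)
    normalized = " ".join(words)
    # Count each lexicon phrase in the normalized word stream.
    counts = {phrase: normalized.count(phrase) for phrase in lexicon}
    # % diagnostic term used = (# times keyword used) / (all words)
    pct = {p: (c / total if total else 0.0) for p, c in counts.items()}
    return {"total_words": total, "keyword_counts": counts, "keyword_pct": pct}
```

Serial comparison then reduces to running `frequency_index` on each visit's transcript and graphing the per-phrase counts over time.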
2. Facial Affect/Mood/Happiness Indicator - using cameras and facial recognition, the team will design a system to analyze mood and happiness based on facial expressive characteristics, grimace, and related visual expressive signs. AI and machine learning will be used to refine the diagnostic readout.
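One simple geometric feature such a system might start from is a "smile score": how far the mouth corners sit above the lip midline. The sketch below is a hedged illustration only; it assumes (x, y) landmark points have already been extracted by a face-landmark detector (e.g., MediaPipe FaceMesh), and the input format and scoring formula are assumptions, not the project's design:

```python
# Illustrative smile-score feature computed from four assumed mouth
# landmarks (left/right corners, upper/lower lip midpoints).
# Image y grows downward, so raised corners have SMALLER y values.
def smile_score(left_corner, right_corner, upper_lip, lower_lip):
    """Positive when the mouth corners are raised above the lip midline;
    divided by mouth width so the score is scale-invariant."""
    mid_y = (upper_lip[1] + lower_lip[1]) / 2
    corner_y = (left_corner[1] + right_corner[1]) / 2
    width = abs(right_corner[0] - left_corner[0]) or 1.0
    return (mid_y - corner_y) / width
```

A trained classifier would replace hand-built features like this, but a score of this kind gives the team a cheap baseline to validate the camera pipeline against.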
3. Motion analysis tool - the team will use Google MediaPipe to record patient motion, create a visual skeleton that can be played back, and develop basic quantitative gait information in terms of speed, symmetry, stability, sit-to-stand performance, and related variables that can be compared serially.
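Two of the gait endpoints above can be sketched from per-frame hip keypoints. The keypoints are assumed to come from a pose tracker such as MediaPipe Pose; here they are just (x, y) tuples per frame, and the metric definitions are illustrative assumptions, not the project's final specification:

```python
# Sketch of two serial gait endpoints from a sequence of hip-midpoint
# keypoints, one (x, y) tuple per video frame.
def gait_metrics(hip_xy_per_frame, fps=30.0):
    xs = [p[0] for p in hip_xy_per_frame]
    ys = [p[1] for p in hip_xy_per_frame]
    # Walking speed: net horizontal hip displacement per second.
    duration = (len(xs) - 1) / fps
    speed = abs(xs[-1] - xs[0]) / duration if duration > 0 else 0.0
    # Stability proxy: peak-to-peak vertical sway of the hips.
    sway = max(ys) - min(ys)
    return {"speed": speed, "vertical_sway": sway}
```

Symmetry and sit-to-stand timing would follow the same pattern, computed from left/right ankle and hip keypoints, and each visit's metrics can be stored for serial comparison.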
NOTE: For all tools the team will develop: 1. a recordkeeping and display system for serial trend analysis, and 2. a means of storing raw and processed data in, and recalling it from, an electronic health record.
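The serial trend requirement can be sketched as a per-patient log of visit endpoints with a simple trend readout. The storage layer and EHR integration are out of scope here; the class and method names are illustrative assumptions:

```python
# Minimal sketch of a per-patient endpoint log for serial trend analysis,
# using the dyspnea example from the text (15, 9, 3, 1, 0 mentions).
from collections import defaultdict

class TrendLog:
    def __init__(self):
        self._visits = defaultdict(list)  # patient_id -> endpoint values

    def record(self, patient_id: str, value: float) -> None:
        self._visits[patient_id].append(value)

    def trend(self, patient_id: str) -> str:
        v = self._visits[patient_id]
        if len(v) < 2:
            return "insufficient data"
        # For symptom-count endpoints, falling values indicate improvement.
        return "improving" if v[-1] < v[0] else "worsening or stable"

log = TrendLog()
for n in [15, 9, 3, 1, 0]:  # "short of breath" counts across five visits
    log.record("patient-1", n)
```

A real implementation would persist these records and push them to the EHR, but the trend computation itself stays this simple.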
Friday afternoon mentoring sessions (for all Kidney/ACABI teams) on a rotating pre-scheduled basis will be in place to provide adequate guidance.