Favorites: Favoriting Drugs, Diagnoses, and more

Status: Requirements In Progress

Technical Complexity: Hard / Complex

The Problem: Too many choices!

Examples:

  • Diagnosis:

    • Diarrhea has ~60 possible options in KenyaEMR/KenyaNHDD, making it painful to pick a simple Diarrhea concept for a straightforward walk-in OPD case!

    • A clinician with a Chest Pain NYD case got tired of searching for the right Dx concept (the choice list looked like all of ICD-10) and ended up picking a random one (Cardiomegaly)!

    • Users in Sri Lankan EMRs type just “c” instead of “cough” to speed up data entry

  • Drugs: Have to search the Drug Order field every single time just to find a common drug you prescribe to most patients (e.g. in some OPDs, most patients are prescribed Acetaminophen)

 

In Dec 2024 at the OCL Workshop before the OpenHIE Conference in Sri Lanka, members of OCL, OpenMRS, and the Sri Lankan Ministry of Health met to discuss and brainstorm how to solve the “Too Many Choices” problem. The result was:

[Image: brainstorm results (image-20250226-224421.png)]
  • What about a frequency-based approach too?
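A frequency-based approach could be sketched very simply: log each user's concept selections and surface their most-used concepts as favorites. The sketch below assumes a hypothetical event log of `(user_id, concept_id)` pairs; the actual OpenMRS data model and API are not specified here.

```python
from collections import Counter

def top_favorites(event_log, n=5):
    """Rank concepts by how often each user has picked them.

    event_log: iterable of (user_id, concept_id) selection events
               (hypothetical shape, for illustration only)
    Returns {user_id: [concept_id, ...]} with each user's n most-used concepts.
    """
    counts = {}
    for user_id, concept_id in event_log:
        counts.setdefault(user_id, Counter())[concept_id] += 1
    return {user: [c for c, _ in ctr.most_common(n)]
            for user, ctr in counts.items()}

# Example: an OPD clinician who mostly prescribes Acetaminophen
log = ([("dr_a", "acetaminophen")] * 8
       + [("dr_a", "ibuprofen")] * 2
       + [("dr_a", "amoxicillin")])
print(top_favorites(log, n=2))  # {'dr_a': ['acetaminophen', 'ibuprofen']}
```

In practice the counts would likely be scoped per clinic or per form, and decayed over time so stale favorites drop off.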

 

Alternative or Additional Approach: Math-based

  • Using graph theory, analyze the structure of a giant code base (e.g. ICD-11’s 36,000+ codes)

  • Because of how concepts are mapped or linked together, you end up with multiple disconnected or lightly connected graphs. You then need to identify very closely aligned nodes that sit in separate graphs and cut that overlap out. This would further reduce functionally duplicative options for clinicians.
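The graph step above can be sketched in two parts: split the concept-mapping graph into connected components, then flag closely named nodes that live in separate components as likely duplicates. This is a minimal stdlib-only illustration; it uses string similarity as a crude stand-in for whatever alignment metric is ultimately chosen, and the edge/name data shapes are assumptions.

```python
from collections import defaultdict, deque
from difflib import SequenceMatcher

def connected_components(edges, nodes):
    """BFS over an undirected mapping graph; returns a list of node sets."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in nodes:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        seen.add(node)
        while queue:
            cur = queue.popleft()
            comp.add(cur)
            for nxt in adj[cur] - seen:
                seen.add(nxt)
                queue.append(nxt)
        comps.append(comp)
    return comps

def cross_component_duplicates(comps, names, threshold=0.85):
    """Flag closely aligned concepts that sit in *separate* graphs."""
    dupes = []
    for i in range(len(comps)):
        for j in range(i + 1, len(comps)):
            for a in comps[i]:
                for b in comps[j]:
                    ratio = SequenceMatcher(None, names[a], names[b]).ratio()
                    if ratio >= threshold:
                        dupes.append((a, b))
    return dupes
```

For ICD-11-scale graphs the all-pairs comparison would need blocking or an approximate-nearest-neighbor index, but the shape of the computation is the same.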

  • Another way: use an LLM to identify what’s most likely to be used. Take a medical embedding model, vectorize all of the concepts, and run similarity metrics on the embeddings. You might be able to fine-tune a BERT-like model on the concepts, but there may not be enough data (full fine-tuning can need millions of examples), in which case you could apply LoRA to BERT (a low-rank matrix modification trick for fine-tuning transformer models with a small amount of data).
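The embedding-similarity idea can be illustrated without any model at all: represent each concept name as a vector, then compare vectors with cosine similarity. The character-trigram "embedding" below is a toy stand-in for a real medical embedding model (e.g. a LoRA-fine-tuned BERT); only the similarity machinery carries over.

```python
from collections import Counter
from math import sqrt

def trigram_vector(text):
    """Toy embedding: character-trigram counts. A stand-in for a real
    medical embedding model, used only to demonstrate the similarity step."""
    t = f"  {text.lower()}  "  # pad so word boundaries contribute trigrams
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (sqrt(sum(x * x for x in u.values()))
            * sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Near-duplicate spellings score far higher than unrelated concepts:
print(cosine(trigram_vector("Diarrhea"), trigram_vector("Diarrhoea")))
print(cosine(trigram_vector("Diarrhea"), trigram_vector("Cardiomegaly")))
```

With real embeddings the same cosine comparison would cluster functionally duplicative concepts so the choice list can be collapsed.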