How to Deploy Updates at Your OpenMRS 3 Sites

Around the world, many Implementer Organizations regularly deploy updates to tens, hundreds, and even thousands of sites across their region, country, or even across multiple continents. 

Common Scenarios


  • Content Updates like Forms, Concepts, and Metadata: E.g. The Ministry calls your team and says they are updating a standard form, and now you need to update that form across all your 100+ sites. Your team updates the form and saves the updated config in GitHub. But then, what do you do next to get that content out to your 100+ sites? Do you deploy to Docker? Make sure there's a service to pull down updates at sites?

  • Locally Hosted Sites: E.g. You have many locations across a country, and new departments need to be added to a locally-hosted site. How do you roll out configuration updates like this to locally-hosted deployments?

  • Low or No IT Support at Sites: E.g. Many sites using OpenMRS have little or no IT-type help on site. Some sites have no “computer experts” at all. Sending technical team members to travel to sites is usually expensive (cost for stipends, fuel, time away from other work), and in some cases impossible (e.g. disasters, conflict zones). Most Implementers cannot send staff to sites in person only to do upgrades and reboots. This means update workflows have to be very simple for non-technical staff to be able to deploy at their site with minimal assistance.

First Steps: How to Update Content Configurations in General

We do not require that people keep their configurations in GitHub; however, this is how we generally store content and config for the global demo EMR, so this is a common pattern. Here is our general guidance on how to handle content updates at the code level:

 

  1. Use Content Packages to keep content separate from code. Your forms, concepts, metadata and other configuration should be in a content package. Reason: This separation of content from the code will make it easier for you to (1) manage, version, and upgrade your content in an organized way, and (2) to compare your code with the global OpenMRS code base, so that you can more easily add up-to-date patches and improvements from the global community.

  2. Release the Content Update: You release an updated version of the content package.

  3. Update your Distribution: You update your distribution to point to the new content package version.

  4. Release Your Distribution: You release a new version of the distribution.

 

Note: If your update is technical (e.g. a module version upgrade), then Steps 1-4 still apply, except you would be making the updates in your code repository instead of the content repository. If you are not using the Content Packages structure (where code and content live in separate repositories), the steps are also the same, just applied to your single combined repository. 
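As a concrete sketch of Steps 2-4: assuming your distribution is assembled from a distro.properties file (the exact keys depend on your build tooling, and all names and version numbers below are hypothetical), a content update reduces to bumping a version number and cutting a release:

```properties
# distro.properties (hypothetical excerpt)

# Step 4: the new distribution release you will deploy to sites
name=mycountry-emr
version=2.4.0

# Step 3: point the distribution at the content package released in Step 2
# (the release that contains the Ministry's updated form)
content.mycountry-content=1.3.0

# A purely technical update (e.g. a module upgrade) is handled the same way
omod.myclinicmodule=5.1.0
```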

Getting Your Update Out to Sites: How to Deploy Distribution Upgrades


Now you need to deploy the new version of the distribution to your sites. There are many options, and different options work better for different teams. 

Option 1: Docker Image Workflow: This involves publishing everything as Docker images. 

About: 

  1. Rationale: You are packaging software that you can distribute to any number of clients, and updates become easier. 

  2. Connectivity: The easiest path relies on network connectivity. Rationale: the images are published to a registry (like Docker Hub or something else), and network connectivity is necessary to connect to the registry and download the images from it. For places without an internet connection, there are still options: move the Docker images from a central repository to the site on e.g. a USB stick, bring the server itself (laptop or black box) to a place where there is connectivity, or use a temporary hotspot with a data connection - as long as you have a reliable window of connection (for a <2 GB download, roughly 1-2 hours on 3G, or 30-60 minutes on 4G). Realistically, the RefApp backend image is over 1 GB, which is probably too large to reasonably transmit over WhatsApp or email.

    1. We don’t currently have the technology to support streaming updates. There are some technical reasons why changes can't realistically be streamed: e.g., Initializer only triggers on app start-up, so metadata updates, at least, usually require a restart. 

    2. Many types of changes - including content updates - require a server restart to apply cleanly and reliably. You will want to coordinate a restart with the facility. 

    3. For any upgrade path, you always want a roll-back pathway; and to know that you need to roll back, someone has to know that changes were applied. This is fastest and most easily caught when someone does the upgrade manually.

  3. Requirements: Sites need to have Docker installed.

  4. First: You’d turn all your releases into Docker Images.

    1. Set up a CI process where you can go in and cut a release, which will build all of those images for you automatically and publish them to the Docker registry. 

    2. E.g.: Currently for the DRC EMR we have a distribution project with a Docker Compose file and four containers: a separate container for the frontend, another for the backend, a containerized database server, and a gateway that acts as the initial entry point and makes everything look unified. This is the docker-compose.yml file: https://github.com/path-drc/path-drc-emr/blob/main/docker-compose.yml (Note that instead of floating tags, each Docker image should refer to a specific version number, so you have clear version control and tracking, e.g. “v1.2”.) Each site will have a copy of this Docker Compose file in their instance. 

  5. Then: Publish these to GitHub Packages

    1. The CI/CD process can automatically package your release (with its content updates) into Docker images and publish them to GitHub Packages, which serves as a "Docker registry" - a service that stores the Docker images and some metadata about them.

  6. Then at Sites: 1 command: Someone runs `docker compose up -d`. 

    1. Requirements:

      1. For the 1-command workflow: You will need a script or CLI set up to make this a 1-command job. 

        1. In situations where the instance running the project is not connected to the internet, we provide pre-packaged images which can be loaded on the instance. To obtain the images, check under the [Releases](https://github.com/path-drc/path-drc-emr/releases) section and download the `path-drc-emr-images-bundle.tgz` file.

      2. Without a CLI script: someone at the site with access to the server will need to update their docker compose file to point to the new version.

      3. Reference/example CLI set up for the DRC EMR here: _____. 

    2. Use the `docker compose pull` command to pull down the new version, and the `docker compose up` command to load any updated images. 

      1. This gives you a mechanism for sites to stay up to date, while still allowing the control to keep different sites on different versions if you want to roll out updates in a staged way (e.g. if you only want to update a few sites at a time). 

    3. For Locally Hosted Sites: You will need a process in place to download and load the images locally. Once the images have been loaded, the rest of the process is the standard `docker compose down`/`docker compose up`.

  7. Backups: Ensure, in general, that backups are in place for each site.

    1. Rationale: All the important files like the database are stored in Docker Volumes. So for each site you would need to make sure that there is a backup of the data stored in these volumes, because that’s where all the local differences are, i.e. the actual instance setup. 

    2. Our Recommendation: Run a cron job that does a MySQL dump, and another cron job that backs up the application directory. 

  8. Bundle vs. Building Docker Images: Bundling can make updates easier, cost less bandwidth (a bundled version is ~250 MB; pulling Docker images directly transfers only the layers that changed, but can be slower if you depend on an unstable or satellite connection), and be more scalable/portable with the Java RefApp. So if you later want to use things like Kubernetes and Helm charts, it will be much easier if you already have bundled Docker images.
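The backup recommendation above can be implemented as two cron entries. A minimal sketch in /etc/cron.d format; the schedule, database name, credentials file, and paths are placeholders to adapt per site:

```shell
# /etc/cron.d/openmrs-backup (illustrative; adjust paths and credentials per site)
# 02:00 nightly: MySQL dump, compressed (credentials read from a protected option file)
0 2 * * * root mysqldump --defaults-extra-file=/root/.my.cnf openmrs | gzip > /backups/openmrs-$(date +\%F).sql.gz
# 02:30 nightly: archive the application directory (the Docker volume contents)
30 2 * * * root tar czf /backups/openmrs-appdata-$(date +\%F).tgz /srv/openmrs/appdata
```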
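Pulling the site-side steps above together, the update can be wrapped in a small script so that non-technical staff run a single command. This is only a sketch, not the DRC EMR's actual CLI: the compose file contents, image names, and version tags are made up, and the Docker commands only run if Docker is installed.

```shell
#!/bin/sh
# update-emr.sh - one-command site update (sketch; image names and tags are hypothetical)
set -eu

# The target release; a real CLI would take this as a required argument.
NEW_VERSION="${1:-v1.3}"
COMPOSE_FILE="docker-compose.yml"

# For this sketch, create a sample compose file if none exists,
# so the script is self-contained.
if [ ! -f "$COMPOSE_FILE" ]; then
  cat > "$COMPOSE_FILE" <<'EOF'
services:
  frontend:
    image: example/emr-frontend:v1.2
  backend:
    image: example/emr-backend:v1.2
EOF
fi

# Pin every image to the requested release rather than a floating tag,
# so each site's version is explicit and auditable.
sed -E "s|(image: example/emr-[a-z]+):v[0-9.]+|\1:${NEW_VERSION}|" "$COMPOSE_FILE" \
  > "${COMPOSE_FILE}.tmp" && mv "${COMPOSE_FILE}.tmp" "$COMPOSE_FILE"
echo "Compose file pinned to ${NEW_VERSION}"

# Apply the update where Docker is available; otherwise stop after pinning.
if command -v docker >/dev/null 2>&1; then
  docker compose pull    # fetches only the layers that changed
  docker compose up -d   # restarts services on the new images
else
  echo "Docker not found; compose file updated only (dry run)"
fi
```

A staged rollout then just means running this script with different version arguments at different groups of sites.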

Option 2: WAR File Workflow: This is not actively supported, but some implementers still use this flow. What we’re doing with Docker is effectively putting the WAR in a specific place where it gets picked up. 

  • This is primarily how UgandaEMR and KenyaEMR have traditionally been deployed.
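For reference, the traditional WAR workflow amounts to dropping the new openmrs.war where the servlet container picks it up. The sketch below simulates this against a local `./tomcat-sim` directory standing in for Tomcat's webapps folder; real deployments would stop Tomcat, back up, copy, and start it again, and all paths here are illustrative.

```shell
#!/bin/sh
# Simulated WAR deployment; ./tomcat-sim stands in for a real Tomcat install
# (e.g. /opt/tomcat). All paths are illustrative.
set -eu

CATALINA_HOME="./tomcat-sim"
mkdir -p "$CATALINA_HOME/webapps"

# Pretend this file is the newly released openmrs.war
NEW_WAR="./openmrs-new.war"
echo "new release" > "$NEW_WAR"

# 1. Stop Tomcat (real: "$CATALINA_HOME/bin/shutdown.sh")
# 2. Keep a copy of the currently deployed WAR so you can roll back
if [ -f "$CATALINA_HOME/webapps/openmrs.war" ]; then
  cp "$CATALINA_HOME/webapps/openmrs.war" "$CATALINA_HOME/webapps/openmrs.war.bak"
fi

# 3. Drop in the new WAR; Tomcat explodes and deploys it on startup
cp "$NEW_WAR" "$CATALINA_HOME/webapps/openmrs.war"

# 4. Start Tomcat again (real: "$CATALINA_HOME/bin/startup.sh")
echo "Deployed $CATALINA_HOME/webapps/openmrs.war"
```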

Manual way (Not Recommended): Your users at sites can update content and configuration using the Admin User Interface (e.g. Location names). However, this approach has important caveats: 

  • Changes made this way persist across restarts; however, they are not written to the persistent metadata (the "content package"). The main reason for not recommending changes via the Admin UI is that it becomes harder to know what an OpenMRS instance looks like if metadata have been adjusted locally.

  • Anything configured in Initializer CSVs will be reset to the values in those CSVs whenever the server restarts and a change to the Initializer CSV files in that domain is detected. So if you have a particular location in an Initializer CSV, and you then change that location via the Admin UI, the location will probably remain as you set it the next time the server restarts. But the next time you make any change to the CSV that contains that location's configuration, and Initializer decides it needs to re-run it, it will overwrite any locally made changes (with some exceptions).
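To make the overwrite behavior concrete: suppose your content package ships a locations CSV like the hypothetical excerpt below (e.g. under configuration/locations/; exact headers vary by Initializer version). If a site renames "Outpatient Clinic" via the Admin UI, the rename survives restarts - but once this file changes and Initializer re-runs the locations domain, the values in the row win again.

```csv
Uuid,Void/Retire,Name,Description,Tags
a03e395c-0001-4f2a-8c11-111111111111,,Outpatient Clinic,Main OPD,Login Location
a03e395c-0002-4f2a-8c11-222222222222,,Pharmacy,Dispensing point,
```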

Real-World Examples

  • Uganda: How METS and Partners deploy UgandaEMR across >1,800 sites

    • …….

  • Kenya: How Palladium and Partners deploy KenyaEMR across >2,000 sites

    • …….

  • ICRC: How ICRC HQ deploys updates across >4,000 sites across >30 countries

    • …….

  • PIH: How PIH deploys updates across multiple countries