O3 Cluster and Cloud Deployments


Please consider this a work in progress. We will be developing recommendations, a proof of concept, and tooling for cluster and cloud deployments in phases throughout 2024.

The container orchestrator of choice is Kubernetes. Kubernetes is open source, is supported by all major cloud vendors (AWS, Azure, Google Cloud) as well as smaller vendors, and can also be deployed on premises.

We do not provide guidance on deploying and managing Kubernetes itself.

The proposed solution for Kubernetes includes Helm charts for production-grade deployments of OpenMRS 3. The charts are cloud-vendor agnostic, i.e. they include a minimal setup for deploying the OpenMRS application, database, and HTTP gateway. The database and gateway can optionally be replaced with vendor-hosted services.
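As a sketch of what such chart configuration could look like (all value names below are hypothetical, not the actual chart's schema), a values.yaml might expose toggles for the bundled database and gateway versus vendor-hosted replacements:

```yaml
# values.yaml (illustrative only; the real chart's values may differ)
openmrs:
  replicas: 1

database:
  # Deploy a MySQL instance inside the cluster...
  bundled: true
  # ...or point OpenMRS at a vendor-hosted database instead.
  external:
    enabled: false
    host: ""
    port: 3306

gateway:
  # Deploy an in-cluster HTTP gateway, or rely on a vendor load balancer.
  bundled: true
```

Keeping these as plain Helm values is what makes the charts vendor agnostic: the same chart installs on any conformant Kubernetes cluster, and vendor-specific services are swapped in purely through configuration.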

A single Kubernetes cluster backed by a pool of VMs can handle multiple OpenMRS deployments, supporting multi-tenancy with one OpenMRS application container per tenant and, optionally, a database engine shared across tenants, with each tenant using a separate schema.

A container cluster such as Kubernetes does the heavy lifting for high availability, better resource utilisation than dedicated machines or VMs, log aggregation and monitoring, upgrades, and cloud compatibility.




Hardware and Software Requirements

Proposed vendor checklist for running a single deployment of OpenMRS:

  1. Kubernetes version 1.29

  2. Min. 2 VMs with 1.5+ GHz CPU, 2+ GB of RAM, and 50+ GB of SSD each (to run the DB, DB replica, OpenMRS instance, and HTTP gateway); recommended: 3 VMs with 1.5+ GHz CPU, 4+ GB of RAM, and 50+ GB of SSD.

  3. Optionally (recommended): vendor-hosted MySQL 8.x with automated backups.

  4. Optionally (recommended): vendor-hosted gateway with load balancing, e.g. ELB (AWS) or Gateway Load Balancer / Front Door (Azure).
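The VM sizing in item 2 can be translated into Kubernetes resource requests and limits so the scheduler packs pods onto nodes of the recommended size. The numbers below are merely one plausible split of the checklist figures for the OpenMRS container, not official values:

```yaml
# Illustrative container resources derived from the checklist sizing.
resources:
  requests:
    cpu: "500m"      # a fraction of one 1.5+ GHz core
    memory: "1Gi"
  limits:
    cpu: "1500m"     # roughly one full core at burst
    memory: "2Gi"    # fits within a 2+ GB (min) / 4+ GB (recommended) node
```

Requests drive scheduling decisions, while limits cap a pod's usage; sizing them against the node specs above helps avoid overcommitting the minimal two-VM setup.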


Phase 1: Initial POC (by end of Jul 2024)

Initial features included:

  1. Helm charts for automated deployment.

  2. Terraform scripts to orchestrate deployment.

  3. Proof of concept for AWS.

  4. DB and DB read replica configured out of the box with automated backups.

  5. HTTP gateway for serving frontend and backend requests.

  6. Health checks and automated restarts in case of failures.
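Item 6 maps onto standard Kubernetes liveness and readiness probes. A sketch for the OpenMRS backend container might look like the following; the port and health endpoint paths are assumptions and must be verified against the actual image:

```yaml
# Illustrative probe configuration for the OpenMRS backend container.
livenessProbe:
  httpGet:
    path: /openmrs/health/alive     # hypothetical endpoint; verify against the image
    port: 8080
  initialDelaySeconds: 120          # OpenMRS startup can be slow
  periodSeconds: 30
  failureThreshold: 3               # restart the container after 3 failed checks
readinessProbe:
  httpGet:
    path: /openmrs/health/started   # hypothetical endpoint
    port: 8080
  periodSeconds: 10                 # gate traffic until the app is ready
```

With this in place, Kubernetes restarts a failed container automatically and keeps traffic away from instances that are still starting up, which is the behaviour item 6 calls for.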

Phase 2: Maintenance and upgrade tooling (by end of Sep 2024)

  1. Maintenance pages for upgrades served by the HTTP gateway.

  2. Pre-configured Grafana for aggregated logs, metrics and alarms.

Phase 3: Multi-tenant POC (by end of Dec 2024)

  1. Coordinated upgrades of all tenants with Terraform.

  2. Support for multi-tenant deployment with a shared database engine and separate schemas with one OpenMRS 3 instance per tenant.
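One way to realize item 2 (an assumption about how the charts might be parameterized, not a settled design) is one Helm release per tenant, each pointing its JDBC URL at a separate schema on the shared MySQL engine:

```yaml
# tenant-a.values.yaml (illustrative; value names are hypothetical)
openmrs:
  db:
    # Shared MySQL engine, separate schema and credentials per tenant.
    url: jdbc:mysql://shared-mysql:3306/openmrs_tenant_a
    username: tenant_a
# Installed as its own release, e.g.:
#   helm install tenant-a <chart> -f tenant-a.values.yaml
```

Schema-level isolation on a shared engine keeps per-tenant cost low while preserving a clean one-application-container-per-tenant boundary at the Kubernetes level.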

Future phases:

  1. Support for OpenSearch indexes for patient and concept search, for improved speed and HA.

  2. Support for distributed cache for Hibernate and Spring.

  3. Support for OpenMRS service replicas with load-balancer in front for high availability and performance.

    1. Partial support via HTTP sticky sessions (a client session is always routed to the same instance).

    2. Full support requires substantial analysis and adjustments in openmrs-core.
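For the partial sticky-session support in item 3.1, the ingress-nginx controller (if it ends up serving as the HTTP gateway) already supports cookie-based session affinity via annotations; the service name, path, and cookie settings below are illustrative:

```yaml
# Cookie-based session affinity with ingress-nginx: each client is pinned
# to one OpenMRS replica for the lifetime of the cookie.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openmrs
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "OPENMRS_ROUTE"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  rules:
    - http:
        paths:
          - path: /openmrs
            pathType: Prefix
            backend:
              service:
                name: openmrs    # hypothetical service name
                port:
                  number: 8080
```

This gives load-balanced replicas without changes to openmrs-core, at the cost that a replica failure still drops the sessions pinned to it; removing that limitation is what item 3.2 refers to.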