Kubernetes
As of 2025, our container orchestration platform of choice is Kubernetes, an open-source container platform supported by major cloud vendors such as AWS, Azure and Google Cloud, as well as smaller providers. Kubernetes can also be deployed on-premise.
As part of Kubernetes support we provide Helm charts for production-grade deployments of OpenMRS. The charts are cloud-vendor agnostic: they include the setup for deploying the OpenMRS application and its backend storage (database, search index, file storage). The backend storage can optionally be replaced with vendor-hosted services.
It is probably the most robust, scalable and failure-resistant deployment of OpenMRS ever seen!
A single Kubernetes cluster can handle multiple OpenMRS deployments to support multi-tenancy. This allows one OpenMRS application container per tenant, with the option of sharing the database engine, search index or file storage service between all tenants using separate schemas.
Kubernetes provides significant advantages over bare metal machines or VMs, including high availability, replication, better resource utilisation, log aggregation and monitoring, automated deployments, seamless upgrades, and cloud compatibility.
- 1 Video: How to Deploy OpenMRS on Kubernetes
- 2 Kubernetes in OpenStack
- 3 How to try it out?
- 3.1 Local setup
- 4 Architecture
- 5 Requirements
- 6 Features
- 7 Future work
- 8 Contributing
- 9 Support
- 10 Deployment Guide
- 10.1 On-premise
- 10.2 Public cloud
- 10.2.1 Amazon AWS
- 10.3 OpenMRS Helm Deployment
- 10.3.1 Features
- 10.3.1.1 Database
- 10.3.1.1.1 MariaDB with replication
- 10.3.1.1.2 MariaDB Galera Cluster
- 10.3.1.1.3 Vendor provided or externally installed database
- 10.3.1.2 Search Index
- 10.3.1.2.1 Embedded Lucene
- 10.3.1.2.2 ElasticSearch Cluster
- 10.3.1.2.3 Vendor provided or externally hosted ElasticSearch/OpenSearch
- 10.3.1.3 Storage Service
- 10.3.1.4 Infinispan Clustering
- 10.3.1.5 Frontend Replication
- 10.3.1.6 Backend Replication
- 11 Testimonials
Video: How to Deploy OpenMRS on Kubernetes
Kubernetes in OpenStack
We run O3 in Kubernetes in OpenStack. It’s available at https://o3-k8s.openmrs.org/ .
It’s deployed with Terraform. You can see how simple the Terraform setup is at https://github.com/openmrs/openmrs-contrib-itsm-terraform/blob/master/kubernetes/main.tf .
How to try it out?
Given you have kubectl connected to a Kubernetes cluster and helm installed, run:
helm upgrade --install --create-namespace -n openmrs \
--set global.defaultStorageClass=standard --set global.defaultIngressClass=nginx openmrs \
oci://ghcr.io/openmrs/openmrs
where global.defaultStorageClass is set to a persistent volume storage class defined in your cluster and global.defaultIngressClass is set to your ingress class name.
It will provision the OpenMRS cluster with the default settings (testing only). Please see the Deployment Guide below for all options and production deployments.
It may take 10-20 minutes for all services to come up depending on available hardware resources.
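You can follow the rollout with standard kubectl commands, for example:
# Watch pods in the openmrs namespace until they are all Running/Ready
kubectl -n openmrs get pods -w
# Check the ingress created for the deployment
kubectl -n openmrs get ingress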
Local setup
If you don’t have a Kubernetes cluster or you want to try it out locally, you can set up a Kind cluster on your machine. Make sure that you have git, kubectl, helm and kind installed.
To install on Mac OS:
brew install git kubectl helm kind
Other install options:
Git: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
Kind: https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-release-binaries
# Checkout git repo
git clone https://github.com/openmrs/openmrs-contrib-cluster
cd openmrs-contrib-cluster/helm
# Create kind cluster with 3 nodes
kind create cluster --config=kind-config.yaml
# Verify kubectl can reach your local kind cluster (kind sets the context automatically)
kubectl cluster-info --context kind-kind
# Create local path provisioner and ingress
kubectl apply -f kind-init.yaml
# Setup Kubernetes Dashboard
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard --set extraArgs="--token-ttl=0"
# Create token for login
kubectl -n kubernetes-dashboard create token admin-user
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# Go to https://localhost:8443/ and log in with the generated token
Once you have Kubernetes up and running, you can build and install the OpenMRS helm chart from sources with:
cd openmrs
./helm_build.sh
helm upgrade --install --create-namespace -n openmrs --values ../kind-openmrs.yaml openmrs .
Continue reading to learn more about the architecture and the different features.
Architecture
Grafana logging is not yet provided in version 1.0.0. It will be added in the next version.
Single-Tenant:
In smaller deployments the MariaDB Galera Cluster can be replaced with MariaDB with replication that has one primary node for read and write access and one or more read replicas for high-availability (HA) and improved read performance.
The Search Index (ElasticSearch), File Storage Service Cluster (MinIO) and Clustered Cache (Infinispan) can be optionally disabled for smaller deployments to reduce cluster size and hardware requirements.
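For illustration, a values override for such a smaller single-tenant deployment might look like the following (a sketch only, using the configuration keys described in the Deployment Guide below; adjust to your chart version):
openmrs-backend:
  replicaCount: 1        # single backend instance
  infinispan:
    clustered: false     # no clustered cache needed with one replica
  elasticsearch:
    enabled: false       # fall back to the embedded Lucene index
  minio:
    enabled: false       # fall back to local volume storage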
The number of replicas for all services can be decreased or increased based on the expected load and HA requirements.
Multi-Tenant:
In a multi-tenant environment each tenant gets its own dedicated OpenMRS backend and frontend service, always serving that one tenant. Tenants may connect to the same storage clusters, with dedicated DB schemas, search indexes, caches, etc. for each tenant. By default, data is not shared between tenants in such deployments. Depending on the scale, tenants can be distributed across multiple storage clusters, which can be added at any point in time.
If data needs to be synchronised (shared) between tenants, additional services need to be introduced. These synchronisation mechanisms are not part of the helm chart.
Requirements
Proposed vendor checklist for running a single deployment of OpenMRS:
Kubernetes Version: 1.29+
Minimum: 2 VMs with 1.5+ GHz CPU, 2+ GB of RAM, 50+ GB of SSD (DB, DB replica, OpenMRS backend and frontend)
Recommended: 4 VMs with 1.5+ GHz CPU, 4+ GB of RAM, 50+ GB of SSD (as in single-tenant architecture diagram)
Optional vendor provided services:
MySQL 8.x with automated backups.
ElasticSearch or OpenSearch
S3 compatible file storage service
Ingress with load balancing, e.g., ELB (AWS), Gateway Load Balancer / Front Door (Azure).
Features
Helm charts for automated deployment.
Terraform scripts to orchestrate deployment.
Proof of concept for an AWS EKS (Kubernetes) cluster.
MariaDB with replication: one primary node and multiple read replicas.
Load-balancing between read replicas.
Automated failover and recovery handling for read replicas.
MariaDB Galera Cluster with multiple read-write nodes.
Load-balancing between all nodes.
Automated failover and recovery handling for all nodes.
ElasticSearch Cluster.
Load-balancing between all nodes.
Automated failover and recovery handling for all nodes.
Infinispan Clustered Cache with replication and invalidation between nodes.
Health checks and automated restarts in case of failures.
HTTP sticky session load-balancing and rolling upgrades for OpenMRS backend.
Load-balancing and rolling upgrades for OpenMRS frontend.
Please note that deploying multiple replicas of the OpenMRS Backend is still experimental. It works for OpenMRS Platform 2.8.1, but not all modules have been updated and tested to work in such a scenario. Right now we are focusing on updating the modules included in the OpenMRS 3 distribution to use the Storage Service and be replication-ready. Please use it with caution.
Future work
Pre-configured Grafana for aggregated logs, metrics, and alarms.
Automated backups for MariaDB Cluster and MariaDB Galera Cluster.
Maintenance pages for planned upgrades.
System status page for users.
Multi-tenant support for shared storage clusters (separate data).
Support for distributed HTTP sessions.
Auto-scaling for services.
EIP tools for integrating services and sharing data between tenants.
ETL service for data analytics.
Contributing
We welcome contributions! Please reach out to @Rafal Korytkowski via http://talk.openmrs.org (tag @raff) or see https://openmrs.atlassian.net/browse/TRUNK-6299 for ongoing work.
Explore https://github.com/openmrs/openmrs-contrib-cluster/ where all the code is stored and where you can contribute pull requests.
Support
We are looking forward to working closely with any implementer that wants to use the OpenMRS helm chart in production. Please reach out to @Rafal Korytkowski via http://talk.openmrs.org (tag @raff).
Deployment Guide
We do not provide support for the deployment and management of Kubernetes itself. Please refer to the Kubernetes guides on how to set it up and manage it, or contact your Kubernetes provider. We do provide some recommendations, which you can find below.
On-premise
There are many ways to set up Kubernetes on-premise these days. We recommend setting up and managing Kubernetes with Rancher https://www.rancher.com/quick-start. See also the Kubernetes documentation https://kubernetes.io/docs/setup/. Please see the following video as an introduction:
You will also need to run a load-balancer in front of your Kubernetes cluster if you want to achieve true high availability. We recommend MetalLB https://metallb.io/.
We recommend the Nginx Ingress Controller https://kubernetes.github.io/ingress-nginx/deploy/ for ingress and the Local Path Provisioner https://github.com/rancher/local-path-provisioner?tab=readme-ov-file#deployment for local storage.
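As a rough sketch (assuming helm and kubectl already point at your cluster; check both projects' docs for current versions and options), the two components can be installed with:
# Install the Nginx Ingress Controller from its official helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
# Install the Local Path Provisioner (pin a released tag rather than master for production)
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml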
You may also want to see https://github.com/openmrs/openmrs-contrib-cluster/tree/main?tab=readme-ov-file#helm, where we show how to create and configure a local cluster for development.
Once you have a Kubernetes cluster with an IngressClass and a StorageClass configured, you can continue to the Helm deployment.
Public cloud
Amazon AWS
As a proof of concept we provide terraform scripts for AWS EKS deployment.
Before you proceed, please apply the roles and policies from https://github.com/openmrs/openmrs-contrib-cluster/tree/main/terraform/aws in the AWS IAM console to be able to run the deployment.
Given you have Git, Terraform, the AWS CLI and kubectl installed and configured, start by cloning https://github.com/openmrs/openmrs-contrib-cluster/tree/main/ and executing:
cd terraform-backend
terraform init
terraform apply
cd ../terraform
terraform init
terraform apply -var-file=nonprod.tfvars
cd ../terraform-helm
terraform init
terraform apply -var-file=nonprod.tfvars
The terraform-backend directory is responsible for creating an S3 bucket and a DynamoDB table to store the Terraform state. It only needs to be run once.
The terraform directory contains the actual setup. It includes VPC, EKS, RDS and SES. Please see https://github.com/openmrs/openmrs-contrib-cluster/blob/main/terraform/nonprod.tfvars for possible configuration options. RDS and SES can be disabled or enabled based on your needs.
In the config file you can specify your EKS cluster environment, node type, desired number of nodes, RDS class, etc.
To connect kubectl to your cluster you can run:
aws eks update-kubeconfig --name openmrs-cluster-nonprod
The cluster name is openmrs-cluster-$environment.
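You can then verify, for example, that the worker nodes are visible and that a storage class is available:
# List EKS worker nodes registered with the cluster
kubectl get nodes
# List available storage classes (needed for global.defaultStorageClass)
kubectl get storageclass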
The final terraform-helm directory is responsible for running the helm deployment of OpenMRS. Please see the OpenMRS Helm Deployment section below for more details.
OpenMRS Helm Deployment
Our openmrs helm chart consists of a few helm charts:
openmrs-backend, see here for versions
openmrs-frontend
It is highly configurable and different components can be enabled or disabled based on needs.
Given the number of configuration options it is recommended to create a configuration file for your deployment. You may use the following to start with:
global:
  defaultStorageClass: standard
  defaultIngressClass: nginx
openmrs-backend:
  image:
    repository: openmrs/openmrs-reference-application-3-backend
    tag: nightly-core-2.8
  replicaCount: 2 # Enabled experimentally
  infinispan:
    clustered: true
  galera:
    enabled: true
    replicaCount: 3
    rootUser:
      password: Root123
    db:
      name: openmrs
      user: openmrs
      password: OpenMRS123
    galera:
      mariabackup:
        password: Backup123
  mariadb:
    enabled: false
  elasticsearch:
    enabled: true
    master:
      replicaCount: 3
  minio:
    enabled: true
    auth:
      rootPassword: Root1234
    provisioning:
      users:
        - username: openmrs
          password: OpenMRS123
          policies: ["readwrite"]
openmrs-frontend:
  image:
    repository: openmrs/openmrs-reference-application-3-frontend
    tag: nightly-core-2.8
All credentials are stored in Kubernetes Secret resources. It is very important to set them to some safe values. They cannot be changed with helm when upgrading, but must be provided.
The defaultStorageClass and defaultIngressClass need to be set according to your cluster configuration. If you deploy in a public cloud, please refer to your cloud provider documentation.
You can deploy OpenMRS with the following command:
helm upgrade --install --create-namespace -n openmrs --values ./deployment-config.yaml openmrs oci://registry-1.docker.io/openmrs/openmrs
For production it is recommended to always specify the helm chart version, e.g. --version 1.0.0. Please note it is different from appVersion, which is used as an image tag. Each helm chart version comes with a pre-configured appVersion (usually the latest compatible OpenMRS image version), which you can override to deploy a different image tag. If you override image or tag, you need to make sure that it supports the OpenMRS helm chart features enabled in your environment.
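For example, a production upgrade pinned to a specific chart version (the version number is illustrative) could look like:
helm upgrade --install --create-namespace -n openmrs \
  --values ./deployment-config.yaml \
  --version 1.0.0 \
  openmrs oci://registry-1.docker.io/openmrs/openmrs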
Features
Database
When deploying the OpenMRS helm chart you can decide whether you want to use a helm-provided MariaDB with replication, a MariaDB Galera Cluster, or a vendor-provided or externally installed MariaDB/MySQL database. Please see also https://openmrs.atlassian.net/wiki/spaces/docs/pages/577110042 .
MariaDB with replication
Supported fully since OpenMRS Platform 2.8.1. Only one node may be used in older versions.
By default MariaDB with replication is enabled. It can be disabled with --set openmrs-backend.mariadb.enabled=false.
It is recommended for smaller deployments. By default it runs with a single read-write primary instance and one read-only replica.
You can add additional read replicas with --set openmrs-backend.mariadb.secondary.replicaCount=2. They will be used by openmrs-backend to load-balance read-only transactions among all read replicas. If a replica fails, it is automatically blacklisted for 60s and the failed transaction is replayed on any other read replica, or on the primary instance if all read replicas are blacklisted. After 60s the read replica is included in load-balancing again if it has recovered, or it stays blacklisted for another minute.
If you increase the number of read replicas, the openmrs-backend needs to be re-deployed in order for the new read replicas to be included in load-balancing.
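As a values-file sketch (equivalent to the --set flags above), a deployment with two read replicas might be configured as:
openmrs-backend:
  mariadb:
    enabled: true
    secondary:
      replicaCount: 2   # number of read-only replicas behind the primary
  galera:
    enabled: false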
For all available settings in MariaDB helm chart, please see https://artifacthub.io/packages/helm/bitnami/mariadb .
MariaDB Galera Cluster
Supported fully since OpenMRS Platform 2.8.1. Only one node may be used in older versions.
The MariaDB Galera Cluster with multiple read-write nodes can be enabled with --set openmrs-backend.galera.enabled=true.
It is recommended for larger deployments as it provides the most robust replication, high availability and scalability. It does not rely on a single read-write primary as in MariaDB with replication, but rather provides multiple read-write master nodes. Moreover, all transactions are load-balanced between all nodes.
If a node fails, it is automatically blacklisted for 60s and the failed transaction is replayed on any other node. After 60s the node is included in load-balancing again if it has recovered, or it stays blacklisted for another minute.
You can modify the number of nodes with --set openmrs-backend.galera.replicaCount=4. The minimal number of replicas to form a cluster is 3, which is the default.
If you increase the number of nodes, the openmrs-backend needs to be re-deployed in order for the new nodes to be included in load-balancing and failover.
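Expressed as a values file (same keys as the --set flags above), enabling a four-node Galera cluster might look like:
openmrs-backend:
  mariadb:
    enabled: false      # disable MariaDB with replication
  galera:
    enabled: true
    replicaCount: 4     # a minimum of 3 nodes is required to form a cluster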
For all available settings in MariaDB Galera helm chart, please see https://artifacthub.io/packages/helm/bitnami/mariadb-galera .
Vendor provided or externally installed database
If both MariaDB with replication and MariaDB Galera Cluster are disabled, you need to provide your own DB connection details with:
openmrs-backend:
  mariadb:
    enabled: false
  galera:
    enabled: false
  db:
    hostname: yourdb.host
    port: 3306
    username: yourdb_username
    password: yourdb_password
Instead of the hostname and port, you may specify the full connection URL with openmrs-backend.db.url.
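For example (a hypothetical JDBC URL, shown only to illustrate the openmrs-backend.db.url key; use the connection URL required by your database):
openmrs-backend:
  db:
    url: jdbc:mysql://yourdb.host:3306/openmrs
    username: yourdb_username
    password: yourdb_password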
Search Index
Supported only since OpenMRS Platform 2.8.0. Only Embedded Lucene may be used in older versions.
You can decide whether you run the OpenMRS Backend with an embedded Lucene search index, a helm-provided ElasticSearch, or a vendor-provided or externally hosted ElasticSearch/OpenSearch. Please see also https://openmrs.atlassian.net/wiki/spaces/docs/pages/450625608.
Embedded Lucene
It is enabled by default and recommended for smaller deployments. It must not be used if you intend to run multiple replicas of the OpenMRS Backend, as the index is not replicated among the replicas.
It doesn’t provide any replication or high availability as it runs within a single OpenMRS Backend JVM. It is recommended to switch to ElasticSearch/OpenSearch if your OpenMRS Backend instance is overloaded (responding slowly) or using too much memory.
ElasticSearch Cluster
The ElasticSearch Cluster is recommended for larger deployments. It provides replication, high availability and scalability. It also takes the load caused by full-text searches off the OpenMRS Backend instance and allows the OpenMRS Backend to be scaled to multiple instances.
It can be enabled with:
openmrs-backend:
  elasticsearch:
    enabled: true
    master:
      replicaCount: 3
The OpenMRS Backend is automatically configured to use ElasticSearch and load-balance between all nodes. It also handles node discovery and failover. If you increase the number of replicas, it can automatically adjust without re-deploying the service.
For all possible configuration options please see https://artifacthub.io/packages/helm/bitnami/elasticsearch
Vendor provided or externally hosted ElasticSearch/OpenSearch
If you would like to use an externally hosted ElasticSearch/OpenSearch, please see the Hibernate Search docs for version compatibility. OpenMRS Platform 2.8.0 uses Hibernate Search 6.2. Please check which version of Hibernate Search your platform version uses and consult the corresponding documentation.
Please set the following config options:
openmrs-backend:
  elasticsearch:
    uris: 'http://elasticsearch1:9200,http://elasticsearch2:9200'
    username: openmrs
    password: OpenMRS123
Storage Service
Supported fully since OpenMRS Platform 2.8.0. Only local volume storage may be used in older versions.
You can decide whether you run the OpenMRS Backend with local volume storage, a helm-provided MinIO storage or a vendor-provided S3-compatible storage. Please see also https://openmrs.atlassian.net/wiki/spaces/docs/pages/577503378 .
The OpenMRS Backend is configured to use local volume storage by default. It is recommended for smaller deployments. It must not be used if you intend to run multiple replicas of the OpenMRS Backend. You need to take care of volume backups on your own, as we don’t provide any tooling around that. You may use replicated volumes in Kubernetes, e.g. with https://longhorn.io/, to gain some additional failure resistance and backup.
In large on-premise deployments we recommend using MinIO. To use a helm-provided MinIO storage, add the following to your config:
openmrs-backend:
  minio:
    enabled: true
The OpenMRS Backend will be automatically configured to use the MinIO service. At the moment there’s no load-balancing between MinIO nodes and no failover handling. We will fix that in future helm chart releases.
Infinispan Clustering
Supported fully since OpenMRS Platform 2.8.1. You must not run multiple replicas of OpenMRS Backend in older versions.
If you intend to run multiple replicas of the OpenMRS Backend, it is mandatory to enable Infinispan Clustering with --set openmrs-backend.infinispan.clustered=true. This is required for the cache to be replicated and invalidated among all replicas. Please see also https://openmrs.atlassian.net/wiki/spaces/docs/pages/577503357.
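In values-file form (same keys as above), a two-replica backend with a clustered cache might be declared as:
openmrs-backend:
  replicaCount: 2       # experimental, see Backend Replication below
  infinispan:
    clustered: true     # replicate and invalidate the cache across replicas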
Frontend Replication
The frontend service is optional. If your distribution doesn’t ship the UI as a separate service, you can disable it with --set openmrs-frontend.enabled=false.
The frontend service is an nginx container serving static files. Replication is enabled by default and set to 2 replicas to support rolling upgrades without downtime. There’s rarely a need for more replicas, because it’s a very performant service; replication is there only for HA.
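For example, the frontend service can be disabled in a values file (equivalent to the --set flag above):
openmrs-frontend:
  enabled: false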
Backend Replication
Please note this feature is still experimental and supported only by OpenMRS Platform 2.8.1. All modules used in such a deployment need to be replication ready, which is explained below.
In order to use 2 or more OpenMRS Backend replicas with HTTP sticky-session load-balancing and rolling upgrades without downtime, you need to make sure that (see the combined example after this list):
You enable ElasticSearch, MinIO (or other distributed storage service, e.g. S3) and Infinispan clustering.
Deployed modules do not store files in the file system, rather use Storage Service. Please see https://openmrs.atlassian.net/wiki/spaces/docs/pages/577503378.
Deployed modules cache data only by using Spring or Hibernate Cache. Please see https://openmrs.atlassian.net/wiki/spaces/Archives/pages/25504850.
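Putting these prerequisites together, a replication-ready values sketch (using only keys shown earlier in this guide) could look like:
openmrs-backend:
  replicaCount: 2        # experimental: 2+ backend replicas
  infinispan:
    clustered: true      # clustered cache (replication and invalidation)
  elasticsearch:
    enabled: true        # shared search index instead of embedded Lucene
    master:
      replicaCount: 3
  minio:
    enabled: true        # shared S3-compatible storage instead of local volumes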
Testimonials
Our terraform scripts are inspired by the Bahmni sub-community’s work and documentation.