Kubernetes

This page is a work in progress. Throughout 2024, we’ll be developing recommendations, proofs of concept, and tooling for cluster and cloud deployments in phases. You can join the discussion and share your thoughts in the OpenMRS Talk thread.

Our container orchestration platform of choice is Kubernetes, an open-source platform supported by major cloud vendors such as AWS, Azure, and Google Cloud, as well as by smaller providers. Kubernetes can also be deployed on-premises.

We do not provide guidance on deploying and managing Kubernetes itself. Please refer to the official Kubernetes setup guides at https://kubernetes.io/docs/setup/ or contact your Kubernetes provider. For local development we recommend kind; see the kind Quick Start at https://kind.sigs.k8s.io/docs/user/quick-start/.
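For a quick local cluster, kind runs Kubernetes nodes as Docker containers. A minimal session, assuming Docker and the kind and kubectl CLIs are installed:

# Create a local development cluster
kind create cluster --name openmrs-dev

# Confirm the cluster is reachable
kubectl cluster-info --context kind-openmrs-dev

# Delete the cluster when done
kind delete cluster --name openmrs-dev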

The proposed solution for Kubernetes will include Helm charts for production-grade deployments of OpenMRS 3. These charts are cloud-vendor agnostic: they include the minimal setup for deploying the OpenMRS application, database, and HTTP gateway, and the database and gateway can optionally be replaced with vendor-hosted services.
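As a sketch of what swapping in a vendor-hosted database could look like, the bundled database is disabled and connection details point at the managed service. The value keys below are illustrative placeholders, not the chart’s actual interface; consult the chart’s values.yaml for the real names:

# Value keys are placeholders; check the chart's values.yaml for the real ones
helm install openmrs oci://registry-1.docker.io/openmrs/openmrs \
  --set db.enabled=false \
  --set db.host=my-managed-mysql.example.com \
  --set db.port=3306 \
  --set db.username=openmrs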

A single Kubernetes cluster can handle multiple OpenMRS deployments, with a pool of VMs to support multi-tenancy. This allows one OpenMRS application container per tenant and the possibility of sharing a database engine, with each tenant using a separate schema.
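One way this could be realized, sketched here with illustrative value keys, is one Helm release per tenant in its own namespace, each pointed at a separate schema on the shared database engine:

# One release per tenant, isolated in its own namespace (value keys are placeholders)
helm install openmrs-tenant-a oci://registry-1.docker.io/openmrs/openmrs \
  --namespace tenant-a --create-namespace \
  --set db.database=openmrs_tenant_a

helm install openmrs-tenant-b oci://registry-1.docker.io/openmrs/openmrs \
  --namespace tenant-b --create-namespace \
  --set db.database=openmrs_tenant_b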

Kubernetes provides significant advantages, including high availability, better resource utilization compared to dedicated machines or VMs, log aggregation and monitoring, seamless upgrades, and cloud compatibility.

How to try it out?

Please note that it is still a work in progress.

At this point we have a basic Helm chart available for deploying O3 3.1.0.

If you want to try it out, run:

helm install openmrs oci://registry-1.docker.io/openmrs/openmrs
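Once the release is installed, you can follow the rollout and reach the application locally. The gateway service name below is a placeholder; list the services in the release to find the actual name:

# Watch the pods start up
kubectl get pods --watch

# Find the gateway service, then forward a local port to it
kubectl get svc
kubectl port-forward svc/openmrs-gateway 8080:80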

Please refer to https://github.com/openmrs/openmrs-contrib-cluster for more details.

Continue reading to see the complete proposed solution and roadmap.

Architecture

  • Single-Tenant:

    (diagram: cluster_single_tenant_o3.drawio.png)

    In smaller deployments, the MariaDB Galera cluster can be replaced with a two-node MariaDB cluster that has one primary node and one read replica for high availability. The search index (Solr/OpenSearch) and distributed cache (Memcached) are options for later phases to offload the O3 backend and/or support multiple O3 backend replicas.

  • Multi-Tenant:

    (diagram: cluster_multi_tenant_o3.drawio.png)

    In a multi-tenant environment, each tenant is served by its own dedicated backend and frontend. Tenants connect to shared storage clusters, with dedicated DB schemas, search indexes, caches, etc. per tenant (a provisioning sketch follows this list). Depending on the scale, tenants can also be distributed across multiple storage clusters; this distribution can be introduced at any point in time.
    If synchronization (data sharing) between tenants is needed, supporting services such as a Master Patient Index (for sharing patient data) can be introduced.
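A minimal sketch of the per-tenant schema provisioning mentioned above, run against a shared MariaDB/MySQL engine (the host, user names, and password are placeholders):

# Replace the host and credentials with those of the shared database engine
mysql -h shared-db.example.com -u admin -p <<'SQL'
CREATE DATABASE openmrs_tenant_a DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'tenant_a'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON openmrs_tenant_a.* TO 'tenant_a'@'%';
SQL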

Prerequisites

Proposed vendor checklist for running a single deployment of OpenMRS:

  • Kubernetes Version: 1.29

  • Minimum: 2 VMs, each with a 1.5+ GHz CPU, 2+ GB of RAM, and 50+ GB of SSD (hosting the DB, DB replica, OpenMRS instance, and HTTP gateway)

  • Recommended: 3 VMs, each with a 1.5+ GHz CPU, 4+ GB of RAM, and 50+ GB of SSD

  • Optional (Recommended):

    • Vendor-hosted MySQL 8.x with automated backups.

    • Vendor-hosted gateway with load balancing, e.g., ELB (AWS), Gateway Load Balancer / Front Door (Azure).
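In Kubernetes terms, this sizing would translate into resource requests on the pods. A hedged sketch, assuming the chart exposes standard resource blocks (the key names are placeholders):

# Illustrative resource requests for the recommended sizing; key names are placeholders
helm install openmrs oci://registry-1.docker.io/openmrs/openmrs \
  --set openmrs.resources.requests.cpu=1 \
  --set openmrs.resources.requests.memory=2Gi \
  --set db.resources.requests.memory=2Gi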

Roadmap

Our approach is inspired by the Bahmni sub-community’s documentation, which already covers roughly 80% of our planned work; the remaining 20% focuses on configuration changes.

Phase 1: Initial POC (by the end of Sept 2024)

  • Helm charts for automated deployment.

  • Terraform scripts to orchestrate deployment.

  • Proof of concept for AWS.

  • Database and DB read replica configured with automated backups.

  • HTTP gateway for serving frontend and backend requests.

  • Health checks and automated restarts in case of failures.
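On the last item: Kubernetes restarts a container automatically when its liveness probe fails. A sketch of what such a probe might look like, assuming the chart exposes a values key for it and using a placeholder health endpoint:

# Illustrative values fragment; the key names and probe path are placeholders
cat > probe-values.yaml <<'EOF'
openmrs:
  livenessProbe:
    httpGet:
      path: /openmrs/index.htm
      port: 8080
    initialDelaySeconds: 180
    periodSeconds: 30
EOF
helm upgrade openmrs oci://registry-1.docker.io/openmrs/openmrs -f probe-values.yaml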

Phase 2: Maintenance and Upgrade Tooling (after Sept 2024)

  • Maintenance pages served by the HTTP gateway during upgrades.

  • Pre-configured Grafana for aggregated logs, metrics, and alarms.

Phase 3: Multi-Tenant POC (by end of Dec 2024)

  • Coordinated upgrades for all tenants using Terraform.

  • Support for multi-tenant deployment with a shared database engine and separate schemas for each tenant, with one OpenMRS 3 instance per tenant.

Future Phases

  • Support for OpenSearch indexes for patient and concept search, improving speed and high availability.

  • Support for distributed cache for Hibernate and Spring.

  • Support for OpenMRS service replicas with a load balancer for high availability and performance.

  • Partial support for HTTP sticky sessions, ensuring a client session always connects to the same instance (a gateway-level sketch follows this list). Full implementation requires substantial analysis and adjustments in openmrs-core.
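As a gateway-level sketch: if the HTTP gateway is ingress-nginx, cookie-based session affinity can be enabled with annotations (the ingress resource name is a placeholder):

# Enable cookie-based session affinity on the ingress (ingress-nginx annotations)
kubectl annotate ingress openmrs \
  nginx.ingress.kubernetes.io/affinity=cookie \
  nginx.ingress.kubernetes.io/session-cookie-name=OPENMRS_AFFINITY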
