Cloud Hosting and Cluster Deployment

This section contains deployment guides for various cloud providers and cluster options.

Can OpenMRS be hosted on a cloud?

  • Yes! You can host OpenMRS with any of the usual hosting providers, such as AWS, Azure, or GCP, in your own regional or government cloud, or with any other provider that can host Kubernetes.

  • Default Cloud config: We have set up some default configurations you can use, showing how to deploy OpenMRS and run it in a cloud instance in a way that is more scalable than a typical single-server deployment (such as an EC2 instance).

  • Cloud Deployment Guides: For further guidance on how to set up OpenMRS in these environments, see the guides below! (Note: The Kubernetes guide is the recommended and supported cloud/cluster deployment for the O3 distribution.)

How is Multi-Tenancy supported?

  • We are working on the building blocks that will make it easier for implementers to run multiple instances of OpenMRS with fully automated setup, upgrades and monitoring across instances.

  • Clustering Support

    • Repo for Clustering support: GitHub - openmrs/openmrs-contrib-cluster contains Terraform and Helm charts to deploy an OpenMRS distro in a cluster. See the README for additional details.

    • Caching Library support for Clustering: In May 2025, backend support was added in Spring and Hibernate for the Infinispan caching library, replacing the previous caching library, Ehcache (which did not support clustering).

      • Will be available with openmrs-core 2.8.x (included in Platform 2.8)

      • Configurable for local embedded cache or distributed/replicated embedded cache

      • Caches can be created by modules using simple YAML files

      • Used by the Hibernate second-level/query cache and for API methods annotated with @Cacheable (see the caching sketch after this list).

    • Full-text search with an Elasticsearch cluster, which can be configured to replace the in-memory Lucene index for high availability and scalability:

      • Available since openmrs-core 2.8.x

      • An example docker-compose setup is available, with run instructions.

      • Support in the Kubernetes Helm chart is coming soon.

      • More documentation here.

    • Storage service for persisting data in distributed storage:

      • Available since openmrs-core 2.8.x

      • Support for local file system and volumes

      • S3 support coming soon

      • Extensible to any storage provider by implementing the StorageService interface from a module (see the storage sketch after this list).

    • Clustering support includes horizontal scaling.

  • Data Segregation Tooling

    • We currently recommend the Data Filter Module as the best available option for segregating data between different instances.

    • In the future we hope to work with our community to improve this approach.

  • Using pre-built libraries: We plan to investigate pre-built libraries that can help with easier multi-tenancy support, such as those already supported by Spring, so we do not need to rely on tools built in-house.

  • Database Merging for Centralization (early days)
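
Below is a minimal sketch of how a module might use the @Cacheable support described under Caching Library support above. The service class, cache name, and methods are illustrative assumptions rather than actual OpenMRS APIs; the annotations are the standard Spring caching annotations that the Infinispan-backed platform cache works through.

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

/**
 * Hypothetical module service showing how platform-managed caching
 * (backed by Infinispan in openmrs-core 2.8+) can be used through
 * Spring's caching annotations. The cache name "reportSummaries" is
 * assumed to be declared in the module's cache configuration file.
 */
@Service
public class ReportSummaryService {

    /**
     * The first call computes the summary; later calls with the same
     * reportId are served from the cache (local or distributed/replicated,
     * depending on how the platform cache is configured).
     */
    @Cacheable(value = "reportSummaries", key = "#reportId")
    public String getReportSummary(Integer reportId) {
        // Expensive computation or database query would go here.
        return "summary-for-" + reportId;
    }

    /**
     * Evict the cached entry when the underlying data changes so that
     * stale results are not served across the cluster.
     */
    @CacheEvict(value = "reportSummaries", key = "#reportId")
    public void invalidateReportSummary(Integer reportId) {
        // Intentionally empty: the annotation performs the eviction.
    }
}
```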
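
The storage service bullet above notes that any provider can be plugged in by implementing the StorageService interface from a module. The sketch below only illustrates the general shape of such an adapter: the SimpleStorage interface is a simplified stand-in defined for this example, not the actual openmrs-core 2.8 StorageService contract, so consult the platform javadocs for the real method signatures.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

/** Simplified stand-in for the platform storage abstraction (for illustration only). */
interface SimpleStorage {
    String saveData(InputStream data, String suggestedKey) throws IOException;
    InputStream getData(String key) throws IOException;
    boolean purgeData(String key) throws IOException;
}

/**
 * Hypothetical module-provided backend that writes objects to a local
 * directory. A provider for S3 or another distributed store would follow
 * the same pattern, swapping the file operations for SDK calls.
 */
class LocalDirectoryStorage implements SimpleStorage {

    private final Path baseDir;

    LocalDirectoryStorage(Path baseDir) throws IOException {
        this.baseDir = Files.createDirectories(baseDir);
    }

    @Override
    public String saveData(InputStream data, String suggestedKey) throws IOException {
        Files.copy(data, baseDir.resolve(suggestedKey), StandardCopyOption.REPLACE_EXISTING);
        return suggestedKey;
    }

    @Override
    public InputStream getData(String key) throws IOException {
        return Files.newInputStream(baseDir.resolve(key));
    }

    @Override
    public boolean purgeData(String key) throws IOException {
        return Files.deleteIfExists(baseDir.resolve(key));
    }

    // Minimal usage demonstration: save, read back, then purge an object.
    public static void main(String[] args) throws IOException {
        SimpleStorage storage = new LocalDirectoryStorage(Paths.get("storage-demo"));
        String key = storage.saveData(
                new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8)), "note.txt");
        try (InputStream in = storage.getData(key)) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        storage.purgeData(key);
    }
}
```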

We want Implementation feedback!

Are you using cloud or any of these resources to host your scaled implementation? Please let us know your challenges and findings!
Post to talk.openmrs.org and tag your post with “cloud”.