Cloud Hosting and Cluster Deployment
This section contains deployment guides for various cloud providers and clustering options.
On this page:
Can OpenMRS be hosted on a cloud?
Yes! You can host OpenMRS with any of the usual hosting providers, such as AWS, Azure, or GCP; in your own regional or government cloud; or with any other provider that can host Kubernetes.
Default Cloud config: We have set up some default configurations you can use, showing how you can deploy OpenMRS and have it running in a cloud instance in a way that is more scalable than a traditional single-server deployment (such as a lone EC2 instance). Please see the Kubernetes deployment guide.
Cloud Deployment Guides: For further guidance on how to set up OpenMRS in these environments, see the guides below! (Note: the Kubernetes deployment guide at https://openmrs.atlassian.net/wiki/spaces/docs/pages/189464758 is the recommended cloud/cluster deployment that we support for the Platform and the O3 distribution.)
Cloud Hosting Tutorials Playlist
Cloud Support: How to Launch O3 in a Kubernetes cluster with a single command!
Detailed Tutorial: Deploying OpenMRS on Kubernetes
How is Multi-Tenancy supported?
We are working on the building blocks that will make it easier for implementers to run multiple instances of OpenMRS with fully automated setup, upgrades, and monitoring across instances.
Clustering Support
Repo for Clustering support: https://github.com/openmrs/openmrs-contrib-cluster Contains Terraform scripts and Helm charts to deploy the OpenMRS distribution in a cluster. See the README for additional details.
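To give a feel for what a Helm-based deployment looks like, a values override for the chart might resemble the fragment below. Every key here is a hypothetical illustration: the actual chart values and image names are defined in the openmrs-contrib-cluster repo itself, so consult its README and values.yaml before use.

```yaml
# Hypothetical values override -- consult the chart's own values.yaml for real keys.
replicaCount: 2
image:
  repository: openmrs/openmrs-reference-application-3-backend  # assumed image name
  tag: "nightly"
ingress:
  enabled: true
  host: openmrs.example.org
```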
MariaDB clustering. See MariaDB Cluster.
Available since openmrs-core 2.8.x.
Support for replication and failover.
Supported by the Kubernetes Helm chart.
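For replication, each MariaDB node needs binary logging enabled and a unique server ID. The fragment below is a minimal sketch of primary-node settings using standard MariaDB variables; the IDs, log name, and any cluster-specific options depend on your topology and are covered in the MariaDB Cluster guide.

```ini
# Minimal sketch of primary-node replication settings (my.cnf);
# server_id must be unique per node in the cluster.
[mariadb]
server_id     = 1
log_bin       = openmrs-bin
binlog_format = ROW
```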
Caching Library support for Clustering: In May 2025, backend support was added in Spring and Hibernate for the Infinispan caching library, replacing the previous caching library, Ehcache, which did not support clustering. See Infinispan Clustering.
Available since openmrs-core 2.8.x (included in Platform 2.8)
Configurable for local embedded cache or distributed/replicated embedded cache
Caches can be created by modules using simple yaml files
Used by Hibernate second-level/query cache and for API methods annotated with @Cacheable.
Supported by the Kubernetes Helm chart.
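As a sketch of what a module-supplied cache definition could look like, the YAML below is purely illustrative: the cache name, key names, and file location are assumptions, and the real schema is documented in the Infinispan Clustering page.

```yaml
# Hypothetical cache definition supplied by a module -- key names are
# illustrative only; see the Infinispan Clustering docs for the real schema.
conceptCache:
  mode: replicated        # or "local" for a single-node embedded cache
  expiration:
    lifespan: 3600000     # milliseconds
```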
Full-text search with an Elasticsearch cluster, which can be configured to replace the in-memory Lucene index for high availability and scalability. See ElasticSearch Cluster.
Available since openmrs-core 2.8.x
Example setup available for docker-compose with run instructions.
Supported by the Kubernetes Helm chart.
More documentation here.
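For local experimentation, a single-node Elasticsearch service in docker-compose can look like the fragment below. This shows only the Elasticsearch side using standard Elasticsearch Docker settings; the properties that point openmrs-core at the cluster are described in the ElasticSearch Cluster page, and a production cluster needs multiple nodes, security, and persistent volumes.

```yaml
# Minimal single-node Elasticsearch for local testing only.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
```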
Storage service for persisting data in distributed storage backends. See Storage Service.
Available since openmrs-core 2.8.x
Support for local file system and volumes
S3 coming soon
Extensible to any storage provider by implementing the StorageService interface from a module.
Supported by the Kubernetes Helm chart.
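The extension point is the StorageService interface in openmrs-core; its actual method signatures are in the openmrs-core javadoc. To illustrate the pattern of plugging in a storage backend from a module, the self-contained sketch below defines a hypothetical, simplified interface and a local-filesystem implementation; every name here is a stand-in, not the real OpenMRS API.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical, simplified stand-in for the real openmrs-core StorageService.
interface SimpleStorageService {
    String saveData(byte[] data, String suggestedKey) throws IOException;
    byte[] getData(String key) throws IOException;
    void purgeData(String key) throws IOException;
}

// Local-filesystem backend; a module could supply S3 or another backend instead.
class LocalStorageService implements SimpleStorageService {
    private final Path root;

    LocalStorageService(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    @Override
    public String saveData(byte[] data, String suggestedKey) throws IOException {
        Files.write(root.resolve(suggestedKey), data);
        return suggestedKey;
    }

    @Override
    public byte[] getData(String key) throws IOException {
        return Files.readAllBytes(root.resolve(key));
    }

    @Override
    public void purgeData(String key) throws IOException {
        Files.deleteIfExists(root.resolve(key));
    }
}

public class StorageSketch {
    public static void main(String[] args) throws IOException {
        SimpleStorageService storage =
                new LocalStorageService(Files.createTempDirectory("openmrs-storage"));
        String key = storage.saveData("hello".getBytes(StandardCharsets.UTF_8), "note.txt");
        System.out.println(new String(storage.getData(key), StandardCharsets.UTF_8));
        storage.purgeData(key);
    }
}
```

A real module would register its implementation with the OpenMRS service layer so that core and other modules resolve it transparently.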
Clustering support also includes horizontal scaling of the OpenMRS backend.
Support for Horizontal Scaling: https://openmrs.atlassian.net/browse/TRUNK-6299
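In Kubernetes terms, horizontal scaling is typically expressed as a HorizontalPodAutoscaler against the backend Deployment. The fragment below uses the standard autoscaling/v2 API; the deployment name and the replica/CPU numbers are assumptions to adapt to your chart.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: openmrs-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: openmrs-backend   # assumed deployment name from the chart
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```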
NEW! April 2025 10-minute video: how to set up O3 on a Kubernetes cluster with a single command: https://youtu.be/yM3YWOq_b_A
Data Segregation Tooling
We currently recommend the Data Filter Module as the best available option for segregating data between different instances.
In the future we hope to work with our community to improve this approach.
Using pre-built libraries: We plan to investigate pre-built libraries that can ease multi-tenancy support, such as those already supported by Spring, so we do not need to rely on tools built in-house.
Database Merging for Centralization (early days)
We want Implementation feedback!
Are you using cloud or any of these resources to host your scaled implementation? Please let us know your challenges and findings!
Post to talk.openmrs.org and tag your post with “cloud”.
Examples: Recent Talk Threads related to Cloud/Clustering: https://talk.openmrs.org/tag/cloud