Continuous Deployment

Would you like to get bug fixes and new features ASAP, rather than waiting for a release? You can, with 'Continuous Deployment'!

Is it safe to run between-release code?

If you're going to run between-release code, you must set up a process for testing it in a QA environment of your own before you put it in production. Keep an eye out for the following risks:

1. There could be features that you don't need that break the application.
2. There is no guarantee that a "done" feature will not have a major change before the software is released.
3. Developers assume that if code has not been released, it is fine to break compatibility with an earlier snapshot version.
4. Modules rarely make non-backwards-compatible data model changes, but you'll have no warning if one does happen, since module Liquibase updates are applied automatically, unlike core ones.

For Mirebalais, PIH runs the latest snapshot of most reference application modules (though running on OpenMRS 1.9.8). (You can see specific versions here.) We have a suite of scripted browser tests that CI runs against our devtest environment after every commit. (You can get a sense of what we're testing from the test names here.) Sometimes we release to production based just on this (plus, of course, the fact that each individual story we worked on passed testing). For bigger new features, if it's been a while since our last release to production, or we're just worried, we will first deploy to a user test server and do some extra manual testing. Ideally we'd do this for every release, but we just don't have the resources.

Since the vast majority of reference application code currently comes from PIH and Bahmni, we aren't very worried about other contributors inadvertently breaking things for us. But as the developer base working on the 2.x line grows, we'll need to keep an eye on whether this remains safe.

Also note that any time you deploy a snapshot build of anything into production, you no longer have reproducible builds, which makes it harder to investigate any bugs you find. (Given the constraints in most OpenMRS environments, this bad practice is something we just have to live with. But it's important to consciously decide to do this.)

Mitigating risk by testing pre-release code for your implementation

If you're going to be testing pre-releases for production, I suggest that you use Selenium to build up a suite of tests that work against your implementation-specific sample data. (Don't overdo it: these tests tend to be brittle, and you don't want to spend all your time investigating false positives and tweaking them. Think of them as smoke tests.)
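To make the shape of such a smoke test concrete, here is a minimal sketch using the Selenium Python bindings (the reference application's own suite is written in Java, but any binding works for your implementation-specific tests). The QA server URL, credentials, and element IDs below are placeholders to replace with your own, and it assumes a local Firefox/geckodriver setup:

```python
# Minimal smoke-test sketch against a QA server.
# URL, credentials, and element locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "http://qa.example.org/openmrs"  # your QA environment

driver = webdriver.Firefox()
driver.implicitly_wait(10)  # seconds to wait for elements to appear

try:
    # Smoke test 1: the login page loads and accepts a known test user.
    driver.get(BASE_URL)
    driver.find_element(By.ID, "username").send_keys("qa-user")
    driver.find_element(By.ID, "password").send_keys("qa-password")
    driver.find_element(By.ID, "loginButton").click()

    # Smoke test 2: after login, the home page shows something we expect
    # for our implementation-specific sample data.
    assert "Home" in driver.title, "Home page did not load after login"
finally:
    driver.quit()
```

A handful of checks like these, run against a copy of your own data, is usually enough to catch a between-release build that is badly broken for your workflows.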

What about having OpenMRS do this testing centrally?

Implementations must do their own testing against test data that is realistic for them, because central OpenMRS testing will never cover this. However, it would be nice if more functionality were tested centrally by OpenMRS CI servers on every build.

Once you have experience doing Selenium tests for your implementation, we'd be happy to have you contribute to the test suite we run against the reference application on every commit. It uses the same underlying technology (Selenium), but you write the tests in Java. Like this. The reference application has only a couple of these tests, but ideally we'd want a suite similar to the PIH Mirebalais one.

Other volunteers are also welcome for this!

Automated Testing and Deployment for small implementations?

Correct software development practice is to have a CI server that does this testing automatically. In your case you probably want to do this manually at first, but once it's working smoothly you should consider automation. A CI server is fundamentally just a UI for setting up scheduled and triggered scripts. This adds a lot of value for devs doing multiple commits per day, but not so much if you're testing a release weekly.

Another piece of good software practice (which is tedious to set up at first, but really nice in the long run) is for your deployments to different environments to be automated and managed through your CI server.

Currently, few implementations are anywhere near this level of sophistication, but I think it would be great if OpenMRS could work with you to come up with a very simple CI and deployment approach that's appropriate for the middle 80% of implementations.

Okay, but where is the actual code?

We use Bamboo at ci.openmrs.org to build and test the reference application on any commit to any of its modules. Simplified example: someone commits code to the allergyapi module. This runs the allergyapi tests. If those pass, it triggers the full refapp distro to be (a) built, (b) deployed to devtest01, and (c) tested by running the UI tests against it.

You can always find the latest build of the Reference Application Distribution at https://ci.openmrs.org/browse/REFAPP-OMODDISTRO/latest. Go there and check whether the build is green. If so, look for "Shared Artifacts".
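If you do this regularly, it can be handy to script the "is the build green?" check. Below is a small sketch that asks Bamboo for the latest result, assuming ci.openmrs.org exposes the standard Bamboo REST API; the plan key and the JSON field names are assumptions to verify against the actual server:

```python
# Sketch: check whether the latest REFAPP-OMODDISTRO build succeeded
# before downloading artifacts. Endpoint and field names are assumptions.
import json
import urllib.request

PLAN_URL = "https://ci.openmrs.org/rest/api/latest/result/REFAPP-OMODDISTRO/latest"

req = urllib.request.Request(PLAN_URL, headers={"Accept": "application/json"})
with urllib.request.urlopen(req) as response:
    result = json.load(response)

print("Build number:", result.get("buildNumber"))
print("State:", result.get("state"))  # e.g. "Successful" or "Failed"

if result.get("state") != "Successful":
    raise SystemExit("Latest build is not green -- do not deploy this one.")
```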

One artifact is openmrs-2.x-deb. This is a Debian package, suitable for installation on Ubuntu. This is what we actually deploy to our devtest, UAT, and demo servers at OpenMRS. One approach for you would be to set up a new Ubuntu server based on one of these .deb files; to upgrade to the latest build, you just use the dpkg command. This helps you keep nearly identical test and prod environments. Issues with this are (1) I don't know what happens if you have additional modules installed when you upgrade the Debian package, and (2) we haven't built this Debian package for production use, e.g. Tomcat with SSL, tuned MySQL parameters. If this is an approach you want to take, perhaps we can work on improving the .deb package and making it an official part of the release.

Alternatively, you can get the openmrs-2.x-modules-zip artifact. This is a .zip file that contains all modules and the openmrs core war file. You can upgrade your server by deleting your modules folder, copying in these modules, and restarting Tomcat. (Actually you need to do this in a way that doesn't delete additional modules you have installed; see the sketch below.) This file is just as big as the .deb file, currently 88MB, which is a big download for many places. At some point we could script a way to package only the snapshot versions, with instructions on which released versions to include, so implementations only download what they strictly need.
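Here is a rough sketch of that modules-zip upgrade, for illustration only. The modules folder path, the artifact location, and the list of locally added modules are placeholders for your own server layout; run it with whatever privileges your modules folder requires, and restart Tomcat afterwards:

```python
# Sketch of the modules-zip upgrade: swap in the distro modules while
# preserving any modules you added yourself. All paths are placeholders.
import shutil
import zipfile
from pathlib import Path

MODULES_DIR = Path("/usr/share/openmrs/.OpenMRS/modules")  # your modules folder
EXTRA_MODULES = {"myimplementation-1.0.omod"}              # modules not in the distro
ARTIFACT_ZIP = Path("/tmp/openmrs-2.x-modules.zip")        # downloaded artifact
BACKUP_DIR = Path("/tmp/modules-backup")

# 1. Back up the current modules folder so you can roll back quickly.
shutil.copytree(MODULES_DIR, BACKUP_DIR, dirs_exist_ok=True)

# 2. Remove the distro modules, but keep the ones you added yourself.
for omod in MODULES_DIR.glob("*.omod"):
    if omod.name not in EXTRA_MODULES:
        omod.unlink()

# 3. Unpack just the .omod files from the artifact into the modules folder.
with zipfile.ZipFile(ARTIFACT_ZIP) as archive:
    for member in archive.namelist():
        if member.endswith(".omod"):
            target = MODULES_DIR / Path(member).name  # flatten any zip subfolders
            with archive.open(member) as src, open(target, "wb") as dst:
                shutil.copyfileobj(src, dst)

print("Modules replaced; restart Tomcat to pick up the new versions.")
```

Keeping the backup folder around makes rollback easy: copy the old modules back and restart Tomcat.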

Note that you should record the build number of any artifacts that you download and put into production. That mitigates most of the problem of not having reproducible builds, because you can still get a pretty good idea of exactly what code you have deployed by working backwards from it.
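One lightweight way to do this is to append the build number and a timestamp to a deployment log every time you push an artifact to production. A small sketch (the log path and the values passed in are just examples):

```python
# Sketch: keep a simple log of which Bamboo build went to production when.
from datetime import datetime, timezone

def record_deployment(build_number, artifact, log_file="/var/log/openmrs-deployments.log"):
    line = "{}\tbuild #{}\t{}\n".format(
        datetime.now(timezone.utc).isoformat(), build_number, artifact)
    with open(log_file, "a") as log:
        log.write(line)

record_deployment(1234, "openmrs-2.x-modules.zip")  # example values
```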

This great information came from the Implementer's Mailing List, with special thanks to @Darius Jazayeri, @Vinay Venu and @Wyclif Luyima.

Running UI tests on Travis CI with SauceLabs

A guide is available on GitHub.