...

...


Info
This is an attempt to reorganize and update Testing and its child pages. It is still under construction.

Tests in OpenMRS are run on every commit by ci.openmrs.org and GitHub Actions.

We aim to start organizing tests in the OpenMRS codebase into five categories:

...

  1. Each class should have a corresponding class with unit tests e.g. Concept should have a corresponding ConceptTest class.
  2. If you have a class implementing an interface, then create a test class for the actual implementation.
  3. Classes with unit tests may extend BaseContextMockTest if they need to mock legacy code calling services with Context.get...Service().
  4. Classes with unit tests must not extend BaseContextSensitiveTest or its subclasses BaseWebContextSensitiveTest, BaseModuleContextSensitiveTest, and BaseModuleWebContextSensitiveTest.
  5. The test method name should start with the name of the tested method (unit of work), followed by "_should" and the expected behavior, e.g. toString_shouldIncludeNameAndDescriptionFields_ifNotBlank. Include "_if" when the expected behavior depends on some state or condition.
  6. It is considered good practice to follow the //given //when //then pattern in tests (see the first sketch after this list).
  7. Always assert with assertThat using static import for org.junit.Assert.* (explained here). The use of assertFalse, assertTrue, assertEquals is deprecated and not allowed in new tests.

  8. Prefer implementing FeatureMatcher if you cannot find any suitable matcher in Matchers.* (see the first sketch after this list).
  9. Prefer using @Mock-annotated test class fields for creating mocks and @InjectMocks for injecting them into tested objects. See BaseContextMockTest and the second sketch after this list.
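
A minimal sketch of guidelines 5-8 above. The ConceptSummary class is hypothetical and defined inline only to keep the example self-contained; the JUnit 4 and Hamcrest APIs are the real ones referenced in the guidelines.

    import static org.hamcrest.Matchers.containsString;
    import static org.hamcrest.Matchers.equalTo;
    import static org.junit.Assert.assertThat;

    import org.hamcrest.FeatureMatcher;
    import org.hamcrest.Matcher;
    import org.junit.Test;

    public class ConceptSummaryTest {

        @Test
        public void toString_shouldIncludeNameAndDescriptionFields_ifNotBlank() {
            // given
            ConceptSummary summary = new ConceptSummary();
            summary.setName("Aspirin");
            summary.setDescription("Pain relief");

            // when
            String result = summary.toString();

            // then
            assertThat(result, containsString("Aspirin"));
            assertThat(result, containsString("Pain relief"));
            assertThat(summary, hasName(equalTo("Aspirin")));
        }

        // Guideline 8: a FeatureMatcher extracting the "name" feature, for cases
        // where no matcher in Matchers.* fits.
        private static Matcher<ConceptSummary> hasName(Matcher<? super String> nameMatcher) {
            return new FeatureMatcher<ConceptSummary, String>(nameMatcher, "name", "name") {
                @Override
                protected String featureValueOf(ConceptSummary actual) {
                    return actual.getName();
                }
            };
        }

        // Hypothetical class under test, defined here only so the sketch compiles on its own.
        static class ConceptSummary {
            private String name;
            private String description;

            void setName(String name) { this.name = name; }
            void setDescription(String description) { this.description = description; }
            String getName() { return name; }

            @Override
            public String toString() {
                return name + " - " + description;
            }
        }
    }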

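And a sketch of guidelines 3 and 9: @Mock and @InjectMocks fields in a test extending BaseContextMockTest. The ConceptIdFormatter class under test is hypothetical and defined inline; ConceptService and BaseContextMockTest are the OpenMRS types referenced in the guidelines (assuming the usual org.openmrs.api and org.openmrs.test packages, and that BaseContextMockTest initializes the Mockito annotations as described above).

    import static org.hamcrest.Matchers.is;
    import static org.junit.Assert.assertThat;
    import static org.mockito.Mockito.when;

    import org.junit.Test;
    import org.mockito.InjectMocks;
    import org.mockito.Mock;
    import org.openmrs.Concept;
    import org.openmrs.api.ConceptService;
    import org.openmrs.test.BaseContextMockTest;

    public class ConceptIdFormatterTest extends BaseContextMockTest {

        @Mock
        private ConceptService conceptService;

        @InjectMocks
        private ConceptIdFormatter formatter = new ConceptIdFormatter();

        @Test
        public void format_shouldReturnUnknown_ifConceptDoesNotExist() {
            // given
            when(conceptService.getConcept(7)).thenReturn(null);

            // when
            String result = formatter.format(7);

            // then
            assertThat(result, is("unknown"));
        }

        // Hypothetical class under test; Mockito injects the mocked ConceptService
        // into its conceptService field.
        static class ConceptIdFormatter {
            private ConceptService conceptService;

            String format(Integer conceptId) {
                Concept concept = conceptService.getConcept(conceptId);
                return concept == null ? "unknown" : "Concept #" + conceptId;
            }
        }
    }
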
Component Tests

...


Integration Tests

Integration tests run against a live instance of OpenMRS. Currently, our integration tests focus on testing the Reference Application user interface.

...


 

The key points are:

  • We use Docker to start up an OpenMRS server on Travis-CI before running tests (a fresh instance, including the database, for the whole test suite).
  • We run all tests against two servers in parallel: one using MySQL and the other MariaDB.
  • Database migration scripts are run when setting up a fresh server instance, testing the upgrade from OpenMRS Platform XXX (TODO: determine version).
  • Tests are executed by Travis-CI.
  • Saucelabs is used as the client, providing a browser that is driven by the tests. Saucelabs connects to the server instance running on Travis-CI through a tunnel (the test server is not accessible from the outside world). Saucelabs records screencasts and takes screenshots when running tests, which can be used for debugging.
  • We test on Firefox 42 and Chrome 48.
  • We run tests in parallel (currently 5 at a time).
  • A failing test is executed twice more to verify whether the failure is reproducible. If the test passes in the consecutive runs, it does not fail the build.

When writing UI tests, you have to follow guidelines aimed at improving test stability, readability, and maintainability.

...

If you have questions, please post on talk.openmrs.org.

Debugging

If you are notified about a test failure, the following should help you figure out why:

  1. Builds on Travis CI are triggered by https://ci.openmrs.org/browse/REFAPP-OMODDISTRO. The Bamboo build waits for the results from the Travis CI build before proceeding.
  2. Visit https://saucelabs.com/u/openmrs (open the Automated Tests tab) to watch a recording or step by step screenshots to see why a particular test failed.
  3. Open the failing build at https://travis-ci.org/openmrs/openmrs-distro-referenceapplication and see the build logs. We include server logs at the end of each build log, which are also helpful. At times the build log is too long to be displayed in Travis CI, so look for the Raw Log button at the top.
  4. Note that a failing test is executed two more times to confirm that the issue is reproducible. If the test passes in the consecutive runs, it does not fail the build. Previous runs that failed will still appear in Saucelabs as failing.
  5. Finally, you can try running UI tests locally against a test server of your choice, e.g. a local server instance. Note that UI tests run against local servers may not fail even though they fail on remote servers. This is usually caused by network latency and indicates that a test needs to wait before taking an action, as in the sketch below. The UI test framework, if used as outlined above, prevents such situations in most cases.
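
To illustrate that last point, a minimal sketch of an explicit wait with Selenium. The Selenium 3 style constructor, locator, and 30 second timeout are illustrative choices; the UI test framework wraps this kind of waiting for you when used as outlined above.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitingExample {

        // Waits until the element is clickable instead of acting on it immediately,
        // so the test tolerates network latency on remote servers.
        public static WebElement waitUntilClickable(WebDriver driver, By locator) {
            WebDriverWait wait = new WebDriverWait(driver, 30); // timeout in seconds
            return wait.until(ExpectedConditions.elementToBeClickable(locator));
        }
    }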

Performance Tests

To be addressed...

Manual Tests

...