This is an attempt to reorganize and update Testing and its child pages. It is still under construction.

...


The key points are:

  • We use Docker to start up an OpenMRS server on Travis CI before running tests (a fresh instance, including the database, for the whole test suite).
  • We run all tests against two servers in parallel: one using MySQL and the other MariaDB.
  • Database migration scripts are run when setting up a fresh server instance, testing the upgrade from OpenMRS Platform XXX (TODO: determine version).
  • Tests are executed by Travis CI.
  • Saucelabs is used as a client with a browser driven by the tests. Saucelabs connects to the server instance running on Travis CI through a tunnel (there is no access to the test server from the outside world); see the connection sketch below this list. Saucelabs records screencasts and takes screenshots when running tests, which can be used for debugging.
  • We test on Firefox 42 and Chrome 48.
  • We run tests in parallel (currently 5 at a time).
  • A failing test is executed twice more to verify whether the failure is reproducible; if the test passes in a consecutive run, it does not fail the build (see the retry sketch below).
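
To make the tunnel setup concrete, here is a minimal sketch of how a test could obtain a remote browser from Saucelabs, assuming Selenium's RemoteWebDriver. The class name, the capability values, and the SAUCE_USERNAME/SAUCE_ACCESS_KEY environment variables are assumptions for illustration, not necessarily what the distro's code does:

    import java.net.URL;

    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class SaucelabsDriverFactory {

        // Hypothetical factory: credentials are read from environment variables
        // set by the CI build. The tunnel-identifier capability binds the
        // browser session to the Sauce Connect tunnel opened for this job,
        // which is how Saucelabs reaches a server that is not accessible from
        // the outside world.
        public static RemoteWebDriver createDriver() throws Exception {
            DesiredCapabilities capabilities = DesiredCapabilities.firefox();
            capabilities.setCapability("version", "42");
            capabilities.setCapability("tunnel-identifier", System.getenv("TRAVIS_JOB_NUMBER"));

            URL hub = new URL(String.format("http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
                    System.getenv("SAUCE_USERNAME"), System.getenv("SAUCE_ACCESS_KEY")));
            return new RemoteWebDriver(hub, capabilities);
        }
    }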
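
The re-run behavior in the last bullet can be pictured as a JUnit rule along the following lines. This is a sketch only; RetryRule is a hypothetical name and the test framework may implement retries differently:

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    public class RetryRule implements TestRule {

        private final int retries;

        public RetryRule(int retries) {
            this.retries = retries;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    Throwable lastFailure = null;
                    // One initial attempt plus `retries` re-runs; the test fails
                    // the build only if every attempt fails.
                    for (int attempt = 0; attempt <= retries; attempt++) {
                        try {
                            base.evaluate();
                            return;
                        } catch (Throwable t) {
                            lastFailure = t;
                        }
                    }
                    throw lastFailure;
                }
            };
        }
    }

A test class would then declare @Rule public RetryRule retry = new RetryRule(2); so that only a test failing all three attempts fails the build.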

...

  1. Builds on Travis CI are triggered by https://ci.openmrs.org/browse/REFAPP-OMODDISTRO. The Bamboo build waits for the results from the Travis CI build before proceeding.
  2. Visit https://saucelabs.com/u/openmrs (open the Automated Tests tab) to watch a recording or browse step-by-step screenshots and see why a particular test failed.
  3. Open the failing build at https://travis-ci.org/openmrs/openmrs-distro-referenceapplication and examine the build logs. Server logs are included at the end of each build log, which is also helpful at times. If the build log is too long to be displayed in Travis CI, look for the Raw Log button at the top.
  4. Note that a failing test is executed two more times to confirm the failure is reproducible. If the test passes in a consecutive run, it does not fail the build. Earlier runs that failed will still appear in Saucelabs as failing.
  5. Finally, you can try running UI tests locally against a test server of your choice, e.g. a local server instance. Note that UI tests may pass against a local server even though they fail on remote servers. This is usually caused by network latency and indicates that a test needs to wait before taking an action, as sketched below. The UI test framework, if used as outlined above, prevents such situations in most cases.
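
As an illustration of waiting before taking an action, an explicit wait in Selenium looks roughly like the sketch below. The helper name clickWhenReady and the 30-second timeout are made-up examples; the UI test framework referenced above provides its own wrappers for this:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitBeforeClicking {

        // Waits up to 30 seconds for the element to become clickable instead of
        // clicking immediately, which is what tends to break under the network
        // latency of a remote server.
        public static void clickWhenReady(WebDriver driver, By locator) {
            WebElement element = new WebDriverWait(driver, 30)
                    .until(ExpectedConditions.elementToBeClickable(locator));
            element.click();
        }
    }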

...