...
Filter the Observation resource by _lastUpdated to view the newly created observation for patient Jane Doe on the HAPI FHIR server.
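Such a filter can also be issued directly against the FHIR REST API. The example below is only a sketch: the base URL and the date are placeholders and should be replaced with the actual HAPI FHIR endpoint and the time of the update.

# Search Observations updated on or after a given date for patients named Jane
# (base URL and date are placeholders for the actual HAPI FHIR endpoint).
curl "http://localhost:8080/fhir/Observation?_lastUpdated=ge2024-01-01&patient.name=Jane"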
...
Observation resource
...
Observation resource
...
Observation resource
GETTING STARTED: REMOTE SETUP
In this section, we shall demonstrate how to connect via secure shell to a SIGDEP3 instance and start up the project remotely. The Apache Hive database management system is also used to display flattened data that can be used for visualization.
To get started, the following prerequisites are required for the remote setup:
A database management tool (this guide uses DBeaver)
An SSH connection set up to the remote instance of OpenMRS SIGDEP3
SSH CONNECTION
To begin with, a secure shell connection is established to the OpenMRS instance server test1.chis.org. An implementer is therefore required to set up a public key for the SSH connection to the instance. The illustration below shows the private and public SSH keys. The command cat id_rsa.pub displays the public SSH key, which is used to authenticate on the remote instance.
...
Accessing SSH key
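If no key pair exists yet, one can be generated locally and its public part copied to the instance with commands along the following lines. The remote username is an assumption and should be replaced with the actual account on test1.chis.org.

# Generate an SSH key pair locally (accept the default path ~/.ssh/id_rsa).
ssh-keygen -t rsa -b 4096
# Display the public key.
cat ~/.ssh/id_rsa.pub
# Copy the public key to the remote instance; replace "user" with the actual account.
ssh-copy-id user@test1.chis.org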
Upon successful setup of the public SSH key on the remote instance, the instance is accessed from Visual Studio Code. Click to open a remote window.
...
Click on the connection
Click on Connect to Host
...
Click connect host
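Connect to Host lists the hosts defined in the local SSH configuration. A minimal entry in ~/.ssh/config could look like the sketch below; the host alias and username are placeholders.

# Hypothetical entry in ~/.ssh/config; the alias and username are placeholders.
Host sigdep3-test1
    HostName test1.chis.org
    User implementer
    IdentityFile ~/.ssh/id_rsa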
Click on the instance to connect.
...
Click on instance
Upon successful connection to the instance, click to open a new terminal.
...
Click on terminal
HOW TO START UP THE SIGDEP3 EMR REMOTELY
Execute the command docker ps to list the running Docker containers.
...
Run docker ps
Note: Take note of the port on which the Spark database is exposed.
...
Apache Spark dbms forwarding port
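If the port is not obvious from the full listing, docker ps can be narrowed down to container names and published ports:

# Show only container names and their published ports.
docker ps --format "table {{.Names}}\t{{.Ports}}"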
Change directory into the project by executing the command: cd code/SIGDEP-3-Docker-Setup
...
Change directory
Start up the project by executing the command: docker compose -f docker-compose-remote-traefik.yml up
...
Startup project
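Running the command in the foreground prints the service logs to the terminal. To keep the stack running after the terminal is closed, the same command can be run in detached mode:

# Start the project in the background (detached mode).
docker compose -f docker-compose-remote-traefik.yml up -d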
The project will start up successfully.
...
Project started successfully
Expose the Apache Spark DB port 10001 by clicking on the Ports tab.
...
Expose the Apache Spark DBMS port
Click on Forward a Port and add the port. The port will be forwarded successfully.
...
Click on add port
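To verify that the forwarded port is reachable from the local machine, a quick check such as the following can be used, assuming netcat is installed locally:

# Check that the forwarded Spark port is reachable locally.
nc -zv localhost 10001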
Note: The docker-compose-remote-traefik.yml file contains the configurations for the networks. The illustration below shows the pipeline controller service for the remote instance.
...
Services: pipeline controller
The configuration for the Spark database used for visualization of data is displayed as shown below.
...
Services: Spark
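To confirm that the Spark service is running and that port 10001 is published, the compose services can be listed from the project directory:

# List the compose services and their published ports.
docker compose -f docker-compose-remote-traefik.yml ps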
HOW TO DISPLAY DATA ON THE APACHE HIVE DATABASE MANAGEMENT SYSTEM
In this section the connection to the Apache Hive database management system will be set up using the DBeaver client. The client will pull the flattened data that will be used for visualization. This guide provides steps on installation of the client.
Upon installation of the client, click to start it up.
...
Startup DBeaver
Note: you might be required to download the driver files for the Apache Hive connection.
Click on New database connection
...
Click on new database connection
Enter localhost as the host and the port number as exposed under the remote instance's forwarded ports. Click on the Finish button.
...
Enter host, port and click finish
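Behind the DBeaver dialog, the connection corresponds to a Hive JDBC URL of the form jdbc:hive2://localhost:10001/default. As an optional check, the same Hive-compatible endpoint can be reached from a terminal with the Beeline client, assuming it is installed locally:

# Connect to the Hive-compatible endpoint through the forwarded port with Beeline.
beeline -u "jdbc:hive2://localhost:10001/default"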
Upon successful connection, click on the default drop-down menu.
...
Connection successful
To view data, click on the default menu item and then click on Tables. The tables are displayed.
...
Click on tables under defaults
Trigger an incremental run within the data pipeline.
...
To view the flattened tables from patient resources, click on View. In this illustration patient_flat_2 is selected. Notice that the patient created under the local instance is displayed.
View flattened data: patient resource
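The same flattened view can also be queried from the terminal. The query below assumes the view lives in the default database shown above:

# Query the flattened patient view through Beeline.
beeline -u "jdbc:hive2://localhost:10001/default" -e "SELECT * FROM patient_flat_2 LIMIT 10;"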