
Best Practices for Operating Data Virtuality on Linux


When installing Data Virtuality in your own data center or cloud environment, there are a few best practices to adopt to ensure smooth operation. This article gives an overview of those best practices.

 

Backing up the Data Virtuality Configuration

 

To be prepared for disaster recovery, or to partially roll back to an older state, set up a regular backup of the internal configuration database.

To automate the backup, first create a target folder:

 

$ mkdir -p /opt/datavirtuality/backups/dv
$ chown -R datavirtuality:datavirtuality /opt/datavirtuality/backups

 

Now create a new file called /opt/datavirtuality/run_dv_backup.sh with the following content (adapt the password parameter accordingly):

 

#!/bin/bash
# export the internal configuration database via the Data Virtuality CLI export tool
cd /opt/datavirtuality/dvserver/bin/cli-export-1.0 || exit 1
# timestamped file name, e.g. export-2024_01_31-02_00_00.sql
today=`date '+%Y_%m_%d-%H_%M_%S'`
filename="/opt/datavirtuality/backups/dv/export-$today.sql"
./export.sh --username admin --password admin --host localhost --file "$filename"
# compress the export to save disk space
gzip "$filename"

 

Ensure the file is executable for the datavirtuality user:

 

$ chown datavirtuality:datavirtuality /opt/datavirtuality/run_dv_backup.sh
$ chmod +x /opt/datavirtuality/run_dv_backup.sh
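
Before scheduling the script, you can verify that it works by running it once as the datavirtuality user and checking that a compressed export appears in the target folder:

$ sudo -u datavirtuality /opt/datavirtuality/run_dv_backup.sh
$ ls -lh /opt/datavirtuality/backups/dv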

 

You can now either set up a cron job to generate the backup on a regular basis (see the example below), or schedule the script from within Data Virtuality:

 

CALL "SYSADMIN.execExternalProcess"(    "command" => '/opt/datavirtuality/run_dv_backup.sh')

 

When running the script via a job in Data Virtuality, you may want to enable a notification for the job so that you are alerted if the backup generation fails.

 

Important: In this example, the backup is stored on the same server that runs Data Virtuality. If this server crashes, both your Data Virtuality installation and your backups are lost. It is advisable either to back up the server's file system regularly, or to adapt the script above to use a network share or cloud storage as the target directory.
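
As a sketch of the cloud variant, the backup script could be extended with an upload step after the gzip call. This assumes the AWS CLI is installed and configured for the datavirtuality user; the bucket name my-dv-backups is a placeholder:

# upload the compressed export to S3 (bucket name is a placeholder)
aws s3 cp "$filename.gz" "s3://my-dv-backups/dv-config-backups/"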

 

Purging Old Logs/Backups

 

Regularly generated backups, as well as log files, will occupy more and more disk storage over time. To avoid eventually running out of free disk space, old logs and backups should be purged.
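
To get an impression of how much space backups and logs currently occupy, standard tools can be used:

$ du -sh /opt/datavirtuality/backups /opt/datavirtuality/dvserver/standalone/log
$ df -h /opt/datavirtuality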

To automate the purge, create a new script called /opt/datavirtuality/purge_backups_logs.sh with the following content:

 

#!/bin/bash
# remove backup files older than 20 days (-type f skips the directories themselves)
find /opt/datavirtuality/backups -type f -atime +20 -exec rm -f {} \;
# remove server.log files older than 30 days
find /opt/datavirtuality/dvserver/standalone/log/server.log* -atime +30 -exec rm -f {} \;

 

Make sure the file is executable for the datavirtuality user:

 

$ chown datavirtuality:datavirtuality /opt/datavirtuality/purge_backups_logs.sh
$ chmod +x /opt/datavirtuality/purge_backups_logs.sh

 

This script can be scheduled as a cron job or executed directly via SQL from Data Virtuality:

 

CALL "SYSADMIN.execExternalProcess"(    "command" => '/opt/datavirtuality/purge_backups_logs.sh')
