Timescale Cloud automates backup management. All backups are encrypted with service-specific keys, and point-in-time recovery is supported, allowing you to recover the system to any point in time within the backup window. Timescale Cloud stores backups in the closest available cloud storage.

Creating pg_dump "raw" backups

If you wish to keep an additional set of backups, you can create them easily using standard PostgreSQL tooling. The pg_dump command creates backups that can be restored directly elsewhere if needed. Typical parameters for the command include:

pg_dump '<service_url_from_portal>' -f <target_file/dir> -j <number_of_jobs> -F <backup_format>

The pg_dump command can also be run against one of the standby nodes, if any exist. Simply use the replica URI from the Timescale Cloud web console instead of the primary URI.
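For instance, a backup from a standby would look like the following sketch; the host, port, and password below are placeholders, and the real replica URI should be copied from the web console:

```shell
# Same pg_dump invocation as against the primary, but pointed at the
# replica URI (placeholder values shown) to avoid loading the primary.
pg_dump 'postgres://tsdbadmin:<password>@replica.<project>.timescaledb.io:<port>/defaultdb?sslmode=require' \
  -f backup -j 2 -F directory
```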

For example, to create a backup in directory format (which can be used directly with pg_restore), using two concurrent jobs and storing the results in a directory called backup, you would run a command like this:

pg_dump 'postgres://tsdbadmin:[email protected]:26882/defaultdb?sslmode=require' -f backup -j 2 -F directory
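A directory-format backup like this can later be restored with pg_restore. A minimal sketch, assuming a target service URL (the URL below is a placeholder for whatever service you restore into):

```shell
# Restore the directory-format dump created above into another
# PostgreSQL service, using the same number of parallel jobs.
# The connection URL is a placeholder, not a real service.
pg_restore -d 'postgres://tsdbadmin:<password>@<target_host>:<port>/defaultdb?sslmode=require' \
  -j 2 backup
```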

You could then, for example, pack all the files into a single tar archive and upload it to S3:

export BACKUP_NAME=backup-$(date -I).tar
tar -cf $BACKUP_NAME backup/
s3cmd put $BACKUP_NAME s3://pg-backups/$BACKUP_NAME
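The tar step can be sanity-checked locally before uploading. A small sketch, assuming pg_dump has already written its output to ./backup (here an empty placeholder directory is created so the script is self-contained; the bucket name pg-backups is only an example):

```shell
# Build a date-stamped archive of the pg_dump output directory and
# verify it is readable before uploading it anywhere.
mkdir -p backup                          # placeholder for the pg_dump -F directory output
BACKUP_NAME="backup-$(date -I).tar"      # date -I yields YYYY-MM-DD
tar -cf "$BACKUP_NAME" backup/
tar -tf "$BACKUP_NAME" > /dev/null       # fails if the archive is corrupt
# s3cmd put "$BACKUP_NAME" s3://pg-backups/"$BACKUP_NAME"   # requires configured s3cmd
```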