This article goes into more detail, beyond the basic functions available for backing up and restoring your database.

As a reminder, you can perform a point-in-time recovery (PITR) using the "Restore" button of the service you want to restore from a backup. Backups are taken automatically by Timescale Cloud and retained for a number of days that depends on your plan type.

Creating and restoring using the 'timescaledb-backup' tool

If you wish to create and restore from an additional set of backups, we recommend using our tool, timescaledb-backup, described in more detail here.

timescaledb-backup is a program for dumping and restoring a TimescaleDB database in a simpler, less error-prone, and more performant way. In particular, the standard PostgreSQL tools pg_dump and pg_restore have several limitations when applied to TimescaleDB:

  1. The PostgreSQL backup/restore tools do not support backup/restore across extension versions. For example, if you take a backup from TimescaleDB v1.7.1, you need to restore it to a database that is also running TimescaleDB v1.7.1, and then manually upgrade TimescaleDB to a later version.
  2. The backup/restore tools do not track which version of TimescaleDB is in the backup, so a developer needs to maintain additional external information to ensure the proper restore process.
  3. Users need to take manual steps to run pre- and post-restore hooks (database functions) in TimescaleDB to ensure correct behavior. Failure to execute these hooks can prevent restores from functioning correctly.
  4. The restore process cannot easily perform parallel restoration for greater speed/efficiency.
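
To illustrate the workflow, here is a minimal dump-and-restore sketch using timescaledb-backup's ts-dump and ts-restore commands. The --db-URI and --dump-dir flags follow the tool's README at the time of writing, and the connection strings and directory name are placeholders; check ts-dump --help for the options available in your version.

# Dump the source service into a local directory (placeholder connection string and path)
ts-dump --db-URI='postgres://tsdbadmin:<password>@<source_host>:<port>/defaultdb?sslmode=require' --dump-dir=./ts_dump

# Restore the dump into the target service; the tool tracks the TimescaleDB version
# and runs the pre- and post-restore hooks for you
ts-restore --db-URI='postgres://tsdbadmin:<password>@<target_host>:<port>/defaultdb?sslmode=require' --dump-dir=./ts_dump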

Creating and restoring using pg_dump "raw" backups

As an alternative approach, you can also use standard PostgreSQL tooling. The pg_dump command allows you to create backups that can be directly restored elsewhere if needed. Typical parameters for the command include these:

pg_dump '<service_url_from_portal>' -f <target_file/dir> -j <number_of_jobs> -F <backup_format>

The pg_dump command can also be run against one of the standby nodes, if any exist. Simply use the replica URI from the Timescale Cloud web console.
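
For instance, assuming a hypothetical replica URI copied from the web console (the hostname below is illustrative), the same backup could be taken from a standby like this:

pg_dump 'postgres://tsdbadmin:[email protected]:26882/defaultdb?sslmode=require' -f backup -j 2 -F directory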

For example, to create a backup in directory format (which can be used directly with pg_restore), using two concurrent jobs and storing the results in a directory called backup, you would run a command like this:

pg_dump 'postgres://tsdbadmin:[email protected]:26882/defaultdb?sslmode=require' -f backup -j 2 -F directory
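
To restore such a directory-format backup into another service, a sketch along the following lines should work. The target connection string is a placeholder, --no-owner is an assumption for the common case where roles differ between source and target, and the surrounding timescaledb_pre_restore()/timescaledb_post_restore() calls are the pre- and post-restore hooks mentioned earlier:

# Prepare the target database to receive TimescaleDB data
psql '<target_service_url_from_portal>' -c "SELECT timescaledb_pre_restore();"

# Restore the directory-format dump using two parallel jobs
pg_restore -d '<target_service_url_from_portal>' --no-owner -j 2 -F directory backup

# Return the target database to normal operation
psql '<target_service_url_from_portal>' -c "SELECT timescaledb_post_restore();"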

You could then, for example, pack all the files into a single tar file and upload it to S3:

export BACKUP_NAME=backup-$(date -I).tar
tar -cf $BACKUP_NAME backup/
s3cmd put $BACKUP_NAME s3://pg-backups/$BACKUP_NAME