Space partitions: Additional (space) partitioning addresses a very specialized use case, and most users will not need it.

Space partitions use hashing: every distinct value of the space-partitioning column is hashed to one of N buckets. Remember that we are already using (flexible) time intervals to manage chunk sizes; the main purpose of space partitioning is to enable parallel I/O to the same time interval.
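As a concrete illustration, the sketch below creates a hypertable that is hash-partitioned on a second column in addition to time. The table, column names, and partition count are illustrative assumptions, not requirements:

```sql
-- Hypothetical schema: a metrics table partitioned by time and,
-- via hashing on device_id, into 4 space partitions.
CREATE TABLE conditions (
  time        TIMESTAMPTZ      NOT NULL,
  device_id   TEXT             NOT NULL,
  temperature DOUBLE PRECISION
);

SELECT create_hypertable('conditions', 'time',
                         partitioning_column => 'device_id',
                         number_partitions   => 4);
```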

Parallel I/O is beneficial in two scenarios: (a) two or more concurrent queries should be able to read from different disks in parallel, or (b) a single query should be able to use query parallelization to read from multiple disks in parallel.

Note that query parallelization in PostgreSQL 9.6 (and 10) does not support querying different hypertable chunks in parallel; it only works within a single physical table (and thus a single chunk). We may add our own support for this in the future, but it is not available today.

Thus, users looking for parallel I/O have two options:

  1. Use a RAID setup across multiple physical disks, and expose a single logical disk to the hypertable (i.e., via a single tablespace).
  2. For each physical disk, add a separate tablespace to the database. TimescaleDB allows you to add multiple tablespaces to a single hypertable (although under the covers, each underlying chunk is mapped by TimescaleDB to exactly one tablespace, and thus one physical disk); see the sketch after this list.
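To illustrate option 2, the sketch below attaches one tablespace per physical disk to the hypertable from the earlier example. The tablespace names and mount paths are hypothetical, and it assumes a TimescaleDB version that provides attach_tablespace:

```sql
-- Hypothetical tablespaces, one per physical disk.
CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/pgdata';
CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';

-- Attach both to the hypertable; TimescaleDB places each chunk
-- on exactly one of the attached tablespaces.
SELECT attach_tablespace('disk1', 'conditions');
SELECT attach_tablespace('disk2', 'conditions');
```

With two disks attached this way, the one-space-partition-per-disk recommendation below would correspond to number_partitions => 2.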

We recommend a RAID setup when possible, as it supports both forms of parallelization described above (i.e., separate queries reading from separate disks, and a single query reading from multiple disks in parallel). The multiple-tablespace approach supports only the former. With a RAID setup, no space partitioning is required.

That said, if you do use space partitions, we recommend one space partition per disk.

TimescaleDB does not benefit from a very large number of space partitions (such as one partition per unique value you expect in the partitioning field). A very large number of partitions leads both to poorer per-partition load balancing (hashing distributes items across partitions less evenly) and to much higher planning latency for some types of queries.
