Presented by:

Matvey Arye

Timescale

Mat has been working on data infrastructure in both academia (PhD, Princeton) and industry. As one of TimescaleDB's core architects, he works on performance, scalability, and query power. Previously, he attended Stuyvesant, The Cooper Union, and Princeton.

Grafana and Prometheus have become a popular duo for collecting, querying, and graphing metrics, giving teams greater clarity on their operations. But while Prometheus has its own time-series storage subsystem built specifically for metrics monitoring, many teams have found they need long-term, persistent storage that also allows more complex queries to be run across a larger dataset.

In this talk, we take a somewhat heretical stance in the monitoring world and describe why and how we chose to use PostgreSQL as a Prometheus backend to support those complex queries (and get a full SQL interface). Until recently, the databases supported by Grafana were NoSQL systems offering SQL-like or custom query languages that were limited in scope compared to ANSI SQL and designed with specific data models and architectures in mind. Yet with the recent addition of the MySQL and PostgreSQL data sources, full SQL is now available to Grafana users. We will also describe our work on TimescaleDB, an open-source time-series database optimized for scalable data ingest and complex query performance, which enables PostgreSQL to scale to classic monitoring volumes.
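
As a rough sketch of what this looks like in practice (the table and column names below are illustrative, not taken from the talk), TimescaleDB installs as a PostgreSQL extension and turns an ordinary table into a time-partitioned hypertable:

    -- Enable the TimescaleDB extension in an existing PostgreSQL database.
    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- An ordinary PostgreSQL table for metrics (illustrative schema).
    CREATE TABLE metrics (
        time   TIMESTAMPTZ      NOT NULL,
        name   TEXT             NOT NULL,
        value  DOUBLE PRECISION NOT NULL,
        labels JSONB
    );

    -- Convert it into a hypertable: TimescaleDB transparently partitions
    -- the data into time-based chunks, while queries still target the
    -- single logical table with full SQL.
    SELECT create_hypertable('metrics', 'time');

Because a hypertable still looks like a regular PostgreSQL table to clients, Grafana's PostgreSQL datasource can query it directly.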

We present pg_prometheus, a custom Prometheus datatype for PostgreSQL, as well as prometheus-postgresql-adapter, a remote storage adapter for PostgreSQL, and showcase the PostgreSQL datasource, which enables Grafana users to leverage TimescaleDB. We'll discuss the power of SQL for visualizing data using a variety of examples, such as more complex queries involving multiple WHERE predicates, sub-queries, and limits. We'll show why these complex queries are often useful, and even necessary, to act effectively on the data being collected, and how they can run even across billions of rows of data.
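
To make the pg_prometheus idea concrete, here is a sketch following the conventions in its README (the prom_sample type and accessor functions come from the extension; the table name is illustrative):

    -- Load the extension that defines the prom_sample datatype.
    CREATE EXTENSION pg_prometheus;

    -- A single prom_sample column stores samples in the native
    -- Prometheus exposition format.
    CREATE TABLE metrics (sample prom_sample);

    -- Insert a raw sample: metric_name{labels} value timestamp_ms.
    INSERT INTO metrics
    VALUES ('cpu_usage{service="nginx",host="machine1"} 34.6 1494595898000');

    -- Accessor functions unpack the sample's fields for querying.
    SELECT prom_time(sample), prom_labels(sample), prom_value(sample)
    FROM metrics
    WHERE prom_name(sample) = 'cpu_usage';

And to illustrate the kind of Grafana panel query this unlocks, here is a sketch assuming a normalized metrics table with time, name, value, and labels columns ($__timeFilter is Grafana's macro for the dashboard's selected time range; time_bucket is TimescaleDB's time-grouping function):

    -- Average CPU usage per minute for one host, restricted to the
    -- time range selected in the Grafana dashboard.
    SELECT time_bucket('1 minute', time) AS time,
           avg(value) AS avg_cpu
    FROM metrics
    WHERE name = 'cpu_usage'
      AND labels @> '{"host": "machine1"}'
      AND $__timeFilter(time)
    GROUP BY 1
    ORDER BY 1;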

Date:
Duration:
50 min
Room:
Conference:
PostgresConf US 2018
Language:
English
Track:
Use Cases
Difficulty:
Medium