Monitor ClickHouse DB on Instance

A ClickHouse database (DB) on an instance is monitored using sfAgent configured with the ClickHouse plugin.

The ClickHouse plugin has been tested on the following version and OS:

- ClickHouse version - 188.8.131.52
- OS - Ubuntu 20.04.1
To get started with the ClickHouse integration, create a user and grant it permissions to collect data from your ClickHouse DB.

Create a read-only user in the <users> section of the users.xml configuration file:

<!-- create user in <users> -->
<!-- set profile to readonly -->

- Save and restart the server.
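As a concrete illustration of the two comments above, a read-only user entry in users.xml might look like the sketch below. The user name `readonly_user` and the password placeholder are illustrative assumptions; only the `<users>`/`<profile>readonly</profile>` structure is standard ClickHouse configuration.

```xml
<!-- users.xml: add an entry inside the <users> section.
     User name and password below are illustrative. -->
<users>
    <readonly_user>
        <password>choose_a_strong_password</password>
        <networks>
            <ip>::/0</ip>  <!-- restrict to the agent's address in production -->
        </networks>
        <profile>readonly</profile>  <!-- built-in read-only settings profile -->
        <quota>default</quota>
    </readonly_user>
</users>
```

After saving, restart the server (for example with `sudo systemctl restart clickhouse-server` on systemd-based distributions).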
Add the following configuration to the config.yaml file, located in the /opt/sfagent/ directory:

- name: clickhouse
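The single line above is the minimal plugin entry. A fuller entry might look like the sketch below; every key other than `name: clickhouse` (interval, connection fields, credentials) is an assumption based on typical sfAgent plugin configs, so verify the exact keys against your sfAgent version's documentation.

```yaml
# Hypothetical sfAgent plugin entry; key names other than `name`
# are illustrative and should be checked against your sfAgent version.
plugins:
  - name: clickhouse
    enabled: true
    interval: 300            # collection interval in seconds (assumed)
    config:
      host: localhost        # ClickHouse host (assumed)
      port: 8123             # ClickHouse HTTP interface default port
      username: readonly_user   # the read-only user created earlier (name illustrative)
      password: "<password>"
```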
View Database Metrics
Go to the Application tab in SnappyFlow and navigate to your Project > Application > Dashboard.
In the dashboard window, ClickHouse DB metrics are displayed in the Metrics section.

Note: Once the ClickHouse configuration settings are done, the ClickHouse plugin is automatically detected within the Metrics section.
To access the unprocessed data gathered from the plugins, navigate to the Browse data section and choose the
| Description |
|---|
| Number of times the Context lock was acquired or an attempt was made to acquire it. This is a global lock. |
| Number of compressed blocks read from compressed sources (files, network). |
| Number of uncompressed bytes read from compressed sources (files, network). |
| Total time spent waiting for the read syscall. This includes reads from the page cache. |
| Total time spent waiting for the write syscall. This includes writes to the page cache. |
| Total number of distributed connection failures that have occurred on the ClickHouse DB server. |
| Bytes written to the filesystem for data INSERTed to MergeTree tables. |
| Number of query processing threads. |
| Time since server start (in seconds). |
| Number of files opened. |
| Number of ZooKeeper user exceptions. |
| Number of requests to ZooKeeper in flight. |
| Number of active tasks in BackgroundDistributedSchedulePool. This pool is used for distributed sends that are done in the background. |
| Number of active tasks in BackgroundProcessingPool for moves. |
| Number of active tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc. |
| Number of threads in the global thread pool running a task. |
| Number of threads in the global thread pool. |
| Number of threads in local thread pools running a task. |
| Number of threads in local thread pools. The threads in local thread pools are taken from the global thread pool. |
| Number of connections to the HTTP server. |
| Maximum number of parts per partition across all partitions of all tables of the MergeTree family. Values larger than 300 indicate misconfiguration, overload, or massive data loading. |
| Total time spent waiting for ZooKeeper, in microseconds. |
| Number of mutations (ALTER DELETE/UPDATE). |
| Maximum number of INSERT operations in the queue (still to be replicated) across Replicated tables. |
| Maximum number of merge operations in the queue (still to be applied) across Replicated tables. |
| Sum of INSERT operations in the queue (still to be replicated) across Replicated tables. |
| Sum of merge operations in the queue (still to be applied) across Replicated tables. |
| An internal metric of the low-level memory allocator (jemalloc). |
| Number of databases available on the server. |
| Total number of tables in the database. |
| Number of Replicated tables that are currently in a read-only state due to re-initialization after ZooKeeper session loss, or due to startup without ZooKeeper configured. |
| Number of bytes allocated for memory arenas (used for GROUP BY and similar operations). |
| Number of hard page faults in query execution threads. High values indicate either that swap was left enabled on the server, eviction of memory pages of the ClickHouse binary during very high memory pressure, or successful usage of the 'mmap' read method for table data. |
| Total time spent by processing threads (queries and other tasks) executing CPU instructions in user space. This includes time the CPU pipeline was stalled due to cache misses, branch mispredictions, hyper-threading, etc. |
| Total transactions in ZooKeeperSession, ZooKeeperWatch, and ZooKeeperRequest. ZooKeeperSession - number of sessions (connections) to ZooKeeper. ZooKeeperWatch - number of watches (event subscriptions) in ZooKeeper. ZooKeeperRequest - number of requests to ZooKeeper in flight. |
| Number of INSERT queries that are throttled due to a high number of active data parts for a partition in a MergeTree table. |
| Total time spent by processing threads (queries and other tasks). |
| Longest running query (in ms). |
| Number of queries to be interpreted and potentially executed. May include internal queries initiated by ClickHouse itself. Does not count subqueries. |
| Same as Query, but only for SELECT queries. |
| Uncompressed bytes (for columns as they are stored in memory) that were read for background merges. This is the number before the merge. |
| Number of launched background merges. |
| Rows read for background merges. This is the number of rows before the merge. |
| Total time spent on background merges. |
| Number of DNS errors. |
| Number of INSERT queries performed. |
| Number of failed queries. |
| Number of failed SELECT queries. |
| Number of rows INSERTed to all tables. |
| Number of bytes (uncompressed; for columns as they are stored in memory) INSERTed to all tables. |
| Number of rows INSERTed to MergeTree tables. |
| Number of threads waiting for a lock in Context. This is a global lock. |
| Number of rows in all table parts. |
| Number of table partitions. |
| Size of all table parts in bytes. |
| Number of table parts. |
| Total time a thread was ready for execution but waiting to be scheduled by the OS, from the OS point of view. |
| Total time spent waiting for I/O, in microseconds. |
| Number of bytes written to disks or block devices. |
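Many of the counters above come from ClickHouse's own system.events and system.metrics tables, so you can cross-check the plugin's numbers directly over the HTTP interface (port 8123 by default). A minimal sketch, assuming a local server reachable without authentication; adjust host, port, and credentials to your deployment:

```python
import urllib.parse
import urllib.request


def parse_tsv(body):
    """Parse two-column TSV rows ("name<TAB>value") into a dict of int counters."""
    result = {}
    for line in body.splitlines():
        if not line:
            continue
        name, value = line.split("\t")
        result[name] = int(value)
    return result


def fetch_events(host="localhost", port=8123):
    """Query ClickHouse's system.events table over the HTTP interface
    and return {event_name: value}. Host/port are assumptions; adjust
    to your deployment and add credentials if required."""
    query = "SELECT event, value FROM system.events FORMAT TSV"
    url = f"http://{host}:{port}/?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url) as resp:
        return parse_tsv(resp.read().decode())
```

For example, `fetch_events()["Query"]` would give the raw count behind the "Number of queries to be interpreted and potentially executed" row.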
Exception Details

| Description |
|---|
| The query that caused the exception. |
| The duration of the query that caused the exception. |
Slow Query Details

| Description |
|---|
| The query that took more than 1000 ms to execute. |
| The duration of the slow query. |
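The 1000 ms cutoff above can be reproduced against rows from ClickHouse's system.query_log table, which records each query together with its query_duration_ms. A hedged sketch, assuming the rows have already been fetched as dicts with those two fields:

```python
SLOW_QUERY_THRESHOLD_MS = 1000  # matches the documented slow-query cutoff


def find_slow_queries(records, threshold_ms=SLOW_QUERY_THRESHOLD_MS):
    """Given rows shaped like system.query_log entries (dicts with
    'query' and 'query_duration_ms'), return those exceeding the
    threshold, slowest first."""
    slow = [r for r in records if r["query_duration_ms"] > threshold_ms]
    return sorted(slow, key=lambda r: r["query_duration_ms"], reverse=True)
```

For example, a row with `query_duration_ms` of 2500 is returned while one at 3 ms is filtered out; raising `threshold_ms` narrows the report further.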