
Confluent Cloud Kafka Monitoring

Overview

The Confluent Cloud Kafka plugin is used to monitor the Kafka service provided by Confluent Cloud.

Prerequisites

To collect metrics from Confluent Cloud Kafka, you must have an IAM role with sfPoller set up in your cloud environment. Refer to the sfPoller setup documentation to learn more about setting up sfPoller in your cloud environment.

Create a sfPoller Account

  1. Go to the Manage tab of sfPoller and click the Add button.

  2. In the Add Cloud Account window, enter the following details and save.

    • Account Type: Select Confluent-cloud as the account type
    • Name: Give the account a unique name. Example: kafka-cloud
    • API Key: Enter the API key from your Confluent Cloud account
    • API Secret: Enter the API secret from your Confluent Cloud account
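The API key and secret entered above authenticate against the Confluent Cloud Metrics API using HTTP Basic auth. As a minimal sketch (the descriptors URL is Confluent's public Metrics API endpoint; the helper function names are illustrative, not part of sfPoller), you could verify a key pair independently before saving the account:

```python
import base64
import urllib.error
import urllib.request

def confluent_auth_header(api_key: str, api_secret: str) -> dict:
    # Confluent Cloud's Metrics API expects HTTP Basic auth built from
    # the Cloud API key and secret.
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def check_credentials(api_key: str, api_secret: str) -> bool:
    # The descriptors endpoint lists the metrics available for Kafka
    # resources; a 200 response confirms the key pair is valid.
    url = ("https://api.telemetry.confluent.cloud"
           "/v2/metrics/cloud/descriptors/metrics?resource_type=kafka")
    req = urllib.request.Request(url, headers=confluent_auth_header(api_key, api_secret))
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A failed check usually means the key pair was created for the wrong resource or the secret was copied incorrectly.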

Configure sfPoller to Collect Metrics

Follow the steps below to add endpoints and plugins:

  1. In the Application tab of sfPoller, navigate to your Project > Application.
  2. Click the Application; this opens the Endpoint page.
  3. Click the Add Endpoint button, enter the following details, and save.
    • Service Type: Confluent-Cloud Service
    • Account Name: kafka-cloud
    • Endpoint Type: Confluent Cloud Kafka
    • Name: Give the endpoint a unique name
    • Cluster ID: Enter the cluster ID from Confluent Cloud
    • Connector Details: Click the +Add button, then add the name and ID of the connector.
  4. In the Plugins window, click the +Add button.
  5. In the Add Plugin window, select the following details and save.
    • Plugin Type: Select Metric as the plugin type
    • Plugin: Select confluent-cloud-kafka as the plugin
    • Interval: Choose an interval value. The minimum interval is 300 seconds.
    • Status: By default, the status is Enabled
  6. Click the global Save button in the top-right corner of the window to save all the changes made so far.
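The settings collected in the steps above amount to a small plugin configuration with a lower bound on the polling interval. A minimal sketch of that configuration and its validation, assuming illustrative names (the class and field names below are our own, not sfPoller's internal API):

```python
from dataclasses import dataclass

MIN_INTERVAL_S = 300  # minimum polling interval accepted by the plugin, in seconds

@dataclass
class PluginConfig:
    # Defaults mirror the values chosen in the Add Plugin window.
    plugin_type: str = "Metric"
    plugin: str = "confluent-cloud-kafka"
    interval: int = MIN_INTERVAL_S
    enabled: bool = True  # status defaults to Enabled

    def validate(self) -> None:
        # Reject intervals below the documented minimum.
        if self.interval < MIN_INTERVAL_S:
            raise ValueError(
                f"interval must be >= {MIN_INTERVAL_S} seconds, got {self.interval}"
            )
```

For example, `PluginConfig(interval=60).validate()` raises a `ValueError`, while the defaults pass validation.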

View Confluent Cloud Kafka Metrics

Follow the steps below to view the metrics collected from Confluent Cloud Kafka.

  1. Go to the Application tab in SnappyFlow and navigate to your Project > Application > Dashboard.

  2. You can view the Kafka metrics in the Metrics section.

note

Once plugins are added to sfPoller, they are automatically detected in the Metrics section. However, if the plugins are not detected, you can import templates to view the corresponding metrics.

  1. To access the unprocessed data gathered from the plugins, navigate to the Browse data section and choose the Index: Metric, Instance: Endpoint, Plugin, and Document Type.

Template Details

| Template | Plugin | Document Type | Description |
| --- | --- | --- | --- |
| confluent-cloud-kafka | confluent-cloud-kafka | ClusterMetrics, Connector Metrics | Monitors the Kafka service provided by Confluent Cloud. |

Metric List

Cluster Metrics

| Metric | Description |
| --- | --- |
| PartitionCount | The number of partitions. |
| ActiveConnectionCount | The count of active authenticated connections. |
| SuccessfulAuthenticationCount | The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds. |
| RequestCount | The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds. |
| BytesInPerSec | The delta count of bytes received over the network. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds. |
| BytesOutPerSec | The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| RecordsSentPerSec | The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
| RecordsReceivedPerSec | The delta count of records received. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds. |
| BytesRetained | The current count of bytes retained by the cluster. The count is sampled every 60 seconds. |
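These cluster metrics are served by the Confluent Cloud Metrics API, where each metric has a fully qualified descriptor name (for example, `io.confluent.kafka.server/received_bytes`; the exact mapping to the table names above is an assumption here, not taken from this document). A sketch of building a query body for one metric, scoped to a cluster ID, against the API's `POST /v2/metrics/cloud/query` endpoint:

```python
import json

def build_metrics_query(metric: str, cluster_id: str, interval: str) -> str:
    """Build a Confluent Cloud Metrics API v2 query body for one metric,
    filtered to a single Kafka cluster ID (e.g. "lkc-...")."""
    body = {
        "aggregations": [{"metric": metric}],
        "filter": {"field": "resource.kafka.id", "op": "EQ", "value": cluster_id},
        "granularity": "PT1M",  # matches the 60-second sampling described above
        "intervals": [interval],  # ISO-8601 interval, e.g. "2024-01-01T00:00:00Z/PT1H"
    }
    return json.dumps(body)
```

Usage (the cluster ID is a placeholder): `build_metrics_query("io.confluent.kafka.server/received_bytes", "lkc-abc123", "2024-01-01T00:00:00Z/PT1H")` produces the JSON payload to POST with the same Basic auth credentials configured for the account.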

Connector Metrics

| Metric | Description |
| --- | --- |
| BytesInPerSec | The delta count of total bytes received by the sink connector. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds. |
| BytesOutPerSec | The delta count of total bytes sent from the transformations and written to Kafka for the source connector. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
| RecordSentPerSec | The delta count of total records sent from the transformations and written to Kafka for the source connector. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
| RecordReceivedPerSec | The delta count of total records received by the sink connector. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds. |
| DeadLetterQueueRecords | The delta count of dead letter queue records written to Kafka for the sink connector. The count is sampled every 60 seconds. |
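Note that despite the PerSec suffix, these metrics are delta counts over a 60-second sample window. To display an actual per-second rate, divide each sample by the sample period:

```python
SAMPLE_PERIOD_S = 60  # each delta count covers one 60-second window

def per_second_rate(delta_count: float, period_s: int = SAMPLE_PERIOD_S) -> float:
    """Convert a delta count from one sample window into a per-second rate."""
    return delta_count / period_s

# e.g. 12,000 bytes received in one 60 s window is 200 bytes/s
```

The same conversion applies to the cluster metrics above, since they share the 60-second sampling interval.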