StreamsHub Console

1. StreamsHub Console overview

The StreamsHub Console provides a web interface for administering Kafka clusters. It delivers real-time insights to help you monitor, manage, and troubleshoot cluster performance.

After connecting a Kafka cluster managed by Strimzi, you can monitor cluster health and resources, inspect Kafka Connect components, and check users and groups associated with the cluster.

2. Deploying the console

Deploy the console using the dedicated operator. After installing the operator, you can create instances of the console.

For each console instance, the operator needs a Prometheus instance to collect and display Kafka cluster metrics. You can configure the console to use an existing Prometheus source. If no source is set, the operator creates a private Prometheus instance when the console is deployed. However, this default setup is not recommended for production and should only be used for development or evaluation purposes.

Connect the console to one or more Kafka clusters to provide visibility into topics, Kafka nodes, and consumer groups.

Configure the console to integrate with related services, including:

  • Authentication providers for securing access to Kafka clusters

  • Kafka Connect clusters for viewing connector and configuration details

  • Metrics providers for monitoring Kafka cluster performance

  • Schema registries for validating and decoding messages using data schemas

Define these integrations in the Console custom resource configuration YAML file.
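These integrations map to top-level sections of the Console resource. As a sketch (section contents elided; each section is detailed later in this guide):

```yaml
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security: {}             # authentication providers
  metricsSources: []       # metrics providers
  schemaRegistries: []     # schema registries
  kafkaConnectClusters: [] # Kafka Connect clusters
  kafkaClusters: []        # Kafka cluster connections
```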

2.1. Deployment prerequisites

To deploy the console, you need the following:

  • A Kubernetes 1.25 cluster.

  • The kubectl command-line tool is installed and configured to connect to the Kubernetes cluster.

  • Access to the Kubernetes cluster using an account with cluster-admin permissions, such as system:admin.

  • A Kafka cluster managed by Strimzi, running on the Kubernetes cluster.

Example files are provided for installing a Kafka cluster managed by Strimzi, along with a Kafka user representing the console. These files offer the fastest way to set up and try the console, but you can also use your own Strimzi managed Kafka deployment.

2.1.1. Using your own Kafka cluster

If you use your own Strimzi deployment, verify the configuration by comparing it with the example deployment files provided with the console.

For each Kafka cluster, the Kafka resource used to install the cluster must be configured with the following:

  • Sufficient authorization for the console to connect

  • Metrics properties for the console to be able to display certain data

    The metrics configuration must match the properties specified in the example Kafka (console-kafka) and ConfigMap (console-kafka-metrics) resources.
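As a sketch, the metrics stanza of the Kafka resource references the metrics ConfigMap along these lines (the ConfigMap key name here is an assumption based on common Strimzi examples; the provided console-kafka and console-kafka-metrics files are the authoritative versions):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: console-kafka
spec:
  kafka:
    # ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-kafka-metrics
          key: kafka-metrics-config.yml
```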

2.1.2. Deploying a new Kafka cluster

If you already have Strimzi installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started.

These resources create the following:

  • A Kafka cluster in KRaft mode with SCRAM-SHA-512 authentication.

  • A Strimzi KafkaNodePool resource to manage the cluster nodes.

  • A KafkaUser resource to enable authenticated and authorized console connections to the Kafka cluster.

The KafkaUser custom resource in the 040-KafkaUser-console-kafka-user1.yaml file includes the necessary ACL types to provide authorized access for the console to the Kafka cluster.

The minimum required ACL rules are configured as follows:

  • Describe, DescribeConfigs permissions for the cluster resource

  • Read, Describe, DescribeConfigs permissions for all topic resources

  • Read, Describe permissions for all group resources
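Using Strimzi simple authorization, those minimum rules can be sketched as follows (a minimal sketch; the shipped 040-KafkaUser-console-kafka-user1.yaml file is the authoritative version):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka
spec:
  authorization:
    type: simple
    acls:
      # Describe, DescribeConfigs on the cluster resource
      - resource:
          type: cluster
        operations: [Describe, DescribeConfigs]
      # Read, Describe, DescribeConfigs on all topics
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations: [Read, Describe, DescribeConfigs]
      # Read, Describe on all groups
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations: [Read, Describe]
```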

Note
To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution.

When configuring the KafkaUser authentication and authorization, ensure they match the corresponding Kafka configuration:

  • KafkaUser.spec.authentication should match Kafka.spec.kafka.listeners[*].authentication.

  • KafkaUser.spec.authorization should match Kafka.spec.kafka.authorization.
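For example, with a SCRAM-SHA-512 listener, the two resources align like this (fragments only; the listener name and port are illustrative):

```yaml
# Kafka resource: listener authentication
spec:
  kafka:
    listeners:
      - name: secure
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512
---
# KafkaUser resource: matching authentication type
spec:
  authentication:
    type: scram-sha-512
```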

Prerequisites
  • A Kubernetes 1.25 cluster.

  • Access to the Kubernetes cluster using an account with cluster-admin permissions, such as system:admin.

  • The kubectl command-line tool is installed and configured to connect to the Kubernetes cluster.

Procedure
  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the release page.

    The artifacts provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/kafka.

  2. Set environment variables to update the installation files:

    export NAMESPACE=kafka
    export LISTENER_TYPE=route
    export CLUSTER_DOMAIN=<domain_name>
    • NAMESPACE specifies the namespace where the Kafka operator is deployed.

    • LISTENER_TYPE specifies the listener type used to expose Kafka to the console.

    • CLUSTER_DOMAIN specifies the cluster domain name of the Kubernetes cluster.

      In this example, the namespace variable is defined as kafka and the listener type is route.

  3. Install the Kafka cluster.

    Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace:

    cat examples/kafka/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -

    This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified Kubernetes namespace.

  4. Check the status of the deployment:

    kubectl get pods -n kafka

    The output shows the operators and cluster readiness:

    NAME                              READY   STATUS   RESTARTS
    strimzi-cluster-operator          1/1     Running  0
    console-kafka-console-nodepool-0  1/1     Running  0
    console-kafka-console-nodepool-1  1/1     Running  0
    console-kafka-console-nodepool-2  1/1     Running  0
    • console-kafka is the name of the cluster.

    • console-nodepool is the name of the node pool.

      Each node created is identified by a node ID appended to the pod name.

      With the default deployment, you install three nodes.

      READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

2.2. Installing the console operator

Install the console operator using one of the following methods:

  • Using the Operator Lifecycle Manager (OLM) command line interface (CLI)

  • From the OperatorHub in the OpenShift web console (OpenShift clusters only)

  • By applying the Console RBAC, deployment, and Custom Resource Definition (CRD) resources

Note
OLM and OperatorHub install options will become available after the operator is submitted to and approved for the OperatorHub. See Issue 1526 for tracking progress.

The recommended approach is to install the operator via the Kubernetes CLI (kubectl) using the Operator Lifecycle Manager (OLM) resources. If using the OLM is not suitable for your environment, you can install the operator by applying the CRD directly.

2.2.1. Deploying the console operator using a CRD

This procedure describes how to install the StreamsHub Console operator using a Custom Resource Definition (CRD).

Procedure
  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the release page.

    The artifacts include a Custom Resource Definition (CRD) file (console-operator.yaml) to install the operator without the OLM.

  2. Set an environment variable to define the namespace where you want to install the operator:

    export NAMESPACE=operator-namespace

    In this example, the namespace variable is defined as operator-namespace.

  3. Install the console operator with the CRD.

    Use the sample installation files located in install/console-operator/non-olm. These resources install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML file:

    cat install/console-operator/non-olm/console-operator.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -

    This command reads the YAML file, replaces the namespace environment variables, and applies the resulting configuration to the specified Kubernetes namespace.

  4. Check the status of the deployment:

    kubectl get pods -n operator-namespace

    The output shows the deployment name and readiness:

    NAME              READY  STATUS   RESTARTS
    console-operator  1/1    Running  1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

  5. Use the console operator to deploy the console and connect to a Kafka cluster.

2.3. Deploying and connecting the console to a Kafka cluster

Use the console operator to deploy the StreamsHub Console to the same Kubernetes cluster as a Kafka cluster managed by Strimzi. Use the console to connect to the Kafka cluster.

Prerequisites
  • The console operator is installed.

  • A Kafka cluster managed by Strimzi is running on the Kubernetes cluster.

Procedure
  1. Create a Console custom resource in the desired namespace.

    If you deployed the example Kafka cluster provided with the installation artifacts, you can use the configuration specified in the examples/console/010-Console-example.yaml configuration file unchanged.

    Otherwise, configure the resource to connect to your Kafka cluster.

    Example console configuration
    apiVersion: console.streamshub.github.com/v1alpha1
    kind: Console
    metadata:
      name: my-console
    spec:
      hostname: my-console.<cluster_domain>
      kafkaClusters:
        - name: console-kafka
          namespace: kafka
          listener: secure
          properties:
            values: []
            valuesFrom: []
          credentials:
            kafkaUser:
              name: console-kafka-user1
    • hostname defines the hostname used to access the console over HTTP.

    • kafkaClusters.name defines the name of the Kafka resource that represents the cluster.

    • kafkaClusters.namespace specifies the namespace where the Kafka cluster is deployed.

    • kafkaClusters.listener specifies the listener used to expose the Kafka cluster for console connections.

    • kafkaClusters.properties.values optionally defines additional connection properties.

    • kafkaClusters.properties.valuesFrom optionally specifies ConfigMaps or Secrets that provide connection properties.

    • kafkaClusters.credentials.kafkaUser.name optionally specifies the Kafka user used for authenticated access to the Kafka cluster.
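    As an illustrative sketch of the optional connection properties (the name/value entry format shown here is an assumption based on the empty placeholders above; verify it against the Console CRD schema before use):

```yaml
properties:
  values:
    - name: request.timeout.ms  # standard Kafka client property
      value: "30000"
```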

  2. Apply the Console configuration to install the console.

    In this example, the console is deployed to the console-namespace namespace:

    kubectl apply -f examples/console/010-Console-example.yaml -n console-namespace
  3. Check the status of the deployment:

    kubectl get pods -n console-namespace

    The output shows the deployment name and readiness:

    NAME           READY  STATUS   RESTARTS
    console-kafka  1/1    Running  0
  4. Access the console.

    When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface.

2.3.1. Using an OIDC provider to secure access to Kafka clusters

Enable secure console connections to Kafka clusters using an OIDC provider. You can configure the console deployment to connect to any Identity Provider (IdP) that supports OpenID Connect (OIDC), such as Keycloak or Dex, and define the subjects and roles for user authorization. To use group-based authorization as shown in the examples, configure an OIDC provider that includes a group membership claim, such as groups, in the access tokens it generates. Security profiles can be configured globally for all Kafka cluster connections, and you can also add roles and rules for specific Kafka clusters.

An example configuration is provided in the following file: examples/console/console-security-oidc.yaml.

The configuration introduces the following additional properties for console deployment:

security

Properties that define the connection details for the console to connect with the OIDC provider.

subjects

Specifies the subjects (users or groups) and their roles in terms of JWT claims or explicit subject names, determining access permissions.

roles

Defines the roles and associated access rules for users, specifying which resources (like Kafka clusters) they can interact with and what operations they are permitted to perform.

Example security configuration for all clusters
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security:
    oidc:
      authServerUrl: <OIDC_discovery_URL>
      clientId: <client_id>
      clientSecret:
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret
            key: client-secret
      trustStore:
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-oidc-configmap
              key: ca.jks
        password:
          value: truststore-password
    subjects:
      - claim: groups
        include:
          - <team_name_1>
          - <team_name_2>
        roleNames:
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include:
          - <user_1>
          - <user_2>
        roleNames:
          - administrators
    roles:
      - name: developers
        rules:
          - resources:
              - kafkas
            resourceNames:
              - dev-cluster-a
              - com.example.team.*
              - /qa-cluster-[xy]/
            privileges:
              - 'ALL'
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - 'ALL'
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
  • security.oidc.authServerUrl specifies the OIDC provider issuer URI used for endpoint discovery.

  • security.oidc.clientId specifies the client ID that identifies the console to the OIDC provider.

  • security.oidc.clientSecret specifies the client secret used to authenticate the console with the OIDC provider.

  • security.oidc.clientSecret.valueFrom.secretKeyRef.name specifies the name of the Kubernetes Secret that stores the client secret.

  • security.oidc.clientSecret.valueFrom.secretKeyRef.key specifies the key in the Secret that contains the client secret value.

  • security.oidc.trustStore optionally defines a truststore used to validate the OIDC provider TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided by using either a ConfigMap or a Secret.

  • security.oidc.trustStore.password optionally specifies the truststore password. The password can be provided as a plaintext value or by reference to a Secret. Plaintext values are not recommended for production environments.

  • security.subjects defines how users or groups are mapped to roles.

    • security.subjects.claim specifies the JWT claim used to identify users or groups.

    • security.subjects.include specifies the users or groups included under the configured claim.

    • security.subjects.roleNames specifies the roles assigned to the identified users or groups.

    • When no claim is specified, security.subjects.include specifies individual users by name.

  • security.roles defines the roles available for authorization.

    • security.roles.rules.resources specifies the resource types that the role can access.

    • security.roles.rules.resourceNames specifies which resources the role can access.

      • Exact names can be specified, for example dev-cluster-a.

      • Wildcard patterns can be specified, for example com.example.team.*.

      • Regular expressions can be specified by enclosing the value in slashes, for example /qa-cluster-[xy]/.

    • security.roles.rules.privileges specifies the privileges granted to the role for the selected resources.

If you want to specify roles and rules for individual Kafka clusters, add the details under kafkaClusters[].security.roles[]. In the following example, the console-kafka cluster allows developers to list and view selected Kafka resources. Administrators can also update certain resources.

Example security configuration for an individual cluster
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
      security:
        roles:
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - GET
                  - LIST
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - GET
                  - LIST
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - UPDATE
Optional OIDC authentication properties

The following properties can be used to further configure oidc authentication. These apply to any part of the console configuration that supports authentication.oidc, such as schema registries or metrics providers.

grantType

Specifies the OIDC grant type to use. Required when using non-interactive authentication flows, where no user login is involved. Supported values:

  • CLIENT: Requires a client ID and secret.

  • PASSWORD: Requires a client ID and secret, plus user credentials (username and password) provided through grantOptions.

grantOptions

Additional parameters specific to the selected grant type. Use grantOptions to provide properties such as username and password when using the PASSWORD grant type.

oidc:
  grantOptions:
    username: my-user
    password: <my_password>
method

Method for passing the client ID and secret to the OIDC provider. Supported values:

  • BASIC: (default) Uses HTTP Basic authentication.

  • POST: Sends credentials as form parameters.

scopes

Optional list of access token scopes to request from the OIDC provider. Defaults are usually defined by the OIDC client configuration. Specify this property if access to the target service requires additional or alternative scopes not granted by default.

oidc:
  scopes:
    - openid
    - registry:read
    - registry:write
absoluteExpiresIn

Optional boolean. If set to true, the expires_in token property is treated as an absolute timestamp instead of a duration.

2.3.2. Adding Kafka Connect clusters

Integrate Kafka Connect clusters with the console to view available connectors and their configurations. You can associate one or more Kafka Connect clusters with one or more Kafka clusters that are already defined in the console configuration. The console displays Connect cluster and connector information but does not allow modification.

A placeholder for adding Connect clusters is provided in: examples/console/010-Console-example.yaml.

You can define Connect clusters globally as part of the console configuration using the kafkaConnectClusters property.

kafkaConnectClusters

Defines one or more Kafka Connect clusters that the console can connect to in order to retrieve connector information. Each entry includes a name, an endpoint url, and a list of Kafka clusters with which the Connect cluster is associated.

kafkaClusters

Lists the Kafka clusters associated with the Kafka Connect cluster. Each Kafka cluster is referenced by its <namespace>/<name> combination as defined in the kafkaClusters configuration. For standalone Kafka clusters without a namespace, specify only the cluster name.

Example Kafka Connect cluster configuration
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  kafkaConnectClusters:
    - name: my-connect-cluster
      url: http://my-connect-cluster.example.com/
      kafkaClusters:
        - kafka/console-kafka
    - name: my-mm2-cluster
      url: http://my-mm2-cluster.example.com/
      kafkaClusters:
        - ns1/kafka1
        - ns2/kafka2
  # ...
  • kafkaConnectClusters.name defines a unique name for the Kafka Connect cluster.

  • kafkaConnectClusters.url specifies the base URL of the Kafka Connect REST API endpoint.

  • kafkaConnectClusters.kafkaClusters associates the Kafka Connect cluster with one or more Kafka clusters configured in the console.

  • The second kafkaConnectClusters entry shows an example MirrorMaker 2 Connect cluster.

  • The second kafkaConnectClusters.kafkaClusters entry shows how to associate a Kafka Connect cluster with multiple Kafka clusters in different namespaces.

When the console is deployed with these settings, Connect clusters and connector details can be viewed from the Kafka Connect page of the console.

2.3.3. Enabling a metrics provider

Configure the console deployment to enable a metrics provider. You can configure one of the following sources to scrape metrics from Kafka clusters using Prometheus:

OpenShift’s built-in user workload monitoring

Use OpenShift’s workload monitoring, incorporating the Prometheus operator, to monitor console services and workloads without the need for an additional monitoring solution.

A standalone Prometheus instance

Provide the details and credentials to connect with your own Prometheus instance.

An embedded Prometheus instance (default)

Deploy a private Prometheus instance for use only by the console instance. The instance is configured to retrieve metrics from all Strimzi managed Kafka instances in the same Kubernetes cluster. Using embedded metrics is intended for development and evaluation only, not production.

Example configuration for OpenShift monitoring and a standalone Prometheus instance is provided in the following files:

  • examples/console/console-openshift-metrics.yaml

  • examples/console/console-standalone-prometheus.yaml

You can define Prometheus sources globally as part of the console configuration using metricsSources properties:

metricsSources

Declares one or more metrics providers that the console can use to collect metrics.

type

Specifies the type of metrics source. Valid options:

  • openshift-monitoring

  • standalone (external Prometheus)

  • embedded (console-managed Prometheus)

url

For standalone sources, specifies the base URL of the Prometheus instance.

authentication

(For standalone and openshift-monitoring only) Configures access to the metrics provider using basic, bearer token, or oidc authentication.

trustStore

(Optional, for standalone only) Specifies a truststore for verifying TLS certificates when connecting to the metrics provider. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.

Assign the metrics source to a Kafka cluster using the kafkaClusters.metricsSource property. The value of metricsSource is the name of the entry in the metricsSources array.

The openshift-monitoring and embedded source types require no configuration beyond the type.
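For example, an embedded (console-managed) Prometheus source needs only a name and a type, which the Kafka cluster then references by name:

```yaml
metricsSources:
  - name: my-embedded-prometheus
    type: embedded
kafkaClusters:
  - name: console-kafka
    namespace: kafka
    listener: secure
    metricsSource: my-embedded-prometheus
```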

Example metrics configuration for OpenShift monitoring
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-ocp-prometheus
      type: openshift-monitoring
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
Example metrics configuration for standalone Prometheus monitoring
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: <prometheus_instance_address>
      authentication:
        basic:
          username: my-user
          password: my-password
      trustStore:
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password:
          value: truststore-password
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-custom-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
  • metricsSources.name defines a unique name for the metrics source.

  • metricsSources.type specifies the type of metrics source. Use standalone for an external Prometheus instance.

  • metricsSources.url specifies the URL of the standalone Prometheus instance used for metrics collection.

  • metricsSources.authentication defines the authentication configuration used to access the Prometheus instance.

  • metricsSources.trustStore optionally defines a truststore used to validate the metrics provider TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided by using either a ConfigMap or a Secret.

  • metricsSources.trustStore.password optionally specifies the truststore password. The password can be provided as a plaintext value or via a Secret. Plaintext values are not recommended for production environments.

  • kafkaClusters.metricsSource associates the Kafka cluster with the configured metrics source.

2.3.4. Using a schema registry with Kafka

Integrate a schema registry with the console to centrally manage schemas for Kafka data. The console currently supports integration with Apicurio Registry to reference and validate schemas used in Kafka data streams. Requests to the registry can be authenticated using supported methods, including OIDC.

A placeholder for adding schema registries is provided in: examples/console/010-Console-example.yaml.

You can define schema registry connections globally as part of the console configuration using schemaRegistries properties:

schemaRegistries

Defines external schema registries that the console can connect to for schema validation and management.

authentication

Configures access to the schema registry using basic, bearer token, or oidc authentication.

trustStore

(Optional) Specifies a truststore for verifying TLS certificates when connecting to the schema registry. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.

Assign the schema registry source to a Kafka cluster using the kafkaClusters.schemaRegistry property. The value of schemaRegistry is the name of the entry in the schemaRegistries array.

Example schema registry configuration with OIDC authentication
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  schemaRegistries:
    - name: my-registry
      url: <schema_registry_URL>
      authentication:
        oidc:
          authServerUrl: <OIDC_discovery_URL>
          clientId: <client_id>
          clientSecret:
            valueFrom:
              secretKeyRef:
                name: my-oidc-secret
                key: client-secret
          method: POST
          grantType: CLIENT
          trustStore:
            type: JKS
            content:
              valueFrom:
                configMapKeyRef:
                  name: my-oidc-configmap
                  key: ca.jks
            password:
              value: truststore-password
      trustStore:
        type: PEM
        content:
          valueFrom:
            configMapKeyRef:
              name: my-apicurio-configmap
              key: cert-chain.pem
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      schemaRegistry: my-registry
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
  • schemaRegistries.name defines a unique name for the schema registry connection.

  • schemaRegistries.url specifies the base URL of the schema registry API, typically the REST endpoint such as http://<host>/apis/registry/v2.

  • schemaRegistries.authentication defines the authentication configuration used to access the schema registry.

  • schemaRegistries.authentication.oidc.trustStore optionally defines a truststore used to validate the OIDC provider TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided by using either a ConfigMap or a Secret.

  • schemaRegistries.authentication.oidc.trustStore.password optionally specifies the truststore password. The password can be provided as a plaintext value or via a Secret. Plaintext values are not recommended for production environments.

  • schemaRegistries.trustStore optionally defines a truststore used to validate the schema registry TLS certificate. The supported formats and content sources are the same as for the OIDC truststore.

  • kafkaClusters.schemaRegistry associates the Kafka cluster with the configured schema registry.

3. Navigating the StreamsHub Console

The console homepage lists all connected Kafka clusters. Select a cluster to access the following pages:

Cluster overview

Summarizes cluster status, key metrics, and resource usage.

Nodes

Details broker and controller nodes, including roles, status, and partition distribution.

Topics

Lists topics with configuration details, partition data, and associated groups.

Kafka Connect

Displays information about Kafka Connect clusters, including connector status, configuration details, and available plugins.

Kafka Users

Lists Kafka users associated with the selected cluster and shows the namespace for each user.

Consumer groups

Shows group activity, offsets, lag metrics, and partition assignments.

Note
If the cluster navigation menu is hidden, click the navigation menu icon (three horizontal lines) in the console header to reveal it.

3.1. Choosing how the console looks

The StreamsHub Console supports three theme options: Light, Dark, and System. When set to System, the console matches your operating system theme preference.

To change the theme, click the theme icon (sun or moon) in the console header to open the theme selector.

4. HOME: Checking connected clusters

The homepage offers a snapshot of connected Kafka clusters, providing information on the Kafka version and associated project for each cluster. To find more information, log in to a cluster.

4.1. Logging in to a Kafka cluster from the homepage

The console supports authenticated user login to a Kafka cluster using SCRAM-SHA-512 and OAuth 2.0 authentication mechanisms. For secure login, authentication must be configured in your Strimzi managed Kafka cluster.

Note
If authentication is not set up for a Kafka cluster, or if credentials were provided through the Kafka sasl.jaas.config property (which defines SASL authentication settings) in the console configuration, you can log in to the cluster anonymously, without authentication.
Prerequisites
  • You must have access to a Kubernetes cluster.

  • The console must be deployed and set up to connect to a Kafka cluster.

  • For secure login, you must have appropriate authentication settings for the Kafka cluster and user.

    SCRAM-SHA-512 settings
    • Listener authentication set to scram-sha-512 in Kafka.spec.kafka.listeners[*].authentication.

    • Username and password configured in KafkaUser.spec.authentication.

    OAuth 2.0 settings
    • An OAuth 2.0 authorization server with client definitions for the Kafka cluster and users.

    • Listener authentication set to oauth in Kafka.spec.kafka.listeners[*].authentication.

For more information on configuring authentication, see the Strimzi documentation.
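
The SCRAM-SHA-512 prerequisites map to Strimzi resource settings along these lines. Cluster and user names are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: secure
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512   # enables SCRAM login on this listener
  # ...
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512         # the operator generates the password in a Secret
```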

Procedure
  1. On the homepage, optionally filter the list of clusters by name, then click Login to cluster for a selected Kafka cluster.

  2. Enter login credentials depending on the authentication method used.

    • For SCRAM-SHA-512, enter the username and password associated with the KafkaUser.

    • For OAuth 2.0, provide a client ID and client secret that are valid for the OAuth provider configured for the Kafka listener.

  3. To end your session, click your username and select Logout, or return to the homepage.

5. Cluster overview page

The Cluster overview page shows the status of a Kafka cluster. Use this page to assess broker readiness, identify cluster errors or warnings, and monitor overall health.

At a glance, the page displays:

  • Number of topics and partitions

  • Replication status

  • Cluster metrics:

    • Used disk space

    • CPU utilization

    • Memory usage

  • Topic metrics:

    • Total incoming byte rate

    • Total outgoing byte rate

Metrics are presented as interactive charts.

  • For Cluster metrics, use the dropdown menu to view data for a specific node or for all nodes.

  • For Topic metrics, use the dropdown menu to view data for a specific topic or for all topics.

You can adjust the time range for the charts, from the last 5 minutes up to 7 days.

To view more information, click the broker and topic-related links.

5.1. Pausing reconciliation of clusters

Pause cluster reconciliation from the Cluster overview page. While reconciliation is paused, changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console.

    On the Cluster overview page, click Pause reconciliation.

  2. Confirm the pause. The Cluster overview page then shows a status warning indicating that reconciliation is paused.

  3. Click Resume reconciliation to restart reconciliation.

    Note
    If the status change is not displayed after pausing reconciliation, try refreshing the page.
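
Pausing reconciliation in the console corresponds to setting the strimzi.io/pause-reconciliation annotation on the Kafka custom resource, for example:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster   # illustrative cluster name
  annotations:
    strimzi.io/pause-reconciliation: "true"   # remove or set to "false" to resume
spec:
  # ...
```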

5.2. Accessing cluster connection details for client access

Retrieve the necessary connection details from the Cluster overview page to connect a client to a Kafka cluster.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console.

    On the Cluster overview page, click Cluster connection details.

  2. Copy the bootstrap address (external or internal, depending on your client environment).

  3. Add any required connection properties to your Kafka client configuration to establish a secure connection.

    Note
    Ensure that the authentication type configured for the Kafka cluster matches the authentication type used by the client.

6. Topics page

The Topics page lists all topics created for a Kafka cluster. You can filter the list by topic name, ID, or status.

The Topics page shows the overall replication status for partitions in the topic, as well as counts for the partitions in the topic and the number of associated consumer groups. The overall storage used by the topic is also shown.

Warning
Internal topics must not be modified. You can choose to hide internal topics from the list of topics returned on the Topics page.

Click on a topic name to view additional topic information presented on a series of tabs:

Messages

Messages shows the message log for a topic.

Partitions

Partitions shows the replication status of each partition in a topic.

Consumer groups

Consumer groups lists the names and status of the consumer groups and group members connected to a topic.

Configuration

Configuration shows the configuration of a topic.

If a topic is shown as Managed, it means that it is managed using the Strimzi Topic Operator rather than created directly in the Kafka cluster.

Use the information provided on the tabs to check and modify the configuration of your topics.

6.1. Creating topics

Create topics from the Topics page.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics and Create topic.

  2. Set core configuration, such as the name of the topic, and the number of topic partitions and replicas.

  3. (Optional) Specify additional configuration, such as the following:

    • Size-based and time-based message retention policies

    • Maximum message size and compression type

    • Log indexing, and cleanup and flushing of old data

  4. Review your topic configuration, then click Create topic.

    The topics are created directly in the Kafka cluster without using KafkaTopic custom resources. If you are using the Topic Operator to manage topics in unidirectional mode, create the topics using KafkaTopic resources outside the console.

    For more information on topic configuration properties, see the Apache Kafka documentation.

    Note
    For topic replication, partition leader elections can be clean or unclean. Clean leader election means that out-of-sync replicas cannot become leaders. If no in-sync replicas are available, Kafka waits until the original leader is back online before messages are picked up again.
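
If you manage topics with the Topic Operator, the same core and optional settings are expressed in a KafkaTopic resource. A sketch with illustrative names and values:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster        # target Kafka cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000               # time-based retention (7 days)
    retention.bytes: 1073741824           # size-based retention (1 GiB)
    max.message.bytes: 1048588            # maximum message size
    compression.type: producer
    min.insync.replicas: 2
    unclean.leader.election.enable: false # clean leader election
```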

6.2. Deleting topics

Delete topics from the Topics page.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. Select the options icon (three vertical dots) for the relevant topic and click Delete.

  3. Enter the topic name to confirm the deletion.

6.3. Checking topic messages

Track the flow of messages for a specific topic from the Messages tab. The Messages tab presents a chronological list of messages for a topic.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. On the Topics page, click the name of the topic you want to inspect.

  3. Check the information on the Messages tab.

    For each message, view the timestamp (in UTC), offset, key, and value.

    Click on a message to view the full message details.

    Click the Manage columns icon (a two-column icon) to choose the information to display.

  4. Click the search dropdown menu and select the advanced search options to refine your search.

    Choose to display the latest messages or messages from a specified time or offset. You can display messages for all partitions or a specified partition.

    When you are done, you can click the CSV icon to download the returned messages as a CSV file.

    Refining your search

    In this example, a search term is combined with message, retrieval, and partition options:

    • messages=timestamp:2024-03-01T00:00:00Z retrieve=50 partition=1 Error on page load where=value

      The filter searches for the text "Error on page load" in partition 1 as a message value, starting from March 1, 2024, and retrieves up to 50 messages.

    Search terms

    Enter search terms as text (has the words) to find specific matches and define where in a message to look for the term. You can search anywhere in the message or narrow the search to the key, header, or value.

    For example:

    • messages=latest retrieve=100 642-26-1594 where=key

    This example searches the message keys of the latest 100 messages for 642-26-1594.

    Message options

    Set the starting point for returning messages.

    • Latest messages to start from the latest message.

      • messages=latest

    • From offset to start from an offset in a partition. In some cases, you may want to specify an offset without a partition. However, the most common scenario is to search by offset within a specific partition.

      • messages=offset:5600253 partition=0

    • From timestamp to start from an exact time and date in ISO 8601 format.

      • messages=timestamp:2024-03-14T00:00:00Z

    • From Unix timestamp to start from a time and date in Unix format.

      • messages=epoch:1

    Retrieval options

    Set a retrieval option.

    • Number of messages to return a specified number of messages.

      • messages=latest retrieve=50

    • Continuously to return the latest messages in real time. Click the pause button (represented by two vertical lines) to pause the refresh. Unpause to continue the refresh.

      • retrieve=continuously

    Partition options

    Choose to run a search against all partitions or a specific partition.

6.4. Checking topic partitions

Check the partitions for a specific topic from the Partitions tab. The Partitions tab lists the partitions belonging to a topic.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. On the Topics page, click the name of the topic you want to inspect.

  3. Check the information on the Partitions tab.

    For each partition, view the replication status, as well as information on designated partition leaders, replica brokers, and the amount of data stored by the partition.

    Replication status:

    In-sync

    All partitions in the topic are fully replicated. A partition is fully replicated when its replicas (followers) are 'in-sync' with the designated partition leader. Replicas are 'in-sync' if they have fetched records up to the log end offset of the leader partition within an allowable lag time, as determined by replica.lag.time.max.ms.

    Under-replicated

    A partition is under-replicated if some of its replicas (followers) are not in-sync. An under-replicated status signals potential issues in data replication.

    Offline

    Some or all partitions in the topic are currently unavailable. This may be due to issues such as broker failures or network problems, which need investigating and addressing.

    In addition, check information on the broker designated as partition leader and the brokers that contain the replicas:

    Leader

    The leader handles all produce requests. Followers on other brokers replicate the leader’s data. A follower is considered in-sync if it catches up with the leader’s latest committed message.

    Preferred leader

    When creating a new topic, Kafka’s leader election algorithm assigns a leader from the list of replicas for each partition, aiming for a balanced spread of leadership assignments. A "Yes" value indicates that the current leader is the preferred leader, suggesting a balanced leadership distribution. A "No" value may indicate an imbalance in leadership assignments, which requires further investigation. Unbalanced leadership assignments can contribute to uneven load between brokers. A well-balanced Kafka cluster distributes leadership roles evenly across brokers.

    Replicas

    Followers that replicate the leader’s data. Replicas provide fault tolerance and data availability.

    Note
    Discrepancies in the distribution of data across brokers may indicate balancing issues in the Kafka cluster. If certain brokers are consistently handling larger amounts of data, it may indicate that partitions are not evenly distributed across the brokers. This could lead to uneven resource utilization and potentially impact the performance of those brokers.

6.5. Checking topic consumer groups

Check the consumer groups for a specific topic from the Consumer groups tab. The Consumer groups tab lists the consumer groups associated with a topic.

Note
Monitoring consumer group behavior is essential for ensuring optimal distribution of messages between consumers.
Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. On the Topics page, click the name of the topic you want to inspect.

  3. Check the information on the Consumer groups tab.

  4. Click a consumer group name to view consumer group members.

    Consumer groups

    For each consumer group, view the status, the overall consumer lag across all partitions, and the number of members. For more information on checking consumer groups, see Consumer Groups page.

    Group members

    For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions. For more information on checking consumer group members, see Checking consumer group members.

6.6. Checking topic configuration

Check the configuration of a specific topic from the Configuration tab. The Configuration tab lists the configuration values for the topic.

Tip
The Strimzi Topic Operator simplifies the process of creating and managing Kafka topics using KafkaTopic resources.
Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. On the Topics page, click the name of the topic you want to inspect.

  3. Check the information on the Configuration tab.

    Filter the configuration properties to narrow results by data source:

    • DEFAULT_CONFIG: The fallback value used when no other configuration is specified.

    • STATIC_BROKER_CONFIG: Predefined, broker-wide values that apply to all topics by default.

    • DYNAMIC_TOPIC_CONFIG: Topic-specific values that override all default or broker-level settings.

6.7. Changing topic configuration

Change the configuration of a specific topic from the Configuration tab. The Configuration tab lists the configuration options for the topic.

The topics are configured directly in the Kafka cluster. If you are using the Topic Operator to manage topics in unidirectional mode, configure the topics using KafkaTopic resources outside the console.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Topics.

  2. Select the options icon (three vertical dots) for the relevant topic and click Edit configuration. Or you can click the name of the topic you want to configure from the Topics page and click the Configuration tab.

  3. Edit the configuration by updating individual property values.

    Filter the configuration properties to narrow results by data source:

    • DEFAULT_CONFIG: The fallback value used when no other configuration is specified.

    • STATIC_BROKER_CONFIG: Predefined, broker-wide values that apply to all topics by default.

    • DYNAMIC_TOPIC_CONFIG: Topic-specific values that override all default or broker-level settings.

7. Nodes page

The Nodes page lists all nodes created for a Kafka cluster, including nodes that perform broker, controller, or dual roles. You can filter the list by node pool, role (broker or controller), or status.

For broker nodes, partition distribution across the cluster is shown, including the number of partition leaders and followers.

Broker status is shown as one of the following:

Not Running

The broker has not yet been started or has been explicitly stopped.

Starting

The broker is initializing and connecting to the cluster. It is discovering and joining the metadata quorum.

Recovery

The broker has joined the cluster but is in recovery mode. It is replicating necessary data and metadata before becoming fully operational. It is not yet serving client requests.

Running

The broker is fully operational. It is registered with the controller and serving client requests.

Pending Controlled Shutdown

The broker has initiated a controlled shutdown. It will shut down gracefully after completing in-flight operations.

Shutting Down

The broker is shutting down. Client connections are closing, and internal resources are being released.

Unknown

The broker’s state is unknown, possibly due to an unexpected error or failure.

If the broker has a rack ID, it identifies the rack or datacenter in which the broker resides.

Controller status is shown as one of the following, describing the controller’s role within the metadata quorum:

Quorum leader

The controller is the active leader, coordinating cluster metadata updates and managing operations like partition reassignments and broker registrations.

Quorum follower

The controller is a follower in the metadata quorum, passively replicating updates from the leader while maintaining a synchronized state. It is ready to take over as the leader if needed.

Quorum follower lagged

The controller is a follower but has fallen behind the leader. It is not fully up to date with the latest metadata and may be ineligible for leader election until it catches up.

Unknown

The controller’s state is unknown, possibly due to an unexpected error or failure.

To view more information:

  • Click on the right arrow (>) next to a node name to view more information about the node, including its hostname and disk usage.

  • Click on a broker node ID to view configuration properties.

  • Click on the Rebalance tab to show any rebalances taking place on the cluster.

Note
Consider rebalancing if partition distribution is uneven to ensure efficient resource utilization.

7.1. Checking broker configuration

Check the configuration of a specific broker by clicking on a broker node ID. The broker page lists the configuration values for the broker.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Kafka nodes.

  2. On the Nodes page, click the name of the broker you want to inspect.

  3. Check the information on the broker page.

    Filter the configuration properties to narrow results by data source:

    • DEFAULT_CONFIG: The fallback value used when no other configuration is specified.

    • STATIC_BROKER_CONFIG: Broker-wide values set in the broker’s static configuration.
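
For a Strimzi managed cluster, broker-wide values originate from the Kafka.spec.kafka.config section of the Kafka resource. The property values here are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      default.replication.factor: 3   # broker-wide settings applied to all brokers
      min.insync.replicas: 2
      replica.lag.time.max.ms: 30000
  # ...
```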

7.2. Managing rebalances

When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can check their status from the Rebalance tab. The Rebalance tab presents a chronological list of KafkaRebalance resources from which you can manage the optimization proposals. Filter the list by name, status, or rebalance mode.

Note
Cruise Control must be enabled to run alongside the Kafka cluster in order to use the Rebalance tab. For more information on setting up and using Cruise Control to generate proposals, see the Strimzi documentation.
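
For a Strimzi managed cluster, Cruise Control is enabled by adding the cruiseControl section to the Kafka resource; an empty object enables it with default settings:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster    # illustrative cluster name
spec:
  cruiseControl: {}   # enables Cruise Control with defaults
  # ...
```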
Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Kafka nodes.

  2. Check the information on the Rebalance tab.

    For each rebalance, view the status and the time it was last updated.

    Table 1. Rebalance status descriptions

    New

    Resource has not been observed by the operator before

    PendingProposal

    Optimization proposal not yet generated

    ProposalReady

    Optimization proposal is ready for approval

    Rebalancing

    Rebalance in progress

    Stopped

    Rebalance stopped

    NotReady

    Error occurred with the rebalance

    Ready

    Rebalance complete

    ReconciliationPaused

    Rebalance is paused

    Note
    The status of the KafkaRebalance resource changes to ReconciliationPaused when the strimzi.io/pause-reconciliation annotation is set to true in its configuration.
  3. Click on the right arrow (>) next to a rebalance name to view more information about the rebalance, including its rebalance mode and whether auto-approval is enabled. If the rebalance involved brokers being removed or added, they are also listed.

    Optimization proposals can be generated in one of three modes:

    • full is the default mode and runs a full rebalance.

    • add-brokers is the mode used after adding brokers when scaling up a Kafka cluster.

    • remove-brokers is the mode used before removing brokers when scaling down a Kafka cluster.

    If auto-approval is enabled for a proposal, a successfully generated proposal goes straight into a cluster rebalance.

    Viewing optimization proposals

    Click on the name of a KafkaRebalance resource to view a generated optimization proposal. An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers.

    For more information on the properties shown on the proposal and what they mean, see the Strimzi documentation.

    Managing rebalances

    Select the options icon (three vertical dots) and click on an option to manage a rebalance.

    • Click Approve to approve a proposal. The rebalance outlined in the proposal is performed on the Kafka cluster.

    • Click Refresh to generate a fresh optimization proposal. If there has been a gap between generating a proposal and approving it, refresh the proposal so that the current state of the cluster is taken into account with a rebalance.

    • Click Stop to stop a rebalance. Rebalances can take a long time and may impact the performance of your cluster. Stopping a rebalance can help avoid performance issues and allow you to revert changes if needed.

    Note
    The options available depend on the status of the KafkaRebalance resource. For example, it’s not possible to approve an optimization proposal if it’s not ready.
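
The rebalance modes and auto-approval setting described above appear in the KafkaRebalance resource itself. A sketch for a remove-brokers rebalance with auto-approval enabled, using illustrative names:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster              # target Kafka cluster
  annotations:
    strimzi.io/rebalance-auto-approval: "true"  # proposal is applied without manual approval
spec:
  mode: remove-brokers
  brokers: [3, 4]   # brokers being scaled down
```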

8. Kafka Connect page

The Kafka Connect page lists Kafka Connect connectors and clusters associated with a Kafka cluster.

You can filter each list by name.

Connectors list

Shows the state of each connector and the number of tasks it’s running, as well as the associated Kafka Connect cluster.

Clusters list

Shows the Kafka Connect version, with the number of workers (or replicas) in the cluster.

If a connector is shown as Managed, it means that it is managed using Strimzi.

Use the information provided on the tabs to check the configuration of your Kafka Connect environment. Click on a name to view additional information on specific connectors and clusters.

8.1. Checking Kafka Connect connectors

Track the status of a specific Kafka Connect connector from the Connectors tab. The Connectors tab lists the connectors associated with a Kafka Connect cluster.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Kafka Connect.

  2. On the Kafka Connect page, click the name of the connector you want to inspect.

  3. Review the connector details:

    Connect worker ID

    Identifies the Kafka Connect worker that runs the connector.

    State

    Shows the current status of the connector, such as Running, Paused, or Failed.

    Topics

    Lists the Kafka topics used by the connector to produce or consume data.

    Class

    Displays the Java class that implements the connector logic.

    Type

    Indicates whether the connector is a source or sink connector.

    Max tasks

    Specifies the maximum number of concurrent tasks that the connector can run.

  4. Click the Connectors and Configuration tabs to view the following information:

    Connectors tab:

    Task ID

    Identifies a specific task instance of the connector.

    State

    Shows the current status of the task, such as Running, Paused, or Failed.

    Worker ID

    Identifies the Kafka Connect worker that runs the task.

    Configuration tab: Displays the property values that define how the connector is configured.

8.2. Checking Kafka Connect clusters

Track the status of a specific Kafka Connect cluster from the Connect clusters tab. The Connect clusters tab lists the Kafka Connect clusters associated with the Kafka cluster.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Kafka Connect.

  2. On the Kafka Connect page, click the Connect clusters tab, then click the name of the cluster you want to inspect.

    The page displays the number of workers used to run connectors and their tasks. This information is available only for Kafka Connect clusters managed by Strimzi.

  3. Click the Connectors and Plugins tabs to view the following information:

    Connectors tab:

    Name

    Displays the name of the connector deployed in the cluster.

    Type

    Indicates whether the connector is a source or sink connector.

    State

    Shows the current status of the connector, such as Running, Paused, or Failed.

    Replicas

    Displays the number of task replicas currently running for the connector.

    Plugins tab:

    Class

    Displays the fully qualified Java class name of the connector implementation.

    Type

    Indicates the connector type.

    Version

    Shows the plugin version provided by the Kafka Connect runtime.

    Plugins provide the implementation classes used to create connectors. The plugins list includes plugins for standard source and sink connectors, as well as the connectors used by Kafka MirrorMaker.

9. Kafka Users page

The Kafka Users page lists Kafka users associated with a Kafka cluster. It provides visibility of user authentication settings and configured access control lists (ACLs).

You can search by username to filter the list of users.

The list displays the following information for each user:

Name

The name of the Kafka user resource.

Namespace

The Kubernetes namespace where the user is defined.

Creation Time

The date and time the user was created.

Username

The Kafka username.

Authentication

The configured authentication type, such as scram-sha-512.

Click a username to view detailed information about the selected user, including authentication settings and any configured authorization rules (ACLs).

9.1. Checking Kafka users

View authentication details and authorization rules for a specific Kafka user from the Kafka Users page.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, and click Kafka Users.

  2. (Optional) Enter a username in the search field to filter the list.

  3. Click the username of the user you want to inspect.

  4. Review the user details:

    Name

    The name of the Kafka user resource.

    Username

    The Kafka username.

    Authentication

    The configured authentication type.

    Namespace

    The namespace where the user is defined.

    Creation Time

    The date and time the user was created.

  5. Review the Authorization section:

    • If no rules are configured, the page displays No authorization rules.

    • If authorization rules are configured, a table lists the details of each configured access control rule. Each rule specifies the resources and operations to which it applies, and whether access is allowed or denied.

      Type

      The resource type, such as topic, group, or cluster.

      Name

      The name of the resource. For cluster resource types, no name is specified.

      Pattern type

      The pattern used to match the resource name. literal matches the exact name specified. prefix applies the rule to all resources with names that start with the specified value. With the literal pattern type, the name can be set to * to apply the rule to all resources.

      Host

      The host to which the rule applies. An asterisk (*) indicates any host.

      Operations

      One or more operations to which the rule applies, shown as a comma-separated list, such as Read or Describe. Supported operations depend on the resource type.

      Permission

      The permission type, such as allow or deny.
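
The fields described above map to entries in KafkaUser.spec.authorization. For example, a rule allowing Read and Describe on all topics with a given prefix (names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple                # simple authorization uses Kafka ACLs
    acls:
      - resource:
          type: topic
          name: orders-         # matched as a prefix
          patternType: prefix
        operations:
          - Read
          - Describe
        host: "*"               # any host
        type: allow
```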

10. Consumer Groups page

The Consumer Groups page lists all consumer groups associated with a Kafka cluster. You can filter the list by consumer group name or status.

For each consumer group, view the status, the overall consumer lag across all partitions, and the number of members. Click on associated topics to show the topic information available from the Topics page tabs.

Consumer group status can be one of the following:

  • Stable indicates normal functioning.

  • Rebalancing indicates ongoing adjustments to the consumer group’s members.

  • Empty indicates that the group has no active members. If the group is in the empty state, consider adding members to the group.

Click on a consumer group name to check group members. Select the options icon (three vertical dots) against a consumer group to reset consumer offsets.

10.1. Checking consumer group members

Check the members of a specific consumer group from the Consumer Groups page.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Consumer Groups.

  2. On the Consumer Groups page, click the name of the consumer group you want to inspect.

  3. Click on the right arrow (>) next to a member ID to view the topic partitions a member is associated with, as well as any possible consumer lag.

    For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions.

    Any consumer lag for a specific topic partition reflects the gap between the last message a consumer has picked up (committed offset position) and the latest message written by the producer (end offset position).

10.2. Resetting consumer offsets

Reset the consumer offsets of a specific consumer group from the Consumer Groups page.

You might want to do this when reprocessing old data, skipping unwanted messages, or recovering from downtime.

Prerequisites

All active members of the consumer group must be shut down before resetting the consumer offsets.

Procedure
  1. Log in to the Kafka cluster in the StreamsHub Console, then click Consumer Groups.

  2. Click the options icon (three vertical dots) for the consumer group and click Reset consumer offsets to display the Reset consumer offset page.

  3. Choose to apply the offset reset to all consumer topics associated with the consumer group or select a specific topic.

    If you selected a topic, choose to apply the offset reset to all partitions or select a specific partition.

  4. Choose the position to reset the offset:

    • Custom offset (available only if you selected a specific topic and a specific partition). If you select this option, enter the custom offset value.

    • Latest offset

    • Earliest offset

    • Specific date and time. If you select this option, choose the appropriate format and enter the date and time in that format.

    • Delete committed offsets (available only if you selected a specific topic).

      Important
      The Delete committed offsets option removes the stored offsets from the consumer group. When the group resumes, offset behavior depends on the consumer configuration (for example, the auto.offset.reset setting). Use this option with caution.
  5. (Optional) Perform a dry run before applying the offset reset.

    1. Click the down arrow next to Dry run.

    2. Choose the option to run the dry run and view the results in the console, or copy the command to run it independently against the consumer group.

      The results display the new offsets for each topic partition included in the reset operation. A download option is available for the results.

  6. Click Reset to perform the offset reset.