1. StreamsHub Console overview
The StreamsHub Console provides a user interface for administering Kafka clusters, delivering real-time insights for monitoring, managing, and optimizing each cluster.
Connect a Kafka cluster managed by Strimzi to gain real-time insights and optimize cluster performance. The console’s homepage displays connected Kafka clusters, allowing you to access detailed information on components such as brokers, topics, partitions, and consumer groups.
From the console, you can view the status of a Kafka cluster before navigating to view information on the cluster’s brokers and topics, or the consumer groups connected to the Kafka cluster.
2. Deploying the console
Deploy the console using the dedicated operator. After installing the operator, you can create instances of the console.
For each console instance, the operator needs a Prometheus instance to collect and display Kafka cluster metrics. You can configure the console to use an existing Prometheus source. If no source is set, the operator creates a private Prometheus instance when the console is deployed. However, this default setup is not recommended for production and should only be used for development or evaluation purposes.
2.1. Deployment prerequisites
To deploy the console, you need the following:
A Kubernetes 1.25 cluster.
The kubectl command-line tool is installed and configured to connect to the Kubernetes cluster.
Access to the Kubernetes cluster using an account with cluster-admin permissions, such as system:admin.
A Kafka cluster managed by Strimzi, running on the Kubernetes cluster.
Example files are provided for installing a Kafka cluster managed by Strimzi, along with a Kafka user representing the console. These files offer the fastest way to set up and try the console, but you can also use your own Strimzi managed Kafka deployment.
2.1.1. Using your own Kafka cluster
If you use your own Strimzi deployment, verify the configuration by comparing it with the example deployment files provided with the console.
For each Kafka cluster, the Kafka resource used to install the cluster must be configured with the following:
Sufficient authorization for the console to connect
Metrics properties for the console to be able to display certain data
The metrics configuration must match the properties specified in the example Kafka (console-kafka) and ConfigMap (console-kafka-metrics) resources.
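For reference, the metrics wiring in your own Kafka resource typically looks like the following sketch. The structure follows the Strimzi metricsConfig API, but the ConfigMap key name is an assumption; check it against the example console-kafka and console-kafka-metrics files.
# Excerpt from a Kafka resource; the key name is illustrative.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-kafka-metrics
          key: kafka-metrics-config.yml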
2.1.2. Deploying a new Kafka cluster
If you already have Strimzi installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started.
These resources create the following:
A Kafka cluster in KRaft mode with SCRAM-SHA-512 authentication.
A Strimzi KafkaNodePool resource to manage the cluster nodes.
A KafkaUser resource to enable authenticated and authorized console connections to the Kafka cluster.
The KafkaUser custom resource in the 040-KafkaUser-console-kafka-user1.yaml file includes the necessary ACL types to provide authorized access for the console to the Kafka cluster.
The minimum required ACL rules are configured as follows:
Describe and DescribeConfigs permissions for the cluster resource
Read, Describe, and DescribeConfigs permissions for all topic resources
Read and Describe permissions for all group resources
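A minimal sketch of how these rules might be expressed in a KafkaUser resource is shown below. It follows the Strimzi simple authorization syntax; the exact layout in the provided 040-KafkaUser-console-kafka-user1.yaml file may differ, so treat the names and pattern types as illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # Describe and DescribeConfigs on the cluster resource
      - resource:
          type: cluster
        operations: [Describe, DescribeConfigs]
      # Read, Describe, and DescribeConfigs on all topics
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations: [Read, Describe, DescribeConfigs]
      # Read and Describe on all groups
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations: [Read, Describe]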
Note | To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution. |
When configuring the KafkaUser authentication and authorization, ensure they match the corresponding Kafka configuration:
KafkaUser.spec.authentication should match Kafka.spec.kafka.listeners[*].authentication.
KafkaUser.spec.authorization should match Kafka.spec.kafka.authorization.
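For example, with SCRAM-SHA-512 and simple authorization, the two resources might align as follows (the listener name, port, and type are illustrative):
# Kafka resource (excerpt)
spec:
  kafka:
    listeners:
      - name: secure
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512   # matches KafkaUser.spec.authentication
    authorization:
      type: simple              # matches KafkaUser.spec.authorization
---
# KafkaUser resource (excerpt)
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple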
A Kubernetes 1.25 cluster.
Access to the Kubernetes web console using an account with cluster-admin permissions, such as system:admin.
The kubectl command-line tool is installed and configured to connect to the Kubernetes cluster.
Download and extract the console installation artifacts.
The artifacts are included with installation and example files available from the release page.
The artifacts provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/kafka.
Set environment variables to update the installation files:
export NAMESPACE=kafka # (1)
export LISTENER_TYPE=route # (2)
export CLUSTER_DOMAIN=<domain_name> # (3)
The namespace in which you want to deploy the Kafka operator.
The listener type used to expose Kafka to the console.
The cluster domain name for your Kubernetes cluster.
In this example, the namespace variable is defined as kafka and the listener type is route.
Install the Kafka cluster.
Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace:
cat examples/kafka/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified Kubernetes namespace.
Check the status of the deployment:
kubectl get pods -n kafka
Output shows the operators and cluster readiness:
NAME                               READY   STATUS    RESTARTS
strimzi-cluster-operator           1/1     Running   0
console-kafka-console-nodepool-0   1/1     Running   0
console-kafka-console-nodepool-1   1/1     Running   0
console-kafka-console-nodepool-2   1/1     Running   0
console-kafka is the name of the cluster.
console-nodepool is the name of the node pool.
A node ID identifies the nodes created.
With the default deployment, you install three nodes.
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
2.2. Installing the console operator
Install the console operator using one of the following methods:
By applying a Console Custom Resource Definition (CRD)
Once the StreamsHub operator is submitted to the OperatorHub (see Issue 1526 for details), you can install the operator:
By using the Operator Lifecycle Manager (OLM) command line interface (CLI)
From the OperatorHub in the OpenShift web console (OpenShift clusters only)
The recommended approach is to install the operator via the Kubernetes CLI (kubectl) using the Operator Lifecycle Manager (OLM) resources.
If using the OLM is not suitable for your environment, you can install the operator by applying the CRD directly.
2.2.1. Deploying the console operator using a CRD
This procedure describes how to install the StreamsHub Console operator using a Custom Resource Definition (CRD).
Download and extract the console installation artifacts.
The artifacts are included with installation and example files available from the release page.
The artifacts include a Custom Resource Definition (CRD) file (console-operator.yaml) to install the operator without the OLM.
Set an environment variable to define the namespace where you want to install the operator:
export NAMESPACE=operator-namespace
In this example, the namespace variable is defined as operator-namespace.
Install the console operator with the CRD.
Use the sample installation files located in install/console-operator/non-olm. These resources install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML file:
cat install/console-operator/non-olm/console-operator.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
This command reads the YAML file, replaces the namespace environment variables, and applies the resulting configuration to the specified Kubernetes namespace.
Check the status of the deployment:
kubectl get pods -n operator-namespace
Output shows the deployment name and readiness:
NAME               READY   STATUS    RESTARTS
console-operator   1/1     Running   1
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Use the console operator to deploy the console and connect to a Kafka cluster.
2.3. Deploying and connecting the console to a Kafka cluster
Use the console operator to deploy the StreamsHub Console to the same Kubernetes cluster as a Kafka cluster managed by Strimzi. Use the console to connect to the Kafka cluster.
The console operator is deployed to the Kubernetes cluster.
Create a Console custom resource in the desired namespace.
If you deployed the example Kafka cluster provided with the installation artifacts, you can use the configuration specified in the examples/console/010-Console-example.yaml configuration file unchanged.
Otherwise, configure the resource to connect to your Kafka cluster.
Example console configuration:
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain> # (1)
  kafkaClusters:
    - name: console-kafka # (2)
      namespace: kafka # (3)
      listener: secure # (4)
      properties:
        values: [] # (5)
        valuesFrom: [] # (6)
      credentials:
        kafkaUser:
          name: console-kafka-user1 # (7)
Hostname to access the console by HTTP.
Name of the Kafka resource representing the cluster.
Namespace of the Kafka cluster.
Listener to expose the Kafka cluster for console connection.
(Optional) Add connection properties if needed.
(Optional) References to config maps or secrets, if needed.
(Optional) Kafka user created for authenticated access to the Kafka cluster.
Apply the Console configuration to install the console.
In this example, the console is deployed to the console-namespace namespace:
kubectl apply -f examples/console/010-Console-example.yaml -n console-namespace
Check the status of the deployment:
kubectl get pods -n console-namespace
Output shows the deployment name and readiness:
NAME            READY   STATUS   RUNNING
console-kafka   1/1     1        1
Access the console.
When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface.
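If you need to look up the hostname, one way is to read it back from the Console resource. The command below is a sketch and assumes the custom resource is addressable as console in your cluster:
kubectl get console my-console -n console-namespace -o jsonpath='{.spec.hostname}'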
2.3.1. Using an OIDC provider to secure access to Kafka clusters
Enable secure console connections to Kafka clusters using an OIDC provider. Configure the console deployment to connect to any Identity Provider (IdP), such as Keycloak or Dex, that supports OpenID Connect (OIDC). Also define the subjects and roles for user authorization. Security profiles can be configured for all Kafka cluster connections at a global level, though you can add roles and rules for specific Kafka clusters.
An example configuration is provided in the following file: examples/console/console-security-oidc.yaml.
The configuration introduces the following additional properties for console deployment:
security
Properties that define the connection details for the console to connect with the OIDC provider.
subjects
Specifies the subjects (users or groups) and their roles in terms of JWT claims or explicit subject names, determining access permissions.
roles
Defines the roles and associated access rules for users, specifying which resources (like Kafka clusters) they can interact with and what operations they are permitted to perform.
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security:
    oidc:
      authServerUrl: <OIDC_discovery_URL> # (1)
      clientId: <client_id> # (2)
      clientSecret: # (3)
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret
            key: client-secret
      trustStore: # (4)
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-oidc-configmap
              key: ca.jks
        password: # (5)
          value: truststore-password
    subjects:
      - claim: groups # (6)
        include: # (7)
          - <team_name_1>
          - <team_name_2>
        roleNames: # (8)
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include: # (9)
          - <user_1>
          - <user_2>
        roleNames:
          - administrators
    roles:
      - name: developers
        rules:
          - resources: # (10)
              - kafkas
            resourceNames: # (11)
              - <dev_cluster_a>
              - <dev_cluster_b>
            privileges: # (12)
              - '*'
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - '*'
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
URL for OIDC provider discovery.
Client ID for OIDC authentication to identify the client.
Client secret and client ID used for authentication.
Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
Optional password for the truststore. Can be provided as a plaintext value (as shown) or more securely by reference to a Secret. Plaintext values are not recommended for production.
JWT claim types or names to identify the users or groups.
Users or groups included under the specified claim.
Roles assigned to the specified users or groups.
Specific users included by name when no claim is specified.
Resources that the assigned role can access.
Specific resource names accessible by the assigned role.
Privileges granted to the assigned role for the specified resources.
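For example, instead of a plaintext value, the truststore password could be referenced from a Secret. This sketch assumes the same valueFrom/secretKeyRef pattern used for clientSecret; the Secret name and key are illustrative:
trustStore:
  type: JKS
  content:
    valueFrom:
      configMapKeyRef:
        name: my-oidc-configmap
        key: ca.jks
  password:
    valueFrom:
      secretKeyRef:
        name: my-truststore-secret
        key: truststore-password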
If you want to specify roles and rules for individual Kafka clusters, add the details under kafkaClusters[].security.roles[].
In the following example, the console-kafka cluster allows developers to list and view selected Kafka resources.
Administrators can also update certain resources.
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
      security:
        roles:
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - get
                  - list
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - get
                  - list
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - update
Optional OIDC authentication properties
The following properties can be used to further configure oidc authentication. These apply to any part of the console configuration that supports authentication.oidc, such as schema registries or metrics providers.
- grantType
Specifies the OIDC grant type to use. Required when using non-interactive authentication flows, where no user login is involved. Supported values:
CLIENT: Requires a client ID and secret.
PASSWORD: Requires a client ID and secret, plus user credentials (username and password) provided through grantOptions.
- grantOptions
Additional parameters specific to the selected grant type. Use grantOptions to provide properties such as username and password when using the PASSWORD grant type.
oidc:
  grantOptions:
    username: my-user
    password: <my_password>
- method
Method for passing the client ID and secret to the OIDC provider. Supported values:
BASIC: (default) Uses HTTP Basic authentication.
POST: Sends credentials as form parameters.
- scopes
Optional list of access token scopes to request from the OIDC provider. Defaults are usually defined by the OIDC client configuration. Specify this property if access to the target service requires additional or alternative scopes not granted by default.
oidc:
  scopes:
    - openid
    - registry:read
    - registry:write
- absoluteExpiresIn
Optional boolean. If set to true, the expires_in token property is treated as an absolute timestamp instead of a duration.
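As a reference, a sketch combining these optional properties under oidc might look as follows (values are illustrative):
oidc:
  grantType: CLIENT
  method: POST
  scopes:
    - openid
  absoluteExpiresIn: true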
2.3.2. Enabling a metrics provider
Configure the console deployment to enable a metrics provider. You can set up configuration to use one of the following sources to scrape metrics from Kafka clusters using Prometheus:
OpenShift’s built-in user workload monitoring
Use OpenShift’s workload monitoring, incorporating the Prometheus operator, to monitor console services and workloads without the need for an additional monitoring solution.
A standalone Prometheus instance
Provide the details and credentials to connect with your own Prometheus instance.
An embedded Prometheus instance (default)
Deploy a private Prometheus instance for use only by the console instance. The instance is configured to retrieve metrics from all Strimzi managed Kafka instances in the same Kubernetes cluster. Using embedded metrics is intended for development and evaluation only, not production.
Example configuration for OpenShift monitoring and a standalone Prometheus instance is provided in the following files:
examples/console/console-openshift-metrics.yaml
examples/console/console-standalone-prometheus.yaml
You can define Prometheus sources globally as part of the console configuration using metricsSources properties:
metricsSources
Declares one or more metrics providers that the console can use to collect metrics.
type
Specifies the type of metrics source. Valid options:
openshift-monitoring
standalone (external Prometheus)
embedded (console-managed Prometheus)
url
For standalone sources, specifies the base URL of the Prometheus instance.
authentication
(For standalone and openshift-monitoring only) Configures access to the metrics provider using basic, bearer token, or oidc authentication.
trustStore
(Optional, for standalone only) Specifies a truststore for verifying TLS certificates when connecting to the metrics provider. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.
Assign the metrics source to a Kafka cluster using the kafkaClusters.metricsSource property.
The value of metricsSource is the name of the entry in the metricsSources array.
The openshift-monitoring and embedded types require no further configuration besides the type.
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-ocp-prometheus
      type: openshift-monitoring
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: <prometheus_instance_address> # (1)
      authentication: # (2)
        username: my-user
        password: my-password
      trustStore: # (3)
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password: # (4)
          value: truststore-password
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-custom-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
URL of the standalone Prometheus instance for metrics collection.
Authentication credentials for accessing the Prometheus instance. Supported authentication methods:
basic: Requires username and password.
bearer: Requires token.
oidc: See Using an OIDC provider to secure access to Kafka clusters for details.
Optional truststore used to validate the metrics provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.
2.3.3. Using a schema registry with Kafka
Integrate a schema registry with the console to centrally manage schemas for Kafka data. The console currently supports integration with Apicurio Registry to reference and validate schemas used in Kafka data streams. Requests to the registry can be authenticated using supported methods, including OIDC.
A placeholder for adding schema registries is provided in: examples/console/010-Console-example.yaml.
You can define schema registry connections globally as part of the console configuration using schemaRegistries properties:
schemaRegistries
Defines external schema registries that the console can connect to for schema validation and management.
authentication
Configures access to the schema registry using basic, bearer token, or oidc authentication.
trustStore
(Optional) Specifies a truststore for verifying TLS certificates when connecting to the schema registry. Supported formats: JKS, PEM, PKCS12. Content may be provided using a ConfigMap or a Secret.
Assign the schema registry source to a Kafka cluster using the kafkaClusters.schemaRegistry property.
The value of schemaRegistry is the name of the entry in the schemaRegistries array.
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  schemaRegistries:
    - name: my-registry # (1)
      url: <schema_registry_URL> # (2)
      authentication: # (3)
        oidc:
          authServerUrl: <OIDC_discovery_URL>
          clientId: <client_id>
          clientSecret:
            valueFrom:
              secretKeyRef:
                name: my-oidc-secret
                key: client-secret
          method: POST
          grantType: CLIENT
          trustStore: # (4)
            type: JKS
            content:
              valueFrom:
                configMapKeyRef:
                  name: my-oidc-configmap
                  key: ca.jks
            password: # (5)
              value: truststore-password
      trustStore: # (6)
        type: PEM
        content:
          valueFrom:
            configMapKeyRef:
              name: my-apicurio-configmap
              key: cert-chain.pem
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      schemaRegistry: my-registry
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...
A unique name for the schema registry connection.
Base URL of the schema registry API. This is typically the REST endpoint, such as http://<host>/apis/registry/v2.
Authentication credentials for accessing the schema registry. Supported authentication methods:
basic: Requires username and password.
bearer: Requires token.
oidc: See Using an OIDC provider to secure access to Kafka clusters for details.
Optional truststore used to validate the OIDC provider’s TLS certificate. Supported formats include JKS, PEM, and PKCS12. Truststore content can be provided using either a ConfigMap (configMapKeyRef) or a Secret (secretKeyRef).
Optional password for the truststore. Can be provided as a plaintext value (as shown) or via a Secret. Plaintext values are not recommended for production.
Optional truststore used to validate the schema registry’s TLS certificate. Configuration format and source options are the same as for the OIDC truststore.
3. Navigating the StreamsHub Console
When you open the StreamsHub Console, the homepage displays a list of connected Kafka clusters. Click a cluster name to view its details from the following pages:
- Cluster overview
Displays high-level information about the Kafka cluster, including its status, key metrics, and resource utilization.
- Nodes
Provides details on broker and controller nodes, including their roles, operational status, and partition distribution.
- Topics
Lists topics and their configurations, including partition-specific information and connected consumer groups.
- Consumer groups
Displays consumer group activity, including offsets, lag metrics, and partition assignments.
Start with the cluster overview, then drill down into individual nodes, inspect topic configurations, or monitor consumer group activity.
Note | If the side menu for navigation of a cluster is hidden, click the hamburger menu (three horizontal lines) in the console header. |
4. HOME: Checking connected clusters
The homepage offers a snapshot of connected Kafka clusters, providing information on the Kafka version and associated project for each cluster. To find more information, log in to a cluster.
4.1. Logging in to a Kafka cluster from the homepage
The console supports authenticated user login to a Kafka cluster using SCRAM-SHA-512 and OAuth 2.0 authentication mechanisms. For secure login, authentication must be configured in your Strimzi managed Kafka cluster.
Note | If authentication is not set up for a Kafka cluster or the credentials have been provided using the Kafka sasl.jaas.config property (which defines SASL authentication settings) in the console configuration, you can log in anonymously to the cluster without authentication. |
You must have access to a Kubernetes cluster.
The console must be deployed and set up to connect to a Kafka cluster.
For secure login, you must have appropriate authentication settings for the Kafka cluster and user.
- SCRAM-SHA-512 settings
Listener authentication set to scram-sha-512 in Kafka.spec.kafka.listeners[*].authentication.
Username and password configured in KafkaUser.spec.authentication.
- OAuth 2.0 settings
An OAuth 2.0 authorization server with client definitions for the Kafka cluster and users.
Listener authentication set to oauth in Kafka.spec.kafka.listeners[*].authentication.
For more information on configuring authentication, see the Strimzi documentation.
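For OAuth 2.0, the listener configuration in the Kafka resource might look like the following sketch. The endpoint URIs are placeholders and only a subset of the available oauth options is shown; refer to the Strimzi documentation for the full set.
# Excerpt from Kafka.spec.kafka.listeners; values are illustrative
- name: secure
  port: 9093
  type: internal
  tls: true
  authentication:
    type: oauth
    validIssuerUri: https://<auth_server>/realms/<realm>
    jwksEndpointUri: https://<auth_server>/realms/<realm>/protocol/openid-connect/certs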
From the homepage, optionally filter the list of clusters by name, then click Login to cluster for a selected Kafka cluster.
Enter login credentials depending on the authentication method used.
For SCRAM-SHA-512, enter the username and password associated with the KafkaUser.
For OAuth 2.0, provide a client ID and client secret that is valid for the OAuth provider configured for the Kafka listener.
To end your session, click your username and select Logout, or return to the homepage.
5. Cluster overview page
The Cluster overview page shows the status of a Kafka cluster. Here, you can assess the readiness of Kafka brokers, identify any cluster errors or warnings, and gain insights into the cluster’s health. At a glance, the page provides information on the number of topics and partitions within the cluster, along with their replication status. Explore cluster metrics through charts displaying used disk space, CPU utilization, and memory usage. Additionally, topic metrics offer a comprehensive view of total incoming and outgoing byte rates for all topics in the Kafka cluster.
5.1. Pausing reconciliation of clusters
Pause cluster reconciliation from the Cluster overview page.
While reconciliation is paused, changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.
Log in to the Kafka cluster in the StreamsHub Console.
On the Cluster overview page, click Pause reconciliation.
Confirm the pause, after which the Cluster overview page shows a change of status warning that reconciliation is paused.
Click Resume reconciliation to restart reconciliation.
Note | If the status change is not displayed after pausing reconciliation, try refreshing the page. |
5.2. Accessing cluster connection details for client access
Retrieve the necessary connection details from the Cluster overview page to connect a client to a Kafka cluster.
Log in to the Kafka cluster in the StreamsHub Console.
On the Cluster overview page, click Cluster connection details.
Copy the bootstrap address (external or internal, depending on your client environment).
Add any required connection properties to your Kafka client configuration to establish a secure connection.
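For example, for a listener secured with SCRAM-SHA-512 over TLS, the client configuration might look like the following sketch. Replace the placeholders with the values shown in the console, and adjust the security protocol and mechanism to match your listener.
# Illustrative Kafka client properties for a SCRAM-SHA-512 listener
bootstrap.servers=<bootstrap_address>
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<username>" \
  password="<password>";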
Note | Ensure that the authentication type configured for the Kafka cluster matches the authentication type used by the client. |
6. Topics page
The Topics page lists all topics created for a Kafka cluster. You can filter the list by topic name, ID, or status.
The Topics page shows the overall replication status for partitions in the topic, as well as counts for the partitions in the topic and the number of associated consumer groups. The overall storage used by the topic is also shown.
Warning | Internal topics must not be modified. You can choose to hide internal topics from the list of topics returned on the Topics page. |
Click on a topic name to view additional topic information presented on a series of tabs:
- Messages
Messages shows the message log for a topic.
- Partitions
Partitions shows the replication status of each partition in a topic.
- Consumer groups
Consumer groups lists the names and status of the consumer groups and group members connected to a topic.
- Configuration
Configuration shows the configuration of a topic.
If a topic is shown as Managed, it means that it is managed using the Strimzi Topic Operator and was not created directly in the Kafka cluster.
Use the information provided on the tabs to check and modify the configuration of your topics.
6.1. Creating topics
Create topics from the Topics page.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics and Create topic.
Set core configuration, such as the name of the topic, and the number of topic partitions and replicas.
(Optional) Specify additional configuration, such as the following:
Size-based and time-based message retention policies
Maximum message size and compression type
Log indexing, and cleanup and flushing of old data
Review your topic configuration, then click Create topic.
The topics are created directly in the Kafka cluster without using KafkaTopic custom resources.
If you are using the Topic Operator to manage topics in unidirectional mode, create the topics using KafkaTopic resources outside the console.
For more information on topic configuration properties, see the Kafka documentation.
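For reference, a KafkaTopic resource created outside the console might look like the following sketch (the topic name, sizing, and config values are illustrative; the strimzi.io/cluster label must match the name of your Kafka resource):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: console-kafka
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000     # 7 days
    segment.bytes: 1073741824   # 1 GiB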
Note | For topic replication, partition leader elections can be clean or unclean. Clean leader election means that out-of-sync replicas cannot become leaders. If no in-sync replicas are available, Kafka waits until the original leader is back online before messages are picked up again. |
6.2. Deleting topics
Delete topics from the Topics page.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
Select the options icon (three vertical dots) for the relevant topic and click Delete.
Enter the topic name to confirm the deletion.
6.3. Checking topic messages
Track the flow of messages for a specific topic from the Messages tab. The Messages tab presents a chronological list of messages for a topic.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
From the Topics page, click the name of the topic you want to inspect.
Check the information on the Messages tab.
For each message, you can view its timestamp (in UTC), offset, key, and value.
Click on a message to view the full message details.
Click the Manage columns icon (represented as two columns) to choose the information to display.
Click the search dropdown and select the advanced search options to refine your search.
Choose to display the latest messages or messages from a specified time or offset. You can display messages for all partitions or a specified partition.
When you are done, you can click the CSV icon (represented as a CSV file) to download the information on the returned messages.
In this example, search terms are combined with message, retrieval, and partition options:
messages=timestamp:2024-03-01T00:00:00Z retrieve=50 partition=1 Error on page load where=value
The filter searches for the text "Error on page load" in partition 1 as a message value, starting from March 1, 2024, and retrieves up to 50 messages.
- Search terms
Enter search terms as text (has the words) to find specific matches and define where in a message to look for the term. You can search anywhere in the message or narrow the search to the key, header, or value.
For example:
messages=latest retrieve=100 642-26-1594 where=key
This example searches the latest 100 messages on message key 642-26-1594.
- Message options
Set the starting point for returning messages.
Latest messages to start from the latest message.
messages=latest
From offset to start from an offset in a partition. In some cases, you may want to specify an offset without a partition. However, the most common scenario is to search by offset within a specific partition.
messages=offset:5600253 partition=0
From Unix timestamp to start from a time and date in Unix format.
messages=epoch:1
From timestamp to start from an exact time and date in ISO 8601 format.
messages=timestamp:2024-03-14T00:00:00Z
- Retrieval options
Set a retrieval option.
Number of messages to return a specified number of messages.
messages=latest retrieve=50
Continuously to return the latest messages in real-time. Click the pause button (represented by two vertical lines) to pause the refresh. Unpause to continue the refresh.
retrieve=continuously
- Partition options
Choose to run a search against all partitions or a specific partition.
6.4. Checking topic partitions
Check the partitions for a specific topic from the Partitions tab. The Partitions tab presents a list of partitions belonging to a topic.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
From the Topics page, click the name of the topic you want to inspect.
Check the information on the Partitions tab.
For each partition, you can view its replication status, as well as information on designated partition leaders, replica brokers, and the amount of data stored by the partition.
You can view partitions by replication status:
- In-sync
All partitions in the topic are fully replicated. A partition is fully replicated when its replicas (followers) are 'in-sync' with the designated partition leader. Replicas are 'in-sync' if they have fetched records up to the log end offset of the leader partition within an allowable lag time, as determined by replica.lag.time.max.ms.
- Under-replicated
A partition is under-replicated if some of its replicas (followers) are not in-sync. An under-replicated status signals potential issues in data replication.
- Offline
Some or all partitions in the topic are currently unavailable. This may be due to issues such as broker failures or network problems, which need investigating and addressing.
You can also check information on the broker designated as partition leader and the brokers that contain the replicas:
- Leader
The leader handles all produce requests. Followers on other brokers replicate the leader’s data. A follower is considered in-sync if it catches up with the leader’s latest committed message.
- Preferred leader
When creating a new topic, Kafka’s leader election algorithm assigns a leader from the list of replicas for each partition. The algorithm aims for a balanced spread of leadership assignments. A "Yes" value indicates the current leader is the preferred leader, suggesting a balanced leadership distribution. A "No" value may suggest imbalances in the leadership assignments, requiring further investigation. If the leadership assignments of partitions are not well-balanced, it can contribute to size discrepancies. A well-balanced Kafka cluster should distribute leadership roles across brokers evenly.
- Replicas
Followers that replicate the leader’s data. Replicas provide fault tolerance and data availability.
Note | Discrepancies in the distribution of data across brokers may indicate balancing issues in the Kafka cluster. If certain brokers are consistently handling larger amounts of data, it may indicate that partitions are not evenly distributed across the brokers. This could lead to uneven resource utilization and potentially impact the performance of those brokers. |
6.5. Checking topic consumer groups
Check the consumer groups for a specific topic from the Consumer groups tab. The Consumer groups tab presents a list of consumer groups associated with a topic.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
From the Topics page, click the name of the topic you want to inspect.
Check the information on the Consumer groups tab.
Click a consumer group name to view consumer group members.
For each consumer group, you can view its status, the overall consumer lag across all partitions, and the number of members. For more information on checking consumer groups, see Consumer Groups page.
For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions. For more information on checking consumer group members, see Checking consumer group members.
Note | Monitoring consumer group behavior is essential for ensuring optimal distribution of messages between consumers. |
6.6. Checking topic configuration
Check the configuration of a specific topic from the Configuration tab. The Configuration tab presents a list of configuration values for the topic.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
From the Topics page, click the name of the topic you want to inspect.
Check the information on the Configuration tab.
You can filter for the properties you want to check, including selecting by data source:
DEFAULT_CONFIG properties have a predefined default value. This value is used when there are no user-defined values for those properties.
STATIC_BROKER_CONFIG properties have predefined values that apply to the entire broker and, by extension, to all topics managed by that broker. This value is used when there are no user-defined values for those properties.
DYNAMIC_TOPIC_CONFIG property values have been configured for a specific topic and override the default configuration values.
Tip | The Strimzi Topic Operator simplifies the process of creating and managing Kafka topics using KafkaTopic resources. |
6.7. Changing topic configuration
Change the configuration of a specific topic from the Configuration tab. The Configuration tab presents a list of configuration options for the topic.
The topics are configured directly in the Kafka cluster.
If you are using the Topic Operator to manage topics in unidirectional mode, configure the topics using KafkaTopic resources outside the console.
Log in to the Kafka cluster in the StreamsHub Console, then click Topics.
Select the options icon (three vertical dots) for the relevant topic and click Edit configuration. Or you can click the name of the topic you want to configure from the Topics page and click the Configuration tab.
Edit the configuration by updating individual property values. You can filter for the properties you want to configure, including selecting by data source:
DEFAULT_CONFIG properties have a predefined default value. This value is used when there are no user-defined values for those properties.
STATIC_BROKER_CONFIG properties have predefined values that apply to the entire broker and, by extension, to all topics managed by that broker. This value is used when there are no user-defined values for those properties.
DYNAMIC_TOPIC_CONFIG property values have been configured for a specific topic and override the default configuration values.
7. Nodes page
The Nodes page lists all nodes created for a Kafka cluster, including nodes that perform broker, controller, or dual roles. You can filter the list by node pool, role (broker or controller), or status.
For each node, you can view its status. For broker nodes, the page shows partition distribution across the cluster, including the number of partition leaders and followers.
Broker status is shown as one of the following:
- Not Running
The broker has not yet been started or has been explicitly stopped.
- Starting
The broker is initializing and connecting to the cluster. It is discovering and joining the metadata quorum.
- Recovery
The broker has joined the cluster but is in recovery mode. It is replicating necessary data and metadata before becoming fully operational. It is not yet serving client requests.
- Running
The broker is fully operational. It is registered with the controller and serving client requests.
- Pending Controlled Shutdown
The broker has initiated a controlled shutdown. It will shut down gracefully after completing in-flight operations.
- Shutting Down
The broker is shutting down. Client connections are closing, and internal resources are being released.
- Unknown
The broker’s state is unknown, possibly due to an unexpected error or failure.
If the broker has a rack ID, it identifies the rack or datacenter in which the broker resides.
Controller status is shown as one of the following, describing the controller’s role within the metadata quorum:
- Quorum leader
The controller is the active leader, coordinating cluster metadata updates and managing operations like partition reassignments and broker registrations.
- Quorum follower
The controller is a follower in the metadata quorum, passively replicating updates from the leader while maintaining a synchronized state. It is ready to take over as the leader if needed.
- Quorum follower lagged
The controller is a follower but has fallen behind the leader. It is not fully up to date with the latest metadata and may be ineligible for leader election until it catches up.
- Unknown
The controller’s state is unknown, possibly due to an unexpected error or failure.
Click on the right arrow (>) next to a node name to view more information about the node, including its hostname and disk usage.
Click on the Rebalance tab to show any rebalances taking place on the cluster.
Note | Consider rebalancing if partition distribution is uneven to ensure efficient resource utilization. |
7.1. Managing rebalances
When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can check their status from the Rebalance tab.
The Rebalance tab presents a chronological list of KafkaRebalance resources from which you can manage the optimization proposals.
You can filter the list by name, status, or rebalance mode.
Note | Cruise Control must be enabled to run alongside the Kafka cluster in order to use the Rebalance tab. For more information on setting up and using Cruise Control to generate proposals, see the Strimzi documentation. |
Log in to the Kafka cluster in the StreamsHub Console, then click Kafka nodes.
Check the information on the Rebalance tab.
For each rebalance, you can view its status and the time it was last updated.
Table 1. Rebalance status descriptions
Status                Description
New                   Resource has not been observed by the operator before
PendingProposal       Optimization proposal not generated
ProposalReady         Optimization proposal is ready for approval
Rebalancing           Rebalance in progress
Stopped               Rebalance stopped
NotReady              Error occurred with the rebalance
Ready                 Rebalance complete
ReconciliationPaused  Rebalance is paused
Note | The status of the KafkaRebalance resource changes to ReconciliationPaused when the strimzi.io/pause-reconciliation annotation is set to true in its configuration. |
Click on the right arrow (>) next to a rebalance name to view more information about the rebalance, including its rebalance mode, and whether auto-approval is enabled. If the rebalance involved brokers being removed or added, they are also listed.
Optimization proposals can be generated in one of three modes:
full is the default mode and runs a full rebalance.
add-brokers is the mode used after adding brokers when scaling up a Kafka cluster.
remove-brokers is the mode used before removing brokers when scaling down a Kafka cluster.
If auto-approval is enabled for a proposal, a successfully generated proposal goes straight into a cluster rebalance.
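For reference, a KafkaRebalance resource for a scale-down, with auto-approval enabled through the Strimzi annotation, might look like the following sketch (the resource name and broker IDs are illustrative):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: console-kafka
  annotations:
    strimzi.io/rebalance-auto-approval: "true"
spec:
  mode: remove-brokers
  brokers:
    - 3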
Click on the name of a KafkaRebalance resource to view a generated optimization proposal.
An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers.
For more information on the properties shown on the proposal and what they mean, see the Strimzi documentation.
Select the options icon (three vertical dots) and click on an option to manage a rebalance.
Click Approve to approve a proposal.
The rebalance outlined in the proposal is performed on the Kafka cluster.
Click Refresh to generate a fresh optimization proposal.
If there has been a gap between generating a proposal and approving it, refresh the proposal so that the current state of the cluster is taken into account with a rebalance.
Click Stop to stop a rebalance.
Rebalances can take a long time and may impact the performance of your cluster. Stopping a rebalance can help avoid performance issues and allow you to revert changes if needed.
Note | The options available depend on the status of the KafkaRebalance resource. For example, it’s not possible to approve an optimization proposal if it’s not ready. |
8. Consumer Groups page
The Consumer Groups page lists all consumer groups associated with a Kafka cluster. You can filter the list by consumer group name or status.
For each consumer group, you can view its status, the overall consumer lag across all partitions, and the number of members. Click on associated topics to show the topic information available from the Topics page tabs.
Consumer group status can be one of the following:
Stable indicates normal functioning.
Rebalancing indicates ongoing adjustments to the consumer group’s members.
Empty suggests no active members. If in the empty state, consider adding members to the group.
Click on a consumer group name to check group members. Select the options icon (three vertical dots) against a consumer group to reset consumer offsets.
8.1. Checking consumer group members
Check the members of a specific consumer group from the Consumer Groups page.
Log in to the Kafka cluster in the StreamsHub Console, then click Consumer Groups.
From the Consumer Groups page, click the name of the consumer group you want to inspect.
Click on the right arrow (>) next to a member ID to view the topic partitions a member is associated with, as well as any possible consumer lag.
For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions.
Any consumer lag for a specific topic partition reflects the gap between the last message a consumer has picked up (committed offset position) and the latest message written by the producer (end offset position).
8.2. Resetting consumer offsets
Reset the consumer offsets of a specific consumer group from the Consumer Groups page.
You might want to do this when reprocessing old data, skipping unwanted messages, or recovering from downtime.
All active members of the consumer group must be shut down before resetting the consumer offsets.
Log in to the Kafka cluster in the StreamsHub Console, then click Consumer Groups.
Click the options icon (three vertical dots) for the consumer group and click the reset consumer offsets option to display the Reset consumer offset page.
Choose to apply the offset reset to all consumer topics associated with the consumer group or select a specific topic.
If you selected a topic, choose to apply the offset reset to all partitions or select a specific partition.
Choose the position to reset the offset:
Custom offset (available only if you selected a specific topic and a specific partition)
If you select this option, enter the custom offset value.
Latest offset
Earliest offset
Specific date and time
If you selected date and time, choose the appropriate format and enter the date in that format.
Click Reset to perform the offset reset.
Before actually executing the offset reset, you can use the dry run option to view which offsets would be reset before applying the change.
From the Reset consumer offset page, click the down arrow next to Dry run.
Choose the option to run and show the results in the console.
Or you can copy the dry run command and run it independently against the consumer group.
The results in the console show the new offsets for each topic partition included in the reset operation.
A download option is available for the results.
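For orientation, an equivalent dry run using the standard Kafka command-line tools might look like the following sketch; the exact command provided by the console may differ, and the bootstrap address, group, and topic are placeholders:
bin/kafka-consumer-groups.sh --bootstrap-server <bootstrap_address> \
  --group <consumer_group> --topic <topic_name> \
  --reset-offsets --to-earliest --dry-run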