
HiveMQ Platform Operator for Kubernetes 1.5.0 is now available!

by HiveMQ Team

The HiveMQ Team is excited to announce the release of HiveMQ Platform Operator for Kubernetes 1.5.0. This release adds useful configuration options to the HiveMQ Helm charts and provides important bug fixes for the HiveMQ Platform Operator.

IMPORTANT: HiveMQ Platform Operator for Kubernetes 1.4.0 contains a critical issue when VolumeClaimTemplates (VCT) for PersistentVolumeClaims (PVC) are configured without a volumeMode: Filesystem attribute. This can cause running HiveMQ Platform Pods to terminate in specific circumstances. If you run version 1.4.0 and use a VCT and PVC configuration, we recommend updating to version 1.5.0. For more details, see the Critical PersistentVolumeClaims Fix section of this release post.

Highlights

  • New configuration options in the HiveMQ Helm charts.
  • Critical fix for PersistentVolumeClaims (PVC) handling in the HiveMQ Platform Operator.

Additional HiveMQ Helm Chart Configuration Options

Custom Labels for HiveMQ Platform Pods and Services

In Kubernetes, labels are key/value pairs that can be attached to Kubernetes objects such as Pods or Services. Labels identify an object and are frequently used to enable efficient queries, selection, and filtering with tools such as kubectl. You can now configure custom labels separately for HiveMQ Platform Pods and for the Services the Helm chart creates.

How it works

Add your custom labels to HiveMQ Platform Pods and Services in the values.yaml file of the HiveMQ Platform Helm chart:

nodes:
  # Labels added to each HiveMQ Platform Pod
  labels:
    pod-label-key-1: label-value-1
    pod-label-key-2: label-value-2

services:
  # MQTT service
  - type: mqtt
    exposed: true
    containerPort: 1883
    # Custom labels added to the service
    labels:
      service-label-key-1: label-value-1
      service-label-key-2: label-value-2

How it helps

Labels can be used to identify, organize, and select resources in Kubernetes. Setting custom labels for Pods helps organize them and makes it possible to filter resources based on specific criteria. For example, to get all Pods with the label pod-label-key-1=label-value-1:

kubectl get pods -l pod-label-key-1=label-value-1
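
The same label selector syntax works for any labeled resource, so the custom Service labels from the configuration above can be queried in the same way:

kubectl get services -l service-label-key-1=label-value-1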

Configurable Kubernetes Service Names

You can now optionally configure the name of each Kubernetes Service that the HiveMQ Platform Helm chart creates. By default, a Service name is generated with the following pattern: hivemq-<release-name>-<service-type>-<service-port>, resulting in a Service name such as hivemq-release1-mqtt-1883. Customized Service names can be helpful in large projects that require many Services and roles.

NOTE: Kubernetes Service names must be unique for Services in the same namespace.

How it works

Set your customized Service name in the Service configuration:

- type: mqtt
  name: "my-custom-mqtt-service-name"
  exposed: false
  containerPort: 1883
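
After the Helm release is installed or upgraded, you can verify that the Service was created with the custom name (assuming the configuration above and your release namespace):

kubectl get service my-custom-mqtt-service-name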

New Metric Configuration Options

Starting with version 1.5.0, the HiveMQ Platform Helm chart offers configuration options for the Prometheus extension that exposes HiveMQ metrics.

How it works

You can now disable the Prometheus extension, configure the port on which the Prometheus extension provides metrics, and specify the path on which metrics are available. These options are useful for configuring Prometheus ServiceMonitors that define the scrape targets for the Prometheus server.

# Metrics configuration options for the HiveMQ Prometheus extension.
metrics:
  enabled: true
  port: 9399
  path: /

How it helps

It is now possible to disable the Prometheus extension if a different monitoring solution is chosen. In addition, the port and path can now be customized to accommodate the Prometheus ServiceMonitor configurations. For more information and configuration details, see Monitoring with Prometheus.
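
For illustration, here is a minimal ServiceMonitor sketch that scrapes the endpoint configured above. It assumes the Prometheus Operator is installed; the selector label app.kubernetes.io/name: hivemq-platform is an assumption and must be adjusted to match the labels of the Service that exposes the metrics port:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: hivemq-platform-metrics
spec:
  selector:
    matchLabels:
      # Assumption: adjust to the labels of your HiveMQ metrics Service
      app.kubernetes.io/name: hivemq-platform
  endpoints:
    # Matches the metrics.port and metrics.path values configured above
    - targetPort: 9399
      path: /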

Additional Features and Improvements

HiveMQ Platform Operator for Kubernetes Helm charts

  • Added support for custom labels for HiveMQ Platform Pods and Services.
  • Added support for custom names for HiveMQ Platform Services.

HiveMQ Platform Operator for Kubernetes

  • Fixed an issue in which a rolling restart was not triggered as expected when a VolumeMount was removed.
  • Adjusted how changes to resource limits and requests of the hivemq-platform-operator-init container are handled to prevent unnecessary rolling restarts.
  • Fixed an issue to ensure custom annotations and labels defined in the custom resource are consistently added to all managed Kubernetes resources.
  • Added a configuration option to specify whether changes to the Pod template metadata trigger a rolling restart.
  • Switched StatefulSet reconciliation to server-side apply to reduce the occurrence of KubernetesClientException notifications with status code 409 (Conflict).
  • Fixed a critical issue where a change of VolumeClaimTemplates in the StatefulSet specification for PersistentVolumeClaims could lead to the termination of all running Platform Pods. For more information, see Critical PersistentVolumeClaims Fix.

Critical PersistentVolumeClaims (PVC) Fix

HiveMQ Platform Operator for Kubernetes 1.4.0 contains a critical issue when VolumeClaimTemplates (VCT) for PersistentVolumeClaims are configured without a volumeMode: Filesystem attribute. This can cause running HiveMQ Platform Pods to terminate.

Background Information

Kubernetes StatefulSets manage PersistentVolumeClaims (PVC) that are defined in the VolumeClaimTemplates (VCT) section of the StatefulSet specification.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  ...
  volumeClaimTemplates:
    - metadata:
        name: my-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: my-storage-class
        volumeMode: Filesystem

The VCT section of the StatefulSet is immutable, so changes to PVCs are not supported in Kubernetes once the StatefulSet is created.

A common workaround for changing immutable fields of a StatefulSet is to delete the StatefulSet without deleting dependent resources. A new matching StatefulSet is then created that contains the required changes. The new StatefulSet continues to reconcile the running Pods.
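
In kubectl terms, this orphan deletion looks like the following (shown for the example StatefulSet above; the operator performs the equivalent API call programmatically):

# Delete the StatefulSet object but keep its Pods and PVCs running
kubectl delete statefulset example --cascade=orphan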

The HiveMQ Platform Operator uses this approach to apply changes as well. With HiveMQ Platform Operator 1.4.0 and a specific VCT configuration, two separate issues can arise that, in combination, can result in the termination of running HiveMQ Platform Pods.

The issue

Kubernetes sometimes modifies a resource that is reconciled by an operator by adding default configuration values to the resource spec. The modified version of the resource no longer matches the desired state, and the operator attempts to reconcile the resource again. This leads to a fast, constant update loop for the resource. In the case of the StatefulSet object, the mismatch results in a constant rolling restart of the HiveMQ Platform.

The default volumeMode field of a PVC configuration can create such an update loop. If the volumeMode is not configured, Kubernetes adds the default volumeMode: Filesystem attribute value to the PVC. Because the VCT section of the StatefulSet is immutable, each iteration of the update loop also triggers the re-creation of the StatefulSet via orphan deletion. If the deletion and re-creation of the StatefulSet happen too quickly and too often, the managed Pods of the StatefulSet are eventually terminated.

The workaround

An immediate workaround to avoid the infinite update loop is to explicitly set the default value volumeMode: Filesystem in all your VolumeClaimTemplates. When the field is defined, there should be no mismatch between the desired and actual state.
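
To check whether Kubernetes has already defaulted the field on an existing claim, you can inspect the PVC directly. The claim name below follows the <template-name>-<statefulset-name>-<ordinal> pattern of the example StatefulSet above:

# Prints "Filesystem" when the default value was applied by Kubernetes
kubectl get pvc my-storage-example-0 -o jsonpath='{.spec.volumeMode}'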

The solution

In HiveMQ Platform Operator 1.5.0, we no longer re-create the StatefulSet when PVC changes occur. Instead, we detect the changes and enter a new error state INVALID_VOLUME_CLAIM_TEMPLATES with a clear error message: VolumeClaimTemplates in StatefulSet specification have been modified, please revert to the previous state. This stops the reconciliation process so the immutable changes to the StatefulSet are not rolled out. The operator automatically continues its normal operation once the custom resource is updated to the previous state of the PVCs (other changes can of course be applied).

NOTE: The HiveMQ Platform Operator for Kubernetes 1.5.1 maintenance release (August 9, 2024) fixes an issue that can block reconciliation when updating from HiveMQ Platform Operator version 1.3.1 or older. For all the details, see the HiveMQ Platform Operator for Kubernetes 1.5.1 maintenance release post.

Get Started Today

To get started with the new HiveMQ Platform Operator, see our HiveMQ Platform Operator Quick Start Guide.

To update from a previous version of the HiveMQ Platform Operator for Kubernetes, you need to update your HiveMQ Platform custom resource definition (CRD). For step-by-step instructions, see our Upgrade Guide.

To learn more about our new operator, see HiveMQ Platform Operator for Kubernetes.

HiveMQ Team

The HiveMQ team loves writing about MQTT, Sparkplug, Industrial IoT, protocols, how to deploy our platform, and more. We focus on industries ranging from energy, to transportation and logistics, to automotive manufacturing. Our experts are here to help, contact us with any questions.
