
What's New in HiveMQ 4.30?

by HiveMQ Team

The HiveMQ team is proud to announce the release of HiveMQ Enterprise MQTT Platform 4.30. This release introduces new Data Hub Modules, powerful new Data Hub transformation scripts, an added resilience configuration for the Enterprise Security Extension SQL realm, and plenty of performance and usability improvements.

Highlights

  • Enhanced Data Hub transformation scripts 
  • New HiveMQ Modules for Data Hub
  • Added resilience configuration for the Enterprise Security Extension SQL realm

Enhanced Data Hub transformation scripts

We are thrilled to announce new Data Hub transformation scripts with even more powerful features. Previously, scripts could transform incoming MQTT messages by adding fields, performing computations, or restructuring the entire payload. With the release of HiveMQ 4.30, a single script can now also create one or more new MQTT messages from a single incoming message. This new functionality opens up a wide range of use cases you can flexibly execute within Data Hub, including splitting metrics into subtopics, copying and filtering messages, and much more.

How it works

One transformation script can create multiple branches that contain new MQTT messages. Each branch that is created can be handled separately in a data policy.

Suppose an input MQTT message contains an array of metrics, and each metric should be re-published to a different topic. The following transform function splits such a message:

function transform(publish, context) {
  const metrics = publish.payload.metrics;

  // Create one new MQTT message per metric and publish it
  // to a subtopic named after the metric.
  metrics.forEach(metric => {
    const metricPublish = {
      topic: publish.topic + '/' + metric.metricName,
      payload: metric
    };

    // Add the new message to the 'branch1' branch; the data policy
    // defines how this branch is processed further.
    context.branches['branch1'].addPublish(metricPublish);
  });

  // Return the original message unchanged.
  return publish;
}

For each metric, a new message is created with that metric as its payload and a modified topic name. The message is added to the branch1 branch, and finally, the original message is returned unchanged.

The Data Hub data policy defines the further handling of the branch and which serializer is used.

Example Data Hub data policy for branch1

{
  "id": "fanout-messages",
  "matching": {
    "topicFilter": "factory/data-points/#"
  },
  "validation": {
    "validators": []
  },
  "onSuccess": {
    "pipeline": [
      {
        "id": "operation-2eng0",
        "functionId": "Serdes.deserialize",
        "arguments": {
          "schemaId": "simple-json",
          "schemaVersion": "latest"
        }
      },
      {
        "id": "fanout-script",
        "functionId": "fn:fanout-messages:latest",
        "arguments": {},
        "onBranch": [
          {
            "branchId": "branch1",
            "pipeline": [
              {
                "id": "serde",
                "functionId": "Serdes.serialize",
                "arguments": {
                  "schemaId": "simple-json",
                  "schemaVersion": "latest"
                }
              }
            ]
          }
        ]
      },
      {
        "id": "serde",
        "functionId": "Serdes.serialize",
        "arguments": {
          "schemaId": "simple-json",
          "schemaVersion": "latest"
        }
      },
      {
        "id": "drop",
        "functionId": "Mqtt.drop",
        "arguments": {}
      }
    ]
  }
}

The fanout-messages function executes the defined script and adds messages to branch1. The onBranch object defines how each branch is processed further. In this example, all messages in branch1 are serialized using the simple-json schema, while the original message is dropped by the Mqtt.drop function at the end of the main pipeline, so only the per-metric messages are delivered.

For complete configuration details, see addPublish Function in Data Hub Transformations.

How it helps

The new addPublish function in Data Hub helps you customize your in-flight data and ensure that the incoming MQTT payload information is more relevant and actionable after the transformation. 

For example, you can extract metrics from payload formats such as Eclipse Sparkplug and publish them to configurable MQTT topics, or copy messages to a new branch while filtering out fields.
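
As a minimal sketch of the copy-and-filter case, the following hypothetical script copies each message to a separate branch while removing one field from the copy (the branch name audit and the field name secret are invented for illustration, and a JSON payload is assumed):

function transform(publish, context) {
  // Copy the payload without the (hypothetical) sensitive field.
  const { secret, ...filteredPayload } = publish.payload;

  // Add the filtered copy to the 'audit' branch; as with 'branch1'
  // above, the data policy defines how this branch is handled.
  context.branches['audit'].addPublish({
    topic: 'audit/' + publish.topic,
    payload: filteredPayload
  });

  // Deliver the original message unchanged.
  return publish;
}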

In a Sparkplug payload, metrics are contained in the metrics field of the payload message. Each metric provides information about a specific process or device state. Maintaining a clear overview of these metrics can help organizations improve operational efficiency and resource allocation, and gain real-time, data-driven insights:

  • Get granular visibility into individual components, processes, and their performance.
  • Drive data-driven decision-making with real-time information.
  • Enable predictive maintenance to address potential issues before they happen.
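
For reference, a Sparkplug payload decoded to JSON has roughly the following shape (field names follow the Sparkplug B specification; the values here are invented). Each entry in the metrics array can then be re-published as its own MQTT message with a script like the one shown above:

{
  "timestamp": 1718000000000,
  "metrics": [
    {
      "name": "Motor/Temperature",
      "timestamp": 1718000000000,
      "dataType": "Double",
      "value": 68.2
    },
    {
      "name": "Motor/RPM",
      "timestamp": 1718000000000,
      "dataType": "Int32",
      "value": 1450
    }
  ],
  "seq": 3
}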

Keep an eye out for our upcoming blog post on Data Hub transformation scripts to learn how you can use Data Hub scripts to support additional use cases. 

HiveMQ Modules for Data Hub

HiveMQ 4.30 introduces new HiveMQ Modules for Data Hub, including a highly requested Sparkplug Module for Data Hub.

The HiveMQ Control Center provides easy access to all Data Hub Modules:

HiveMQ Control Center Data Hub Modules

  • hivemq-sparkplug:  The HiveMQ Sparkplug Module for Data Hub helps you migrate from the rigid topic structure of Sparkplug to a more flexible MQTT topic structure without requiring any custom code or complex configuration.
  • hivemq-duplicate-messages: The HiveMQ Duplicate Messages Module for Data Hub identifies consecutive identical client messages to prevent unnecessary resource consumption. For example, you can save bandwidth and storage costs by dropping duplicate readings from sensors such as a temperature sensor that sends the same value repeatedly.
  • hivemq-validate-simple-json: The HiveMQ Validate Simple JSON Module for Data Hub helps you quickly add JSON validation capabilities to your HiveMQ deployment. You can decide whether invalid messages are logged or dropped to keep non-compliant JSON out of your data streams.

How it works

In the HiveMQ Control Center, you can create an instance of a module from the Data Hub | Modules overview. Once selected, you can configure the instance as needed and start processing your in-flight data.

Data Hub Module Configuration

For example, with a single click, you can create an instance of the Sparkplug Module that is immediately operational. 

How it helps

HiveMQ Modules for Data Hub, including the Sparkplug module, offer easy-to-use functionality that can be quickly accessed from the HiveMQ Control Center. Customers have the option to define fine-grained and fully flexible Data Hub policies or utilize pre-defined Modules that implement ready-to-use functionalities.

Module functionality includes:

  1. Schema validation: Implement policies that check whether your data sources send data in the format you expect. For example, if data is not Sparkplug compliant, you can define actions to drop the message or disconnect the client (see the policy sketch after this list).
  2. Sparkplug Protobuf to JSON conversion: To improve system interoperability and simplify data integration across diverse platforms, the Data Hub Sparkplug Module converts Sparkplug Protobuf payloads to the user-friendly JSON format.
  3. Metric fan-out: To ensure your data consumers receive relevant and actionable data, Data Hub scripts extract metrics from Sparkplug payloads and publish them to configurable MQTT topics.
  4. Flexible topic structure: Bypass level limitations in Sparkplug B by converting Sparkplug delimiters to an easily expandable MQTT topic structure.
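
As a rough sketch of the schema validation case, a hand-written data policy with a similar effect might look as follows (the policy ID, topic filter, and schema ID are placeholders). The onFailure pipeline logs and drops non-compliant messages; to disconnect the client instead, the Mqtt.disconnect function can be used in its place:

{
  "id": "validate-sparkplug-example",
  "matching": {
    "topicFilter": "spBv1.0/#"
  },
  "validation": {
    "validators": [
      {
        "type": "schema",
        "arguments": {
          "strategy": "ALL_OF",
          "schemas": [
            {
              "schemaId": "sparkplug-schema",
              "version": "latest"
            }
          ]
        }
      }
    ]
  },
  "onFailure": {
    "pipeline": [
      {
        "id": "log-invalid",
        "functionId": "System.log",
        "arguments": {
          "level": "WARN",
          "message": "Dropped non-compliant message on topic ${topic}"
        }
      },
      {
        "id": "drop-invalid",
        "functionId": "Mqtt.drop",
        "arguments": {}
      }
    ]
  }
}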

All module features can be turned on or off as needed. To learn more about Data Hub Modules and configuration options, visit our documentation.

NOTE: The HiveMQ Modules for Data Hub feature utilizes transformation scripts that are not yet fully supported for Linux Arm64. As a result, you cannot use the Data Hub Modules feature on Linux Arm64.

Added resilience configuration for the Enterprise Security Extension SQL realm

Starting with HiveMQ 4.30, you can configure advanced resilience with circuit breakers for the connections to the SQL databases used for authentication and authorization. Circuit breakers help protect your deployment against cascading failures when the configured databases that store your authentication and authorization details become unreachable or slow.

How it works

You can configure a threshold for failures when connecting to the remote database for the SQL realm. This provides resilience against failing or hanging database queries. The circuit breaker opens based on the connection failure rate. While the circuit breaker is open, connections to the database are prevented. After a configurable open-state duration, the circuit breaker retries and automatically closes based on the connection success rate. A second SQL realm can be configured as a hot standby to provide fast failover.

In the following example, the circuit breaker opens on the first connection failure that occurs. The configured chain-authentication-manager then forwards the authentication call to the second configured SQL realm, which serves as a hot standby.

<enterprise-security-extension>
    <realms>
        <sql-realm>
            <name>main-realm</name>
            <enabled>true</enabled>
            <configuration>
                ...
                <!-- Open the circuit after 1 failed execution in a sliding window of 1 execution. -->
                <circuit-breaker>
                    <!-- Keep the circuit open for 60 seconds before allowing retries. -->
                    <open-state-duration-millis>60000</open-state-duration-millis>
                    <failure-threshold>
                        <count-executions-based>
                            <count>1</count>
                            <sliding-window-executions>1</sliding-window-executions>
                        </count-executions-based>
                    </failure-threshold>
                </circuit-breaker>
            </configuration>
        </sql-realm>
        <sql-realm>
            <name>fallback-realm</name>
            <enabled>true</enabled>
            <configuration>
                ...
            </configuration>
        </sql-realm>
    </realms>
    <pipelines>
        <listener-pipeline listener="ALL">
            <chain-authentication-manager>
                <strategy>
                    <check-next-on-unknown-authentication-key/>
                </strategy>
                <chain>
                    <sql-authentication-manager>
                        <realm>main-realm</realm>
                    </sql-authentication-manager>
                    <sql-authentication-manager>
                        <realm>fallback-realm</realm>
                    </sql-authentication-manager>
                </chain>
            </chain-authentication-manager>
            ...
        </listener-pipeline>
    </pipelines>
</enterprise-security-extension>

How it helps

The added resilience configuration for SQL realms in the Enterprise Security Extension lets you configure a circuit breaker for your authentication and authorization database. Together with a hot-standby SQL realm configuration, the new option adds a higher level of resilience to your authentication and authorization use cases. The use of a circuit breaker can allow easier migration and maintenance of your databases without the need for downtime.

For more information and complete configuration details, see Resilience.

More Noteworthy Features and Improvements

HiveMQ Enterprise MQTT Broker

  • Improved throughput for messages with medium-sized payloads of 1 kB or more.
  • Fixed an issue that could cause an unnecessary error message to be logged when shutting down a node.
  • Fixed an issue that could cause an inaccurate session expiry log statement to print when a client reconnects.

HiveMQ Data Hub

  • Fixed an issue that could prevent the replication of non-UTF-8 encoded schemas.
  • Significantly improved performance for adding schemas, scripts, and policies.

HiveMQ Enterprise Security Extension

  • Improved error message when unlicensed features are used in the HiveMQ Enterprise Security Extension.

HiveMQ Enterprise Extension for Kafka

  • Fixed an issue in the Kafka Dashboard of the HiveMQ Control Center that could generate spurious error log messages during a cluster restart.
  • Fixed an issue that could cause unnecessarily high memory usage when using the Kafka Dashboard of the HiveMQ Control Center.

HiveMQ Enterprise Extension for PostgreSQL

  • Added new retry behavior for insert statements that fail due to database connection errors to avoid possible data loss.
  • Added prepared statement caching to optimize resource utilization.

HiveMQ Enterprise Extension for MySQL

  • Added new retry behavior for insert statements that fail due to database connection errors to avoid possible data loss.
  • Added prepared statement caching to optimize resource utilization.

Get Started Today

To upgrade to HiveMQ 4.30 from a previous HiveMQ version, follow our HiveMQ Upgrade Guide. To learn more about all of the features we offer, explore the HiveMQ User Guide.

HiveMQ Team

The HiveMQ team loves writing about MQTT, Sparkplug, Industrial IoT, protocols, how to deploy our platform, and more. We focus on industries ranging from energy, to transportation and logistics, to automotive manufacturing. Our experts are here to help; contact us with any questions.
