
What’s New in HiveMQ 4.16?

by HiveMQ Team

The HiveMQ team is proud to announce the release of HiveMQ Enterprise MQTT Platform 4.16. This release focuses on enhanced security features for the HiveMQ Enterprise Security Extension, such as file authentication and universal preprocessors. It also brings significant features to the private beta of the HiveMQ Data Governance Hub, including observability features, schema versioning, and support for the MQTT CLI.

Highlights

  • Additional HiveMQ Enterprise Security Extension enhancements
  • Observability features and schema versioning for HiveMQ Data Governance Hub
  • MQTT CLI commands for HiveMQ Data Governance Hub

Support for File Authentication in HiveMQ Enterprise Security Extension

The HiveMQ Enterprise Security Extension now supports file-based authentication for MQTT clients and Control Center users. Combined with the existing file-based authorization, this makes it possible to configure fully standalone pipelines without external dependencies such as a SQL database or an OAuth provider.

How it works

This release introduces the new File Authentication Manager and extends the existing File realm for file-based authorization with additional authentication capabilities.

The File Authentication Manager can be used as a drop-in replacement for the SQL Authentication Manager if the additional flexibility of a SQL Database is not needed.

Example of a user section inside the File Realm file:

<user>
    <name>admin-user</name>
    <password encoding="Base64">iYwiemNkYxaa5mVEMl36hRjBG5IeXuy652uehvL9lJM=</password>
    <iterations>10</iterations>
    <salt encoding="Base64">bXktc2FsdA==</salt>
    <algorithm>PKCS5S2</algorithm>
    <roles>
        <role>admin-role</role>
    </roles>
</user>

Example of a File Authentication Manager pipeline configuration:

<file-realm>
    <name>my-file-realm</name>
    <configuration>
        <file-path>my-file-realm-file.xml</file-path>
    </configuration>
</file-realm>
...
<listener-pipeline listener="ALL">
    ...
    <file-authentication-manager>
        <realm>my-file-realm</realm>
    </file-authentication-manager>
    ...
</listener-pipeline>

How it helps

File-based authentication (and authorization) is a quick and easy way to set up the HiveMQ Enterprise Security Extension to secure MQTT clients and/or the Control Center, without the steep learning curve of a full-blown DBMS or an OAuth provider.

It can also serve enterprise use cases by handling hashed passwords and providing dynamic reloading.
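For illustration, the hashed password in a File Realm user entry can be produced with any standard PBKDF2 implementation. The following Python sketch assumes that PKCS5S2 corresponds to PBKDF2 with HMAC-SHA1 and a 32-byte derived key (the length of the Base64-decoded example hash above); check the extension documentation for the exact parameters before relying on it.

```python
import base64
import hashlib

def file_realm_hash(password: str, salt: bytes, iterations: int) -> dict:
    """Derive Base64-encoded credentials for a File Realm <user> entry.

    Assumption: the PKCS5S2 algorithm maps to PBKDF2 with HMAC-SHA1
    and a 32-byte derived key.
    """
    derived = hashlib.pbkdf2_hmac(
        "sha1", password.encode("utf-8"), salt, iterations, dklen=32
    )
    return {
        "password": base64.b64encode(derived).decode("ascii"),
        "salt": base64.b64encode(salt).decode("ascii"),
        "iterations": iterations,
        "algorithm": "PKCS5S2",
    }

# Matches the <iterations> and <salt> values from the example above.
entry = file_realm_hash("super-secret", b"my-salt", 10)
print(entry["password"])
print(entry["salt"])  # the salt "my-salt" encodes to bXktc2FsdA==
```

The resulting Base64 strings map directly to the `<password>` and `<salt>` elements of the `<user>` entry.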

Support for Preprocessors on All HiveMQ Enterprise Security Extension Pipelines

The HiveMQ Enterprise Security Extension now supports authentication and authorization preprocessors on all pipelines, including the Control Center pipeline, the Control Center redirect pipeline, and the REST API pipeline.

How it works

The HiveMQ Enterprise Security Extension provides authentication and authorization preprocessors that help you customize the authentication and authorization through your pipelines.

<rest-api-pipeline>
    <authentication-preprocessors>
        ...
    </authentication-preprocessors>
    ...
    <authorization-preprocessors>
        ...
    </authorization-preprocessors>
    ...
</rest-api-pipeline>

How it helps

Now you can include authentication and authorization preprocessing for all your pipelines to fill the authentication variables that the next stage of the pipeline requires to authenticate the client or user.

Observability for HiveMQ Data Governance Hub

In our previous release, we introduced our brand new HiveMQ Data Governance Hub in private beta. After starting the journey with data validation, we now offer additional functionality to bring more observability into data pipelines.

  • Metrics.Counter.increment lets you build a custom metric to monitor your data pipeline.
  • UserProperties.add allows you to add user properties to MQTT messages to enrich your data pipeline.

How it works

The new function Metrics.Counter.increment can be used to create metrics per policy to gain observability in the data pipeline. Consider the following snippet taken from a policy:

"onSuccess": {
    "pipeline": [
        {
            "id": "incrementMyCounter",
            "functionId": "Metrics.Counter.increment",
            "arguments": {
                "metricName": "my-policy.succeeded",
                "incrementBy": 1
            }
        },
        {
            "id": "flagSchemaChecked",
            "functionId": "UserProperties.add",
            "arguments": {
                "name": "schema",
                "value": "success"
            }
        }
    ]
}

The new policy-defined metric is created and incremented by 1 every time an MQTT message passes the policy. HiveMQ Enterprise exposes these custom metrics alongside its standard metrics, so they plug directly into your existing monitoring architecture.

Moreover, the policy demonstrates how to add custom MQTT User Properties to each MQTT message. For example, you can flag good or bad actors by adding the property “schema”: “success”. Check out the example in our policy cookbook repository.

How it helps

IoT applications have many data producers publishing MQTT messages that usually agree on the same schema. In rare cases, MQTT messages do not adhere to the defined schema. To quantify or flag these violations, we've introduced two new functions to our policy engine: Metrics.Counter.increment and UserProperties.add.

Schema Versioning for HiveMQ Data Governance Hub

Data producers may change their schemas over time, either to add capabilities or to fix errors. Data consumers, on the other hand, expect schemas that have been negotiated with the data producers. This release introduces a new feature for adding multiple versions of the same schema.

How it works

Each schema version is identified by the version number that the REST API returns on creation. A policy can reference either a specific schema version or the string “latest”, in which case the latest schema version is used automatically.
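For illustration, a policy could reference a schema version roughly as follows. This is a sketch: the field names schemaId and version, the strategy value, and the overall layout of the validation section are assumptions and may differ in your version of the private beta.

```json
{
  "validation": {
    "validators": [
      {
        "type": "schema",
        "arguments": {
          "strategy": "ALL_OF",
          "schemas": [
            { "schemaId": "temperature-schema", "version": "latest" }
          ]
        }
      }
    ]
  }
}
```

Pinning a specific version number instead of “latest” keeps a policy stable while producers roll out a new schema.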

How it helps

By having a single policy in place for a specific topic filter, users can now validate multiple schema versions for one topic filter. Data producers may get updates to support new features or fix bugs. A new version often comes with updated data schemas, which can now be handled in the policy engine.

MQTT CLI commands for HiveMQ Data Governance Hub

Platform release 4.16 introduces new commands to our widely used MQTT CLI tool for interacting with the HiveMQ Data Governance Hub. Policies and schemas can now be managed easily with these new commands. For more information, have a look at our publicly available GitHub repository.

How it works

If you have a Protobuf schema description on your local disk and the HiveMQ Data Governance Hub is up and running, the following command creates a new schema using the MQTT CLI:

mqtt hivemq schemas create --id temperature-schema --type protobuf --file temperature.desc --message-type Temperature

The command creates a new Protobuf schema with the ID temperature-schema and specifies Temperature as the message type. Refer to our Quick Start Guide to compile a Protobuf message into a description file.
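Policies can be managed in the same way. For example, assuming the policies subcommand mirrors the schemas subcommand shown above, and assuming a policy definition stored in my-policy.json (a hypothetical file name), a command along these lines creates it:

mqtt hivemq policies create --file my-policy.json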

How it helps

The new commands make it much easier to work with policies and schemas by interacting with the REST API of HiveMQ Enterprise rather than with hand-written cURL commands.

More Noteworthy Features and Improvements

HiveMQ Enterprise MQTT Broker

  • Introduced numerous additional system metrics to provide increased observability and deeper insights.
  • Fixed a race condition that could interfere with cluster state lookups and potentially cause delays in the application of cluster topology changes.
  • Fixed a race condition that could prevent correct adjustment of the overload protection levels when a node leaves the cluster.
  • Fixed a race condition that could cause an overloaded or temporarily sluggish node to be removed from the cluster unnecessarily.
  • Fixed reliability issues that arise when the process of a new node joining a cluster overlaps with the shutdown process of another node.
  • Fixed an issue that could cause queued packets to be incorrectly marked as expired in some rare cases.
  • Fixed an issue when generating a large diagnostic archive for larger deployments.
  • Fixed an issue where some REST API error responses were not returned in the expected JSON format.

HiveMQ Enterprise Extension for Kafka

  • Added a limit to restrict the number of warning log messages the extension generates when a transformer fails to forward a message.

HiveMQ Enterprise Security Extension

  • Fixed a wrong foreign key constraint in the rest_api_user_permissions table in the MySQL DDL script mysql_create.sql.

HiveMQ Data Governance Hub (Closed Beta)

  • Added memory limits to data validation policies and schemas to protect against possible OutOfMemory errors.
  • Added automatic validation of schema identifiers to ensure consistency and simplify integration with third-party tools.
  • Added automatic validation of policy identifiers to ensure consistency and simplify integration with third-party tools.
  • Added automatic validation of function arguments to facilitate accurate policy creation.
  • Changed the string interpolation format from $variable to ${variable} for consistency.
  • Limited the accepted length of policy and schema identifiers to 1024 characters.
  • Fixed a NullPointerException that could occur in certain cases when a policy was deleted while messages were being published.
  • Added stateful start capability for policies and schemas.
  • Added a new topic parameter for the data validation policies endpoint that makes it possible to return only the policies that are applied to a specific topic.
  • Added the ability to request multiple schemas at once via REST API.
  • Added pagination to schema and policy list requests for the REST API.
  • Policies that contain unknown variables are now rejected during creation with an error message.
  • Renamed policy functions: Delivery.redirectTo and System.log.

HiveMQ Swarm

  • Added a metric to HiveMQ Swarm's Commander to monitor the progress of the current scenario.
  • Added a metric to HiveMQ Swarm's Commander to monitor the number of connected Agents.

MQTT CLI

  • Added support for JKS certificate containers to establish a TLS connection.
  • Added support for PKCS#12 certificate containers to establish a TLS connection.
  • Added commands to manage policies and schemas in HiveMQ Data Governance Hub.

Get Started Today

To upgrade to HiveMQ 4.16 from a previous HiveMQ version, take a look at our HiveMQ Upgrade Guide. To learn more about all the features we offer, explore the HiveMQ User Guide.

HiveMQ Team

The HiveMQ team loves writing about MQTT, Sparkplug, Industrial IoT, protocols, how to deploy our platform, and more. We focus on industries ranging from energy, to transportation and logistics, to automotive manufacturing. Our experts are here to help; contact us with any questions.
