Deploy HiveMQ MQTT Broker with Amazon Elastic Container Service (ECS) Anywhere

by Anthony Olazabal
16 min read

Deploying a HiveMQ Enterprise broker with Amazon Elastic Container Service (ECS) Anywhere opens up powerful possibilities for managing and scaling IoT infrastructures. With the growing demand for reliable and scalable MQTT brokers, HiveMQ's enterprise-grade solution offers the robust performance and advanced features necessary for today's IoT ecosystems. By leveraging Amazon ECS Anywhere, you can deploy and manage your HiveMQ brokers beyond the cloud, extending into on-premises environments and edge locations with the same ease and consistency as cloud-native deployments.

In this guide, we will walk you through the step-by-step process of deploying HiveMQ Enterprise MQTT broker using Amazon ECS Anywhere, enabling you to harness the full potential of your IoT applications across diverse environments. Whether you're new to ECS Anywhere or a seasoned cloud professional, this tutorial will provide you with practical insights and best practices to ensure a seamless and efficient deployment. Let's dive in!

Architecture Overview of Deploying MQTT Broker with Amazon ECS Anywhere 

The global architecture is fairly simple. We try to minimize dependencies on external resources outside of the on-premises/edge site. To that end, we put the following components in place: 

  • A Minio S3 storage instance (a single node in this sample architecture, but you can build a local cluster shared between the ECS instances)

  • The HiveMQ Enterprise cluster with the S3 cluster discovery extension, which uses the Minio S3 storage as a repository to store the state and members of the cluster

  • A custom image of HiveMQ Enterprise broker that includes our extensions and configurations

  • Traefik stateless containers to load balance the HTTP and MQTT traffic between nodes.

  • Docker Desktop installed on your local machine to build the custom Docker image

Prerequisites

To start the configuration, we assume you already have the following in place:

  • An AWS account with enough permissions to create resources

  • The AWS CLI

  • An Amazon Elastic Container Service Cluster already configured with at least two external instances. If needed, follow the instructions as described here in the Amazon ECS product documentation.

  • Knowledge of AWS IAM management

Configure Amazon IAM 

To give Traefik the permissions it needs to read information about the services deployed on the ECS external instances, we prepare a permission policy with the following content: 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TraefikECSReadAccess",
            "Effect": "Allow",
            "Action": [
                "ecs:ListClusters",
                "ecs:DescribeClusters",
                "ecs:ListTasks",
                "ecs:DescribeTasks",
                "ecs:DescribeContainerInstances",
                "ecs:DescribeTaskDefinition",
                "ec2:DescribeInstances",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
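If you prefer the CLI over the console, the same policy can be created with `aws iam create-policy`. A sketch, assuming the policy document above is saved locally (the file name is an arbitrary choice):

```shell
# Save the policy document shown above to a local file.
cat > traefik-ecs-read.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TraefikECSReadAccess",
            "Effect": "Allow",
            "Action": [
                "ecs:ListClusters",
                "ecs:DescribeClusters",
                "ecs:ListTasks",
                "ecs:DescribeTasks",
                "ecs:DescribeContainerInstances",
                "ecs:DescribeTaskDefinition",
                "ec2:DescribeInstances",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": ["*"]
        }
    ]
}
EOF

# Create the managed policy, but only when AWS credentials are configured:
if aws sts get-caller-identity >/dev/null 2>&1; then
    aws iam create-policy \
        --policy-name TraefikECSReadAccess \
        --policy-document file://traefik-ecs-read.json
else
    echo "AWS credentials not configured; create the policy in the console instead"
fi
```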

Click Next. On the next creation form page, name the policy and add a description, then click Create policy.

For the last step on Amazon IAM, you create a role in the Console like below: 

Create a new role selecting the Elastic Container Service Task as use case.

Then select the permissions policy previously created, “TraefikECSReadAccess.” 

Once created, note down the ARN of the role so you can update the task deployment template later in the configuration. 

For example, here we have the ARN: arn:aws:iam::5623627372734:role/ECSTasksTraefik
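The role can also be created and wired up from the CLI. This sketch assumes the standard ECS task trust policy and the role/policy names used above; the account ID is a placeholder:

```shell
# Trust policy allowing ECS tasks to assume the role.
cat > ecs-task-trust.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ecs-tasks.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Create the role and attach the read policy (123456789012 is a placeholder account ID):
if aws sts get-caller-identity >/dev/null 2>&1; then
    aws iam create-role \
        --role-name ECSTasksTraefik \
        --assume-role-policy-document file://ecs-task-trust.json
    aws iam attach-role-policy \
        --role-name ECSTasksTraefik \
        --policy-arn arn:aws:iam::123456789012:policy/TraefikECSReadAccess
else
    echo "AWS credentials not configured; create the role in the console instead"
fi
```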

Create Minio Deployment Task Template

On the AWS Console, go to Amazon Elastic Container Service in the Task definitions menu and click on “Create new task definition with JSON”.

Copy and paste the following definition to prepare the deployment task.

{
    "family": "minio",
    "containerDefinitions": [
        {
            "name": "minio",
            "image": "quay.io/minio/minio",
            "cpu": 256,
            "memory": 256,
            "portMappings": [
                {
                    "containerPort": 9000,
                    "hostPort": 9000,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 9001,
                    "hostPort": 9001,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "minio"
            ],
            "command": [
                "server",
                "/tmp"
            ],
            "environment": [
                {
                    "name": "MINIO_ROOT_PASSWORD",
                    "value": "minio123"
                },
                {
                    "name": "MINIO_ADDRESS",
                    "value": ":9000"
                },
                {
                    "name": "MINIO_CONSOLE_ADDRESS",
                    "value": ":9001"
                },
                {
                    "name": "MINIO_ROOT_USER",
                    "value": "minio"
                },
                {
                    "name": "MINIO_VOLUMES",
                    "value": "/tmp"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "hostname": "minio",
            "systemControls": []
        }
    ],
    "networkMode": "host",
    "requiresCompatibilities": [
        "EXTERNAL"
    ]
}

Change the value for the following properties to reflect your configuration: 

  • MINIO_ROOT_PASSWORD

  • MINIO_ROOT_USER

  • MINIO_VOLUMES
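As an alternative to pasting JSON into the console, the task definition can be registered with the AWS CLI. This assumes the JSON above has been saved as minio-task.json (an arbitrary file name):

```shell
# Register the Minio task definition from the saved JSON file,
# but only when AWS credentials are configured:
if aws sts get-caller-identity >/dev/null 2>&1; then
    aws ecs register-task-definition --cli-input-json file://minio-task.json
    REGISTERED=done
else
    echo "AWS credentials not configured; register the task definition in the console instead"
    REGISTERED=skipped
fi
```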

Deploy Minio

Now that our templates are ready, we can create the services on our ECS Cluster.

To do so, go to the AWS Console in Amazon Elastic Container Service, your cluster, and click on Create in the Services section. 

On the creation form, select the External launch type.

In the deployment configuration section, name the service and select the task definition for Minio that we’ve previously created. 

Click Create and wait for the deployment to complete. 
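The console steps above can equally be expressed as a single CLI call. The cluster name here is a placeholder; replace it with yours:

```shell
# Create the Minio service on the ECS cluster with the EXTERNAL launch type,
# but only when AWS credentials are configured:
if aws sts get-caller-identity >/dev/null 2>&1; then
    aws ecs create-service \
        --cluster my-ecs-anywhere-cluster \
        --service-name minio \
        --task-definition minio \
        --launch-type EXTERNAL \
        --desired-count 1
    SERVICE_STEP=done
else
    echo "AWS credentials not configured; create the service in the console instead"
    SERVICE_STEP=skipped
fi
```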

Configure Minio

In order to use the S3 Bucket for our cluster, we need to set up the Minio instance. 

Access the Minio console at http://<service Uri>:9001

Log in with the credentials you defined.

Once in, create a new bucket in the object browser. 

Take note of the bucket name. 

The last step is to configure an access key in the dedicated menu “Access Keys.”

Keep the access key and the secret key in your notes for the configuration of the S3 discovery extension.
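If you prefer to script this step, the bucket can also be created with MinIO's mc client instead of the web console. The endpoint URL is a placeholder; the root credentials are the ones from the task definition:

```shell
MINIO_URL="http://minio.example.local:9000"   # placeholder: your service URI

if command -v mc >/dev/null 2>&1; then
    # Register the deployment under the alias "local" using the root credentials:
    mc alias set local "$MINIO_URL" minio minio123 || echo "MinIO not reachable from here"
    # Create the bucket used by the S3 discovery extension:
    mc mb local/hivemq || echo "bucket creation skipped"
else
    echo "mc not installed; create the bucket in the MinIO console instead"
fi
```

Access keys can still be generated in the console as described above; mc also offers admin commands for this, but the web UI is the simplest path.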

Build a HiveMQ Enterprise MQTT Broker Custom Image

In order to deploy the cluster seamlessly, we need to create a custom image of HiveMQ Enterprise MQTT broker to embed the S3 extension and the configuration files.

We will start by creating our configuration files: 

config.xml 

<?xml version="1.0"?>
<hivemq xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="config.xsd">

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>
    <control-center>
        <listeners>
            <http>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http>
        </listeners>
        <users>
            <user>
                <name>${HIVEMQ_CONTROL_CENTER_USER}</name>
                <password>${HIVEMQ_CONTROL_CENTER_PASSWORD}</password>
            </user>
        </users>
    </control-center>
    <!--REST-API-CONFIGURATION-->

    <cluster>
        <transport>
            <!--TRANSPORT_TYPE-->
        </transport>
        <enabled>true</enabled>
        <discovery>
            <extension/>
        </discovery>
    </cluster>
</hivemq>

s3discovery.properties

credentials-type:access_key
credentials-access-key-id:hivemq
credentials-secret-access-key:HiveMQ@2024
s3-path-style-access:true
s3-bucket-name:hivemq
s3-bucket-region:eu-north-1
s3-endpoint:http://<service Uri>:9000
s3-endpoint-region:eu-north-1
file-prefix:hivemq/cluster/nodes/
file-expiration:360
update-interval:180

In this file, you need to adjust the credentials-access-key-id, credentials-secret-access-key and the s3-endpoint to your S3 storage with the values defined in the Minio S3 configuration.

We will also embed in our image a “startup script” that rewrites the cluster transport section of the broker configuration file according to the HIVEMQ_CLUSTER_TRANSPORT_TYPE environment variable. 

pre-entry.sh

#!/usr/bin/env bash

#Set Cluster Transport Type
if [[ "${HIVEMQ_CLUSTER_TRANSPORT_TYPE}" == "UDP" ]]; then
    # shellcheck disable=SC2016
    sed -i -e 's|<\!--TRANSPORT_TYPE-->|<udp><bind-address>0.0.0.0</bind-address><bind-port>8000</bind-port><!-- disable multicast to avoid accidental cluster forming --><multicast-enabled>false</multicast-enabled></udp>|' /opt/hivemq/conf/config.xml
elif [[ "${HIVEMQ_CLUSTER_TRANSPORT_TYPE}" == "TCP" ]]; then
    # shellcheck disable=SC2016
    sed -i -e 's|<\!--TRANSPORT_TYPE-->|<tcp><bind-address>0.0.0.0</bind-address><bind-port>8000</bind-port></tcp>|' /opt/hivemq/conf/config.xml
fi

We then use the following Dockerfile to build our custom image:

ARG BASEIMAGE=hivemq/hivemq4:4.31.0

FROM ${BASEIMAGE}

ARG S3_CLUSTER_EXTENSION_VERSION=4.2.0

COPY config.xml /opt/hivemq/conf/config.xml
COPY pre-entry.sh /opt/pre-entry.sh
COPY s3discovery.properties /opt/s3discovery.properties

USER root
RUN curl -L https://github.com/hivemq/hivemq-s3-cluster-discovery-extension/releases/download/${S3_CLUSTER_EXTENSION_VERSION}/hivemq-s3-cluster-discovery-extension-${S3_CLUSTER_EXTENSION_VERSION}.zip -o /opt/hivemq/extensions/s3-cluster.zip \
    && apt update \
    && apt install -y unzip \
    && apt clean \
    && unzip /opt/hivemq/extensions/s3-cluster.zip -d /opt/hivemq/extensions \
    && chgrp -R 0 /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension \
    && chgrp -R 0 /opt/s3discovery.properties \
    && chmod -R 770 /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension \
    && chmod -R 770 /opt/s3discovery.properties \
    && mv -f /opt/s3discovery.properties /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension/s3discovery.properties \
    && rm /opt/hivemq/extensions/s3-cluster.zip \
    && chmod +x /opt/pre-entry.sh \
    && ln -s /opt/pre-entry.sh /docker-entrypoint.d/90_Customizations.sh

You can update the HiveMQ image version and the S3 discovery extension version to the latest releases when you build the image.

Build the image with the following command: 

docker build -t hivemq4:4.31.0-custom . --platform linux/amd64

Note: Following the build, you need to push the custom image to a repository that is accessible from your ECS instances. This could be Docker Hub for tests, or a service like Amazon Elastic Container Registry (ECR) in production.
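As a sketch of the push to ECR: the account ID, region, and repository name below are placeholders, and the repository must already exist (for example via aws ecr create-repository):

```shell
REGISTRY="123456789012.dkr.ecr.eu-north-1.amazonaws.com"   # placeholder registry

if aws sts get-caller-identity >/dev/null 2>&1 && command -v docker >/dev/null 2>&1; then
    # Authenticate Docker against ECR:
    aws ecr get-login-password --region eu-north-1 | \
        docker login --username AWS --password-stdin "$REGISTRY"
    # Tag and push the image built above:
    docker tag hivemq4:4.31.0-custom "$REGISTRY/hivemq4:4.31.0-custom"
    docker push "$REGISTRY/hivemq4:4.31.0-custom"
else
    echo "aws/docker not available here; push from a machine with both configured"
fi
```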

Create HiveMQ Deployment Task Template

On the AWS Console, go to Amazon Elastic Container Service in the Task definitions menu and click on “Create new task definition with JSON”.

Copy and paste the following definition to prepare the deployment task.

{
    "family": "hivemq",
    "containerDefinitions": [
        {
            "name": "hivemq",
            "image": "link to your image repository",
            "cpu": 512,
            "memory": 512,
            "portMappings": [
                {
                    "containerPort": 8080,
                    "hostPort": 8080,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 1883,
                    "hostPort": 1883,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 8000,
                    "hostPort": 8000,
                    "protocol": "udp"
                },
                {
                    "containerPort": 8000,
                    "hostPort": 8000,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 7800,
                    "hostPort": 7800,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "/opt/docker-entrypoint.sh"
            ],
            "command": [
                "/opt/hivemq/bin/run.sh"
            ],
            "environment": [
                {
                    "name": "HIVEMQ_BIND_ADDRESS",
                    "value": "0.0.0.0"
                },
                {
                    "name": "HIVEMQ_CONTROL_CENTER_PASSWORD",
                    "value": "a68fc32fc49fc4d04c63724a1f6d0c90442209c46dba6975774cde5e5149caf8"
                },
                {
                    "name": "HIVEMQ_CLUSTER_PORT",
                    "value": "8000"
                },
                {
                    "name": "HIVEMQ_CLUSTER_TRANSPORT_TYPE",
                    "value": "TCP"
                },
                {
                    "name": "HIVEMQ_CONTROL_CENTER_USER",
                    "value": "admin"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "hostname": "hivemq-cluster",
            "dockerLabels": {
                "traefik.http.services.hivemq-cc.loadbalancer.server.port": "8080",
                "traefik.tcp.routers.router-broker-mqtt.entrypoints": "mqtt",
                "traefik.tcp.routers.router-broker-mqtt.service": "service-broker-mqtt",
                "traefik.http.routers.hivemq-cc.entrypoints": "web",
                "traefik.tcp.services.service-broker-mqtt.loadbalancer.server.port": "1883",
                "traefik.http.routers.hivemq-cc.rule": "Host(`cc.hivemq.local`) && PathPrefix(`/`)",
                "traefik.tcp.routers.router-broker-mqtt.rule": "HostSNI(`*`)",
                "traefik.http.services.hivemq-cc.loadBalancer.sticky.cookie.name": "hivemqcc",
                "traefik.http.services.hivemq-cc.loadBalancer.sticky.cookie": "true"
            },
            "systemControls": []
        }
    ],
    "networkMode": "host",
    "requiresCompatibilities": [
        "EXTERNAL"
    ]
}

Change the following values to reflect your configuration: 

  • Image link

  • HIVEMQ_CONTROL_CENTER_PASSWORD

  • HIVEMQ_CONTROL_CENTER_USER

In the Docker labels, you will need to align the names with your desired Traefik configuration. In our case, the Traefik configuration has a TCP entrypoint called mqtt and an HTTP entrypoint called web.

On the container deployment, we declare an HTTP service called hivemq-cc with the following properties:

"traefik.http.services.hivemq-cc.loadbalancer.server.port": "8080", 

"traefik.http.routers.hivemq-cc.entrypoints": "web",

"traefik.http.routers.hivemq-cc.rule": "Host(`cc.hivemq.local`) && PathPrefix(`/`)",

"traefik.http.services.hivemq-cc.loadBalancer.sticky.cookie.name": "hivemqcc",

"traefik.http.services.hivemq-cc.loadBalancer.sticky.cookie": "true"

We declare a second service called router-broker-mqtt relying on TCP with the following properties:

 "traefik.tcp.routers.router-broker-mqtt.entrypoints": "mqtt",

"traefik.tcp.routers.router-broker-mqtt.service": "service-broker-mqtt",

"traefik.tcp.services.service-broker-mqtt.loadbalancer.server.port": "1883",

"traefik.tcp.routers.router-broker-mqtt.rule": "HostSNI(`*`)"

Deploy HiveMQ

Go to the AWS Console in Amazon Elastic Container Service, your cluster, and click on Create in the Services section. 

On the creation form, select the External launch type.

In the deployment configuration section, name the service, select the task definition for HiveMQ, and define the number of tasks you want to deploy. 

Click Create and wait for the deployment to complete. 

If everything went well, you should be able to access the Control Center via http://<ip of local ECS instance where HiveMQ is deployed>:8080 and see two nodes.

Let’s move to the final step: deploying Traefik to balance the HTTP traffic for the Control Center as well as the MQTT traffic.
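Before adding the load balancer, you can optionally smoke-test one broker node directly on port 1883 with an MQTT client such as mosquitto_pub (the IP below is a placeholder):

```shell
BROKER="10.0.0.11"   # placeholder: IP of an ECS external instance running HiveMQ

if command -v mosquitto_pub >/dev/null 2>&1; then
    # Publish a test message straight to one HiveMQ node:
    mosquitto_pub -h "$BROKER" -p 1883 -t test/topic -m "hello from ecs-anywhere" \
        || echo "broker not reachable from this machine"
else
    echo "mosquitto_pub not installed; skip the smoke test"
fi
```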

Create Traefik Deployment Task Template

On the AWS Console, go to Amazon Elastic Container Service in the Task definitions menu and click on “Create new task definition with JSON”.

Copy and paste the following definition to prepare the deployment task.

{
    "family": "LoadBalancer",
    "containerDefinitions": [
        {
            "name": "traefik",
            "image": "traefik:latest",
            "cpu": 0,
            "portMappings": [
                {
                    "containerPort": 8880,
                    "hostPort": 8880,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 8080,
                    "hostPort": 8888,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 1884,
                    "hostPort": 1884,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "command": [
                "--api.dashboard=true",
                "--api.insecure=true",
                "--entryPoints.web.address=:8880",
                "--entryPoints.mqtt.address=:1884",
                "--accesslog=true",
                "--providers.ecs.ecsAnywhere=true",
                "--providers.ecs.region=ca-central-1",
                "--providers.ecs.autoDiscoverClusters=true",
                "--providers.ecs.exposedByDefault=true"
            ],
            "environment": [],
            "mountPoints": [],
            "volumesFrom": [],
            "systemControls": []
        }
    ],
    "taskRoleArn": "arn:aws:iam::5623627372734:role/ECSTasksTraefik",
    "requiresCompatibilities": [
        "EXTERNAL"
    ],
    "cpu": "256",
    "memory": "128"
}

Change the following values to reflect your configuration: 

  • --providers.ecs.region

  • taskRoleArn (This is where you paste the value from the IAM configuration done earlier)

Note: The configured MQTT port is 1884 because port 1883 on the host (ECS external instance) is already used by the HiveMQ nodes.

Deploy Traefik

Go to the AWS Console in Amazon Elastic Container Service, your cluster, and click on Create in the Services section. 

On the creation form, select the External launch type.

In the deployment configuration section, name the service, select the task definition for Traefik (LoadBalancer), and define the number of tasks you want to deploy. 

Click Create and wait for the deployment to complete. 

Once the task is running, you should be able to access the Traefik dashboard via http://<ip of ECS instance where Traefik is deployed>:8888/dashboard/#.

You can see that the services defined by Docker labels are automatically detected.

The same happens for TCP services for MQTT connections. 

Our backend is also dynamically detected. 
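To verify both entrypoints end to end, you can hit Traefik with curl (the HTTP router matches on the Host header) and with an MQTT client on port 1884. The host IP is a placeholder:

```shell
TRAEFIK_HOST="10.0.0.11"   # placeholder: IP of the ECS instance running Traefik

if command -v curl >/dev/null 2>&1; then
    # The hivemq-cc router matches Host(`cc.hivemq.local`), so send that header:
    curl -s -o /dev/null -w "%{http_code}\n" \
        -H "Host: cc.hivemq.local" "http://$TRAEFIK_HOST:8880/" \
        || echo "Traefik web entrypoint not reachable from this machine"
fi

if command -v mosquitto_sub >/dev/null 2>&1; then
    # Subscribe through the mqtt entrypoint (1884) and wait briefly for one message:
    timeout 5 mosquitto_sub -h "$TRAEFIK_HOST" -p 1884 -t 'test/#' -C 1 \
        || echo "no MQTT message received (expected if nothing is publishing)"
fi
```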

Wrap Up

Deploying the HiveMQ Enterprise MQTT broker with Amazon Elastic Container Service Anywhere (ECS Anywhere) empowers your IoT infrastructure with unmatched flexibility, scalability, and control. By following the steps outlined in this guide, you've taken a significant leap toward optimizing your MQTT deployments across cloud, on-premises, and edge environments. 

With ECS Anywhere, you can now manage your HiveMQ brokers consistently, regardless of location, ensuring robust performance and reliability for your IoT applications. This deployment approach not only simplifies operations but also enhances the ability to respond to evolving business needs with agility and precision.

As always in our labs, configurations are simplified to get straight to the point. You'll notice, for example, that Minio's deployment can be enhanced to automatically define a default bucket and access keys. You can also customize your HiveMQ image by integrating other extensions and configurations to connect even more services.

If you're starting your IoT/IIoT project with MQTT and Amazon Web Services, consider giving HiveMQ a try or request a demo.

Anthony Olazabal

Anthony is part of the Solutions Engineering team at HiveMQ. He is a technology enthusiast with many years of experience working in infrastructures and development around Azure cloud architectures. His expertise extends to development and cloud technologies, with a particular focus on IaaS, PaaS, and SaaS services, and he enjoys writing about MQTT and IoT.
