Deploying HiveMQ Cluster With Docker
In one of our previous blogs, Deploying HiveMQ With Docker, we discussed how HiveMQ can be used in conjunction with Docker for creating maintainable, scalable and easy-to-use MQTT server deployments. In this blog post, we explore how to create a maintainable and scalable deployment of a HiveMQ cluster on Docker with minimal configuration effort.
Cluster Configuration
To build a HiveMQ cluster, multiple containers running HiveMQ will be started on a Docker host. Thanks to HiveMQ’s dynamic multicast discovery, these containers can all be started from the same Docker image without even changing the HiveMQ configuration. This allows you to create and start HiveMQ cluster nodes in a matter of seconds.
When building a HiveMQ cluster on Docker, the recommended transport for HiveMQ’s cluster communication is UDP and the recommended discovery mechanism is multicast. This discovery utilizes an IP multicast address to find other HiveMQ cluster nodes in the same network. Since it is a dynamic discovery, additional nodes can be added at a later point without any modification to the existing configuration.
You can take a look at the HiveMQ documentation for other possible methods of cluster node discovery, or even create your own with HiveMQ’s extensible plugin system.
Cluster Setup
For two (or more) containers to form a cluster, the configuration files need some small modifications: the cluster needs to be enabled, and the bind ports for UDP transport and TCP failure detection need to be set in HiveMQ’s XML configuration file. These ports also need to be exposed in the Dockerfile.
Dockerfile (replace YOUR-HIVEMQ-DOWNLOAD-LINK with your personal download link, which you get from the HiveMQ Download page):
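Since the original file listing is not reproduced here, the following is a minimal sketch of what such a Dockerfile could look like. The base image, the unzip-based installation, and the exposed ports (1883 for MQTT, 8000/udp for the cluster UDP transport, 8001 for TCP failure detection) are illustrative assumptions, not values taken from the original post.

```dockerfile
# Minimal sketch of a HiveMQ cluster node image (assumed base image, layout and ports)
FROM eclipse-temurin:11-jre

RUN apt-get update && apt-get install -y wget unzip && rm -rf /var/lib/apt/lists/*

# Download and unpack HiveMQ (replace the placeholder with your personal download link)
RUN wget -O /tmp/hivemq.zip "YOUR-HIVEMQ-DOWNLOAD-LINK" \
    && unzip /tmp/hivemq.zip -d /opt \
    && mv /opt/hivemq-* /opt/hivemq

# Cluster-enabled configuration (see config.xml below)
COPY config.xml /opt/hivemq/conf/config.xml

# MQTT listener, cluster UDP transport, and TCP failure detection (assumed ports)
EXPOSE 1883 8000/udp 8001

WORKDIR /opt/hivemq
ENTRYPOINT ["/opt/hivemq/bin/run.sh"]
```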
config.xml
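The cluster section below is again only a sketch showing the general shape of such a configuration: the cluster enabled, a UDP bind port for the transport, multicast discovery, and a TCP bind port for failure detection. The exact element names and defaults depend on your HiveMQ version, so check the HiveMQ clustering documentation rather than copying this verbatim; the port numbers 8000 and 8001 are assumptions matching the Dockerfile sketch above.

```xml
<?xml version="1.0"?>
<hivemq>
    <listeners>
        <tcp-listener>
            <!-- MQTT listener inside the container -->
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <!-- UDP transport with an explicit bind port (assumed: 8000) -->
            <udp>
                <bind-port>8000</bind-port>
                <multicast-enabled>true</multicast-enabled>
            </udp>
        </transport>
        <discovery>
            <!-- dynamic multicast discovery -->
            <multicast/>
        </discovery>
        <failure-detection>
            <!-- TCP health check on a fixed bind port (assumed: 8001) -->
            <tcp-health-check>
                <enabled>true</enabled>
                <bind-port>8001</bind-port>
            </tcp-health-check>
        </failure-detection>
    </cluster>
</hivemq>
```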
Then build and run the image with:
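For example (the image name, container names, and volume paths are assumptions; the host ports 11883 and 11884 match the connection details listed below):

```shell
# Build the image once
docker build -t hivemq-cluster .

# Node 1: MQTT mapped to host port 11883, with its own data folder
docker run -d --name hivemq-node1 -p 11883:1883 \
    -v $(pwd)/node1/data:/opt/hivemq/data hivemq-cluster

# Node 2: different host port and a different data folder
docker run -d --name hivemq-node2 -p 11884:1883 \
    -v $(pwd)/node2/data:/opt/hivemq/data hivemq-cluster
```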
Note the different ports and the different folders in the run commands for the containers.
After starting the containers, you can connect to your nodes with the following connection details:
node 1:
Host: IP/hostname of your Docker host (e.g. localhost)
Port: 11883
node 2:
Host: IP/hostname of your Docker host (e.g. localhost)
Port: 11884
You can also check the logs of the containers to see that they formed a cluster, using the following commands:
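With the container names assumed above, this could look like the following (the exact wording of the log line varies by HiveMQ version, but it should report a cluster of two nodes):

```shell
docker logs hivemq-node1 | grep -i cluster
docker logs hivemq-node2 | grep -i cluster
```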
Adding more nodes
This cluster now consists of two nodes. If you want more nodes in your HiveMQ cluster, you can add additional ones analogous to the already running nodes. Just remember to configure different ports and volume folders for each node.
Because dynamic multicast discovery is used, the nodes will discover each other without any additional configuration, which allows you to scale your cluster as your demand grows.
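For example, a third node could be started like this (again, the container name, host port 11885, and volume folder are illustrative assumptions in line with the run commands above):

```shell
docker run -d --name hivemq-node3 -p 11885:1883 \
    -v $(pwd)/node3/data:/opt/hivemq/data hivemq-cluster
```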
Networking
In this example, the default networking configuration from Docker is used. You could also configure more complex networking between the containers. Even spanning networks across multiple Docker hosts is an option, and it will be needed if you want to form a HiveMQ cluster over multiple Docker hosts.
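As a sketch, a user-defined bridge network could be created and the containers attached to it; the network name is an assumption, and keep in mind that multicast discovery only works if the chosen network actually forwards multicast traffic between the containers.

```shell
# Create a user-defined bridge network and start the nodes on it
docker network create hivemq-net
docker run -d --name hivemq-node1 --network hivemq-net -p 11883:1883 hivemq-cluster
docker run -d --name hivemq-node2 --network hivemq-net -p 11884:1883 hivemq-cluster
```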
You can read more about Docker container networking in the Docker Networking Documentation, or use third-party tools like Pipework or Weave to set up even more complex networking scenarios.
Advanced
Here we started each container manually. If you want to configure multiple containers at once, for example a whole HiveMQ cluster, you can take a look at Docker Compose.
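A docker-compose.yml for the two-node setup could look roughly like this (the service names, volume paths, and file version are assumptions, and multicast discovery must work on the network Compose creates for the services):

```yaml
version: "3"
services:
  hivemq-node1:
    build: .
    ports:
      - "11883:1883"   # MQTT on host port 11883
    volumes:
      - ./node1/data:/opt/hivemq/data
  hivemq-node2:
    build: .
    ports:
      - "11884:1883"   # MQTT on host port 11884
    volumes:
      - ./node2/data:/opt/hivemq/data
```

Starting the whole cluster is then a single docker-compose up -d.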
There are also management solutions built on top of Docker that allow you to operate and maintain Docker containers at a larger scale, for example Docker Swarm, Kubernetes, or D2IQ DC/OS.
Conclusion
Docker and HiveMQ are a perfect fit when it comes to deploying and maintaining a HiveMQ cluster. This blog post explored how easy it is to set up a full-fledged containerized HiveMQ cluster deployment for development, testing, integration and production purposes.
HiveMQ Team
The HiveMQ team loves writing about MQTT, Sparkplug, Industrial IoT, protocols, how to deploy our platform, and more. We focus on industries ranging from energy, to transportation and logistics, to automotive manufacturing. Our experts are here to help, contact us with any questions.