
MQTT Standards for Integrating Edge AI Systems

60 minutes


Webinar Overview

As industrial companies adopt MQTT as a key part of their digital transformation strategy, many are looking to shift AI data processing closer to production systems on the factory floor — an approach known as Edge AI. Yet, until now, there has been no clear standard for integrating AI-generated predictions and insights from data sources into MQTT-backed environments.

In this webinar, we'll present standards for the reliable and adaptable integration of Edge AI with MQTT frameworks, covering both flat MQTT and the Sparkplug B specification. Join Kudzai Manditereza and Magnus McCune of HiveMQ, along with Marc Pous of balena, for this forward-looking webinar.

  • Learn why the MQTT standard for Edge AI was developed, the common patterns, and how they work

  • View a live demonstration of the “unstructured data” and “fully integrated” patterns for Edge AI data integration with MQTT

Don't miss this opportunity to gain valuable insights into optimizing your Edge AI deployments with MQTT standards.

Key Takeaways

  • The webinar covered the work that HiveMQ conducted on an open-source project aimed at providing interoperability and flexibility when integrating Edge AI systems using MQTT. Key motivations included enabling interoperability across siloed systems, cost savings by avoiding custom data formats, and scalability to plug systems in/out easily.

  • Three common patterns for integrating Edge AI with MQTT were identified:

    • Fully-Integrated Pattern: AI model uses MQTT for input data and publishing inferences

    • Unstructured Data Pattern: AI model consumes input data directly from sensors/protocols, publishes inferences to MQTT 

    • Ambassador Pattern: AI model consumes MQTT input data, publishes inferences using other protocols

  • Guidelines were defined for MQTT topic namespaces to separate raw data, inferences, and insights

    • Raw Data Namespace for sensor data inputs

    • Inference Namespace for AI model predictions 

    • Insight Namespace for high-level business insights derived from inferences

  • Demo 1 (Marc Pous) showed the Fully-Integrated Pattern:

    • Used balena stack to manage IoT devices 

    • Raspberry Pis sent camera images to MQTT broker

    • NVIDIA Jetson device subscribed to images, ran object detection, and published inferences back to MQTT

  • Demo 2 (Magnus McCune) showed the Unstructured Data Pattern: 

    • Low-power camera directly connected to ESP32 microcontroller running face detection model

    • Another ESP32 acted as MQTT client, publishing face detection inferences to broker

    • Promoted privacy by keeping image data contained on device

  • The standard and open-source code are available on GitHub for community contribution.

  • Q&A covered topics like MQTT's actor model support, using MQTT on single nodes/devices, and source code availability.

Transcript

Introduction

Kudzai Manditereza: 00:00:04.501 Welcome, everyone, to our webinar today on MQTT Standards for Integrating Edge AI Systems. So this is a result of the work that HiveMQ conducted on an open-source project with our partners from Oshkosh Corporation and Modzy. And as you can see on the call, we have also been joined by our partners from balena, who are also contributing to the project. And it's open to the rest of the community, to everyone who wants to contribute to this project. So we're going to speak more about what that is. So let's jump right into it. Okay. So first of all, I'll do a quick introduction to our panelists today. So my name is Kudzai Manditereza. I'm a developer advocate here at HiveMQ. I'm in the community and advocacy team. So I do all the evangelism around smart manufacturing.

Kudzai Manditereza: 00:01:05.288 And joining me on the call is Magnus McCune, who's a senior IoT solutions architect here at HiveMQ. He's part of the professional services team. And then we've also got on the call, Marc Pous, who is the IoT giant, both literally and metaphorically, and is a developer advocate at balena. So thank you, gentlemen, for joining me today. Okay. So we're going to start with our agenda. So I'm going to kick off things here with an introduction to the standard, talk about what the standard is composed of, talk about the decisions that we took and why we took those decisions to really build up those standards. And then after I'm done, we're going to hand over to Magnus and Marc. So actually today, we've got two demos that you're going to witness.

MQTT Standards for Integrating Edge AI Systems

Kudzai Manditereza: 00:02:05.788 One of them implementing the fully integrated pattern, and the other one, the unstructured data pattern. And then after that, we're going to jump into Q&A. Okay. So just a little bit about HiveMQ. So HiveMQ is a company based out of Munich here in Germany. And we're providers of the enterprise MQTT platform. So as you can see there on the screen, we've got quite a broad range of product offerings. Starting at the edge, we've got HiveMQ Edge, which is open-source connectivity software that we released last year, which also embeds an MQTT broker. And we also provide some MQTT client libraries, including Java, C#, and the like. And then at the center here, you can see this is where we've got the core of our platform, which includes an MQTT broker platform. And we've got also Data Hub, which is a policy engine for MQTT data on the broker.

Kudzai Manditereza: 00:03:10.183 And we've also got an extension ecosystem. And we do provide SDKs for building your own extensions. And we also provide connectors, in the form of extensions, to all the different IT systems, streaming platforms, and databases, and also some security services. And of course, we've got a fully managed cloud service, and we've also got options for self-managed as well. So yeah, we serve customers in different industries, which include the connected car and mobility industry, manufacturing, and industrial automation. And we also have customers in the transportation and logistics and connected assets industries. Okay. So let's jump into the topic for today.

Why Standards for MQTT in Edge AI Are Critical

Interoperability and Flexibility

Kudzai Manditereza: 00:04:02.649 So why did we decide to put together this “MQTT Standards for Integrating Edge AI Systems”? So I mean, the big motivation really here was about interoperability and flexibility. So if you are in the industrial IoT space, or in the IoT space in general, you'd know that interoperability is a big topic, right, because currently, we still have systems where data exists in different silos and systems are not able to communicate with each other. So that is kind of the biggest motivation: to provide that interoperability and flexibility between different Edge AI systems, for all those that are deploying Edge AI systems on their factory floors or across different industrial systems. There is still that lack of compatibility between different data formats, which, obviously, is not ideal. And then there's also the idea of cost savings.

Cost Savings and Accelerated Deployment

Kudzai Manditereza: 00:05:04.080 Again, this also speaks to the fact that, currently, you still need to sort of go into that research and development mode to kind of come up with different data formats that you're going to use for exchanging information within your edge AI ecosystem, which, obviously, is time-consuming. So the idea was to really kind of lay down this framework that you can build on top of and save all this time.

Scalability and Community Engagement

Kudzai Manditereza: 00:05:32.075: And then also, there is the issue of scalability. So obviously, as you build up and integrate more Edge AI components into your infrastructure, you want to be able to plug things in and out, right, in a way that is scalable. So you don't want to, obviously, build custom connectors for each Edge AI component that you bring into your infrastructure. So that was also a big motivation for that. And also, we just wanted to put it out there to encourage the community to be innovative around how we're really integrating MQTT into Edge AI systems.

Kudzai Manditereza: 00:06:12.763 So again, as I mentioned, this is actually hosted on the Modzy GitHub repo. And for all those who are interested in digging more into this standard, please do check that out. And also, I think Erin will provide a link to the white paper where we broke this down into exactly what it is composed of. So feel free to check that out. Just to give a shout-out to the contributing team on this standard: we've got Seth Clark with Modzy, and we've got Bradley and Nathan, both from Modzy. I think Nathan has moved on, and a couple of them have moved as well, so this is out of date. And we've also got Joshua and Brian from Oshkosh Corporation. So thanks to the team.

Common Patterns for Edge AI on MQTT

Kudzai Manditereza: 00:07:02.317 Okay. So now let's jump into the meat of the standard. So when we first took a look at this ecosystem, what we did is we identified three different patterns for how you could possibly integrate MQTT into your Edge AI infrastructure. So the first pattern that we identified was a situation whereby an AI or ML model is using MQTT both for input data and also for output data. And we call that the Fully-Integrated Pattern. So we're going to go into detail about all these different patterns shortly. And then the other pattern that we identified was a situation where you've got a model that is consuming input data using other protocols. So if you're in the industrial domain, it could be OPC UA or Modbus, or even just a direct connection to a sensor. Right? And then we call this the Unstructured Data Pattern.

The Fully-Integrated Pattern

Kudzai Manditereza: 00:08:09.875 And then the other pattern was a situation whereby the AI model is ingesting its input data using MQTT but outputting its inferences to other systems using other protocols. And then we kind of called this the Ambassador Pattern. And then, obviously, for this other quadrant, we don't have anything, because there's no MQTT involved at all. So let's take a quick look at the first pattern, which is the Fully-Integrated Pattern. So basically what you're looking at here is a diagram that shows this scenario where, for example, you've got some devices or systems that are generating video, audio, and machine data. And then these are data inputs into an MQTT broker.

Kudzai Manditereza: 00:09:07.105 And then you've got an edge device sitting somewhere in your factory that is running perhaps one, two, or more models, or even just one. So these are the models that are subscribing to the broker to ingest all of this raw input from all these different sensors and video equipment, and then doing the processing. And then after doing the processing, it will publish the inferences and insights back into the MQTT broker. And then these could also be consumed by, say, some predictive maintenance applications that, again, will be able to publish back whatever insights they actually have there. And you've also got some feedback control systems that maybe are interested in finding out what the predictions are, or what insights have been generated by that AI model. So they also subscribe to the MQTT infrastructure.

Kudzai Manditereza: 00:10:07.878 And then also, maybe if it's just a matter of doing some time series classification, you also ingest data from the MQTT broker. Now, there are situations where, say, your video or audio input is such that you have no control over the payload or the MQTT topic that is being generated from that system. So what we did is we also made provisions for a data transformation layer that will consume that information, contextualize it and normalize it, and then republish it back into the MQTT broker for the AI and ML model to consume and then perform its inference. So this is the first pattern that we identified. And Marc is actually going to be doing a demonstration of exactly what that looks like in practice.
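For readers who want to see the shape of this pattern in code, here is a minimal sketch, assuming paho-mqtt 2.x. The topic names and the predict() stub are illustrative stand-ins, not taken from the spec:

```python
# Fully-Integrated Pattern sketch: model input arrives over MQTT,
# and the inference is published back over MQTT.
# Assumes paho-mqtt >= 2.0; topics and predict() are illustrative only.
import json
import paho.mqtt.client as mqtt

RAW_TOPIC = "site/area/line/milling_machine/raw/#"
INFERENCE_TOPIC = "site/area/line/edge-device-01/failure_model/001/inference"

def predict(raw_payload: bytes) -> dict:
    # Stand-in for a real model call (e.g., a PyTorch forward pass).
    return {"classPredictions": [{"class": "failure", "score": 0.12},
                                 {"class": "no_failure", "score": 0.88}]}

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe(RAW_TOPIC)        # input data arrives over MQTT

def on_message(client, userdata, msg):
    inference = predict(msg.payload)   # run the model on the raw input
    client.publish(INFERENCE_TOPIC, json.dumps(inference), qos=1)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```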

The Unstructured Data Pattern

Kudzai Manditereza: 00:11:06.139 Now, the next pattern is the Unstructured Data Pattern. So this is a scenario where you've got, say, an edge device, again, that is running your AI and ML models, that is sitting, say, on the shop floor, and it is connected directly to all the different machines and video and audio inputs using some protocols that are not MQTT. So again, this could be industrial protocols or whatever protocol they're using to communicate directly with the Edge AI device where the models sit. So the models will consume this information through these different protocols, do the data processing and insight generation, and then publish that information into the MQTT broker. And then on the other end, this is where different systems will subscribe to the MQTT broker to either perform some predictions, or implement some feedback mechanisms, or some classifications.
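A comparable sketch of this pattern, again assuming paho-mqtt 2.x, with read_sensor() standing in for whatever non-MQTT input path you have (Modbus, I2C, a camera driver); only the inference ever reaches the broker:

```python
# Unstructured Data Pattern sketch: non-MQTT in, MQTT out.
# Assumes paho-mqtt >= 2.0; topic, read_sensor(), and predict() are illustrative.
import json
import time
import paho.mqtt.client as mqtt

INFERENCE_TOPIC = "site/area/line/cell/person_detector/face_model/001/inference"

def read_sensor() -> bytes:
    # Stand-in for a direct, non-MQTT input: a Modbus register read,
    # an I2C transfer, or a camera frame grab.
    return b"\x00\x01"

def predict(sample: bytes) -> dict:
    # Stand-in for the on-device model; raw data never leaves this process.
    return {"classPredictions": [{"class": "person", "score": 0.97}]}

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("localhost", 1883)
client.loop_start()                    # network loop in a background thread

while True:
    client.publish(INFERENCE_TOPIC, json.dumps(predict(read_sensor())), qos=1)
    time.sleep(1.0)                    # publish one inference per second
```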

The Ambassador Pattern

Kudzai Manditereza: 00:12:09.248 So this is the Unstructured Data Pattern, simple and straightforward. And Magnus is actually going to demonstrate what that looks like in practice. And then again, as I mentioned, we've also got the Ambassador Pattern. So basically here, it was a realization that you'd have a situation whereby, say, you've got modern IoT systems, systems that are capable of generating MQTT data, like video and audio or machine data, publishing that into an MQTT broker. But for some reason, maybe for latency reasons, you want to be able to, say, pass this information on to a feedback control mechanism through a Fieldbus protocol or a dedicated protocol.

Kudzai Manditereza: 00:13:01.417 So this is where we identified the Ambassador Pattern. Or it may be that the systems don't really understand MQTT, so the edge device needs to communicate with these different systems using their custom protocols. So this is the Ambassador Pattern, but we're not going to demonstrate this today. Okay. So now let's jump into the guidelines that we actually came up with for integrating these different systems into MQTT. So the first issue that we identified that needs to be solved, or standardized on, is this idea of creating different namespaces. So essentially, these are different buckets or pockets of information that it makes sense to have in an Edge AI ecosystem that is utilizing MQTT.

Topic Namespaces for Edge AI

Raw Data Namespace

Kudzai Manditereza: 00:14:00.517 So first of all, we identified that for the raw sensor data, so for example, data that is coming straight out of a video-generating system or, say, from a machine that is communicating using a custom protocol — this is the raw data that is coming into the MQTT network. So it needs to be stored in a namespace where it makes sense for that information to live, so that all the other participating systems know where to find it. So we call this the Raw Data Namespace.

Inference Namespace

Kudzai Manditereza: 00:14:33.807 Now, once a system consumes this raw data, this raw video input, it performs an inference. So say, maybe it's a video feed where it identifies how many people are actually in the frame, so it produces an inference to say, "We've got 10 people actually working in a particular cell." So we need to have a namespace for holding this inference data. Right? So this is what we coined the Inference Namespace.

Insight Namespace

Kudzai Manditereza: 00:15:06.082 And then we also identified a third namespace, which is the Insight Namespace. So again, using the example of an inference where we're saying we've identified 5 or so people working in a work cell. If the situation demands that we only have 1 worker in a cell at any particular time, the insight there would be that we've got more people than needed working in that cell. So maybe you could raise an alarm, or whatever the case may be. So these are the different namespaces that we identified to hold all the different types of information that are going to be exchanged between the different components of an Edge AI ecosystem. Now, let's take a look at an example of what a Raw Data Namespace looks like. So first of all, as you can see, we've got a topic structure, which is a basic MQTT topic structure.

Example of Topic Namespaces

Kudzai Manditereza: 00:16:07.437 But what we have within that structure is the name of the actual system that is generating that data. So in this case, we've got a milling machine. So this could be maybe a video camera, or whatever the case may be. And then we've got a raw namespace after that, after the particular machine that is generating that data. So in this case, this could be information like your air temperature, process temperature, torque, or tool wear; this is the information that could be coming out of that milling machine that is being published directly into the Raw Data Namespace. Now, this is what an Inference Namespace looks like. Again, this is your basic MQTT topic, but what you see here is that we've got the name of the machine that is actually generating the data inputs.

Kudzai Manditereza: 00:17:06.103 And then after that, we've actually got the model name, right, the model that is performing the inference. And then we've got the model version, because we identified the need to actually have some versioning aspect to it. So if you're going to be updating your models, you want to know which version of the model actually produced a specific prediction. And then after that, we've got the actual Inference Namespace. So again, as an example here, this could be the likelihood of a failure of that milling machine and the likelihood of a non-failure, with all the different confidence scores. You'll see later on what that looks like in the actual payload structure. And then we've got the Insight Namespace example here. So you'll notice that this looks like the Inference Namespace. The only difference is that this actually goes into the Insight Namespace.

Kudzai Manditereza: 00:18:06.655 So we've got the machine that is producing the data inputs. We've got the name of the model and the version of the model that is performing that inference. And then we've got the actual insight, which could be coming not necessarily from the AI system itself; it could be from some other application that is consuming these AI inferences and then actually outputting some business insights. Right? So this insight, for example, could be that maintenance is actually required for this particular machine. So maybe a score for predicted failure was high enough to actually consider the maintenance requirement for this particular piece of equipment. So what we did is we recognized that within MQTT, we obviously have got plain MQTT, but we also have MQTT Sparkplug.

Flat MQTT Topic Structure

Kudzai Manditereza: 00:19:06.809 So we also found Sparkplug quite attractive in this scenario, because we generally are working in the edge domain. So this is not so much about enterprise integration. Within the edge domain, MQTT Sparkplug can really prove to be powerful as far as all the different discovery mechanisms and device management that it offers. So what we did is we actually created standards for both flat MQTT and MQTT Sparkplug. So what you're looking at right now is a standardized topic structure for plain MQTT. Right? So the first element that you see there is the [Customized MQTT topic structure]. This is basically where we said this could literally be anything. But the recommendation obviously is the ISA-95 hierarchy, which is your enterprise, site, area, line, and work cell. Right?

Kudzai Manditereza: 00:20:10.249 But obviously understanding that different organizations have got different ways of structuring or really organizing their equipment. So this, really, we left it to be a customized MQTT element of the topic structure. But what we did standardize on here is that after that, we actually need to put the Edge_DeviceID. So this is the device that actually holds your AI and machine learning models. And then after that, you've got your model_name, model_version, and inference. I think we've already spoken about that. And then when it comes to the MQTT Sparkplug topic structure, so again, so we've got the namespace, which is the Sparkplug B root namespace. And then for the group_id, this is also where we recommended to sort of put a concatenation of your ISA-95 format.

MQTT Sparkplug Topic Structure

Kudzai Manditereza: 00:21:06.795 So again, this could be where you also put your custom organization hierarchy. And then for the message_type: Edge AI data is going to be published using the DDATA message type for MQTT Sparkplug. But again, we also made provisions to support different message types, like birth messages, death messages, and the like. And then the edge_node_id is the name of the actual device that is generating all these data inputs. So it could be the actual IP camera, or the actual machine, or an Edge server. And then the device_id is the identity of the AI model that is actually generating the inference. Right? So you will notice that, obviously, here, we're missing the model_version and the model_name and the identity of that actual model.

Kudzai Manditereza: 00:22:07.545 So that is actually taken care of within the payload structure of Sparkplug itself, as you will see later on. This is basically where we're fitting the rest of the metrics, like the model identifiers. Okay. Just to kind of give you a quick snapshot of what that looks like within a Unified Namespace. So this is basically where you would have your organizational structure. And then you've got your Edge AI hardware, which holds your actual AI or ML models. And then from there you've got your Raw Data Namespace and the different Inference and Insight Namespaces. Okay. So let's have a look at the guidelines that we developed for actually structuring the payload. So basically, here we actually realized that we need to cater for two different types of payload structure: one for predictions and the other one for the insights.

Guidelines for MQTT Payload Structure Design

Kudzai Manditereza: 00:23:09.366 So the predictions one is basically for trying to determine what a future event is going to — the future state is going to be like. And then the structured insight is just to kind of give a classification whether it's an object detection or whatever the case may be. And then when it comes to the actual formatting, so we made provisions for three types of formatting. So we've got Protobuf, JSON, and also XML. But the standard itself doesn't provide examples for XML encoding. We do provide examples for Protobuf and JSON. Right? But primarily because, again, we're working in an Edge AI environment where sometimes this is really about making predictions and then being able to actually perform some corrective measures in real time. Right?

Kudzai Manditereza: 00:24:07.236 So if you're in a situation where you're controlling a robot, or you're feeding these predictions back to your robotic arm, you really want some high performance on the wire. So this is where we considered Protobuf to be the primary way of formatting this, but we also made provisions for JSON in situations where, obviously, you don't have the capability to exchange messages within a Protobuf ecosystem, as it were. So this is basically what a Flat MQTT Payload Template looks like. So as you can see here, what we have is the identifier. So this represents the unique ID of an individual inference. And then we've got the model, which is an object that provides important metadata about the model that is used to generate this output.

Kudzai Manditereza: 00:25:08.608 So for example, we've got the identifier, and then we've got the version, and then we've also got the machine failure prediction. So what you will notice here is that this metadata about the model, we've got it on the actual topic itself, but we also found it necessary to embed it within the payload structure again. And then we've also got an object that provides metadata related to the input that is fed into the model that generated that inference. So this includes the topic where the input data came from, as well as the size of the data that has been processed by the model, and other information. And then we've also got the result type. So the resultType basically is a unique name for a specific model inference format. So as you will see, we actually did standardize on quite a number of different result types. Right?

Kudzai Manditereza: 00:26:06.251 So I think in this demo, we're going to see the object detection prediction format. So here, we standardized on these names. Right? You could come up with a specification for a custom format, for your own niche application. The only requirement here is that this value needs to match the key of the JSON string included in the results object. So as you can see here, we've got the resultType that is called classPredictions, and within the result here, we've got classPredictions. So if you decide to use a custom result type, you just need to make sure that it is also what you are calling within the result object itself. Then again, obviously, we've got the result, which contains the model outputs. And then you've got the explanation object, which contains the — if, for some reason, your AI model needs to provide explanations of how it came to a certain conclusion.
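Pulling those fields together, here is a hedged sketch of a flat MQTT inference payload along the lines described above. The field names approximate the template discussed here; the authoritative template is in the spec on GitHub:

```python
# Illustrative flat MQTT payload with classPredictions; note that the value
# of "resultType" matches the key inside "result", as the standard requires.
import json
import uuid

payload = {
    "identifier": str(uuid.uuid4()),      # unique ID of this individual inference
    "model": {                            # metadata about the producing model,
        "identifier": "machine-failure-prediction",  # repeated from the topic
        "version": "001",
    },
    "input": {                            # metadata about the model input
        "sourceTopic": "acme/hamburg/packaging/line4/milling_machine/raw",
        "byteSize": 512,
    },
    "resultType": "classPredictions",     # must match the key in "result"
    "result": {
        "classPredictions": [
            {"class": "failure", "score": 0.18},
            {"class": "no_failure", "score": 0.82},
        ]
    },
    "explanation": None,                  # optional explainability output
}
print(json.dumps(payload, indent=2))
```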

Kudzai Manditereza: 00:27:08.105 And then this just gives you an example of what a Sparkplug DBIRTH payload looks like. So I mean, we're not going to go into detail about Sparkplug, but generally Sparkplug has got this automatic discovery whereby, if a component joins the network, it must announce all the metrics that it is going to publish. So here, you'll notice that we actually have metrics that hold the different model identifiers and model versions, as I alluded to earlier. So this is basically how we're including these identifiers within the Sparkplug payload. Obviously, we've got many more metrics within this particular payload, but we're only able to show this. And then, obviously, if you've got a Sparkplug host that is consuming all of this Sparkplug information — this is typically what it would look like within the Sparkplug host application that is consuming this different information, where you're automatically able to collect this information and reflect it in the hierarchy with which it was produced.
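As a sketch, the model identifier and version metrics described above might be announced in a DBIRTH like this, shown as the plain Python structures a Sparkplug library (Eclipse Tahu, for instance) would encode into the Protobuf payload. The metric names here are assumptions for illustration, not quoted from the spec:

```python
# Illustrative DBIRTH metric set announcing a model and its version;
# a Sparkplug library would encode these dicts into the Protobuf payload.
dbirth_metrics = [
    {"name": "Model/Identifier", "datatype": "String", "value": "failure_model"},
    {"name": "Model/Version",    "datatype": "String", "value": "001"},
    {"name": "Inference/Result", "datatype": "String", "value": ""},  # announced now, filled by DDATA
]
```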

Kudzai Manditereza: 00:28:11.272 So this is the kind of automatic discovery that I spoke about earlier. That is a big part of Sparkplug. And then this is a DDATA payload. So this is the actual data. So initially, with the [inaudible]. Now this is the DDATA payload, which is the actual data, the inference, being pushed over a Sparkplug network. So again, it contains the different elements that you saw in the flat MQTT topic. So here, we've got the source topic, the byte size of the inference, and then we've also got the result. What you'll notice here is that in the actual result, for compatibility between Sparkplug and flat MQTT, you still put that specific result object within the value of the Sparkplug result.

Kudzai Manditereza: 00:29:00.709 So this really makes sure that there's interoperability, or compatibility, between MQTT Sparkplug and flat MQTT. So I will quickly run through the different model results that we have in the specification. But again, if you want to find out more about this, you can go to the standard itself. So as mentioned, we've got classPredictions, which basically is a class name and a score for that particular prediction. And then we've got Multi-Classification. So obviously, this is a situation whereby model outputs are grouped into more than two distinct classes. And then we've also got Object Detection, which basically also includes the className, the score, and also information about the boundingBox. I think Marc has got a cool visual that is going to show us how this looks in practice, where you're actually putting all these bounding boxes. And we've also got Named Entity Recognition, which is used to identify unique entities, such as names and locations, within a large corpus of text. Okay. So I think it's now time to jump into the demo. So what I'll do is I'll hand it over to Marc.
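For reference, a hedged sketch of the Object Detection result format just mentioned (className, score, boundingBox), which is also the format used in the demo that follows. The resultType name and coordinate convention below are assumptions for illustration:

```python
# Illustrative Object Detection result as it might appear under "result";
# box coordinates are assumed to be pixel values (x, y, width, height).
object_detection_result = {
    "objectDetection": [
        {"className": "person",
         "score": 0.91,
         "boundingBox": {"x": 120, "y": 80, "width": 200, "height": 340}},
        {"className": "sports ball",
         "score": 0.77,
         "boundingBox": {"x": 400, "y": 260, "width": 60, "height": 60}},
    ]
}
```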

Demo 1 – Fully Integrated Pattern for Integrating Edge AI into MQTT

Marc Pous: 00:30:27.599 Thank you, Kudzai. That was a great presentation. Let me see. I think I'm sharing the right screen. Can you see? Yeah. All right. So let's start with this Fully-Integrated Pattern with Edge AI and balena. But first of all, before I start, I would like to give some context on what balena is. Okay? Maybe you don't know what balena is, but in the chat, you can say if anyone has used this software before. Anyone use it? Let's see. Here we see the first winner. Yeah. Some people replying. Okay. So this is balenaEtcher. Actually, it's one of the most successful pieces of software that we develop at balena. And if you've used balenaEtcher, you know how we do software. What we like, basically, is to reduce friction in whatever you want to do. But this is not our main software. Our main software is basically something that we call balena. Okay?

Marc Pous: 00:31:26.849 It's a secure container-based technology stack that enables you to develop, deploy, manage, and scale the IoT devices that you have out there. Because imagine that, yeah, you have to manage multiple IoT devices. So how do you update the software running on your IoT devices? How do you update the operating system running on your IoT devices? How do you remotely access these devices that are deployed in the middle of the wild? So this is what we do at balena. We enable our customers and developers to remotely update the software on their IoT devices, basically based on our Linux operating system, and to update the host as well. Okay? So given this context, basically — oops. Okay. This is balenaCloud, which I'm going to use in the demo.

Marc Pous: 00:32:24.825 Basically, balena is a stack of technologies based on balenaOS, which is the open-source operating system. We only run applications made with Docker containers. So anything that works on balena works on any Linux with Docker. And then you can use balenaCloud, which is our premium tool, where you can connect up to 10 devices for free. That is what I'm going to show. Or you can run openBalena, which is open source as well, and you can run it on your own device. Okay. So once everyone has the context, I'm going to show you my demo of the Fully-Integrated Pattern. And actually, it was kind of serendipity that Kudzai and I were speaking and sharing that we were working on the same thing. And luckily, I was doing the Fully-Integrated Pattern. So imagine a factory shop where, for example, you can have cameras to capture conveyor-belt activity. Okay? In our case here, we have Raspberry Pis with cameras connected.

Marc Pous: 00:33:26.818 And what I was thinking for a project is, okay — how should they send the data to an Edge AI device that, with a powerful GPU, can actually do real-time quality assurance of the product that is being manufactured? So basically, what I was testing when I was speaking with Kudzai was, okay — I have these GPUs running on a balena-managed NVIDIA device. But how can I make the images from my cameras go from the Raspberry Pi to the NVIDIA devices? So basically, what I did was to add another balena device, an Intel x86 device, which manages the MQTT broker and all the different data transformation layers and other things.

Marc Pous: 00:34:24.745 On top of that, I'm going to do this demo. But for example, something that we are doing with some companies is that we can have a kiosk at the end of the manufacturing line, running on another Raspberry Pi with a display, that shows the people working there at the end of the line whether the product in front of them is correct or not, or if they need to make any change. And what we are doing is that all the data goes through MQTT with a Unified Namespace. Because, yeah, in this case, we are just sending an image from this Raspberry Pi with a camera to this NVIDIA Jetson device that I'm going to show you in a second. But the thing is, yeah — what happens in the future when another service wants access to these images that the Raspberry Pi is taking in real time? What if another service wants to store these images, or whatever? If you use MQTT, you have the flexibility of having any service subscribe to these images.

Marc Pous: 00:35:28.403 Okay. So let's get into the demo. By the way, for the demo, I'm going to use a Raspberry Pi 4 with a USB camera, just for demo purposes, but you could have a more professional camera connected. I'm going to use a Seeed Studio J4012 reComputer, which is based on an NVIDIA Jetson Orin NX 16 GB. I'm not sure about the [inaudible] or whatever the GPUs have. But yeah, you can check. You can Google the Seeed Studio J4012 reComputer. And for the MQTT broker and the MING stack that you're going to see, we are going to use an Intel NUC i7. But yeah, any x86 device can do the same work there. In this demo as well, I want to show how, with balena, you can manage multiple fleets of IoT devices and manage them from balenaCloud.

Marc Pous: 00:36:24.609 Everything that I'm going to show you is open source. It's on my GitHub repository. Okay? Maybe Erin shares this later. We can share it in the chat; it's not a problem. Or you will have it on the slides. So if you go to my repo, you will see this balenaCam project with [inaudible] MQTT. So you see a Pi camera publishing data over MQTT. You will find the MING stack for the Intel NUC, where you have the MQTT broker running on HiveMQ. And finally, the Seeed Studio project running PyTorch as a machine learning system that basically subscribes to the broker, gets the images, and checks what it is. Just for demo purposes, as we are not in an industrial environment right now, I'm using this PyTorch, TensorRT, YOLOv3 machine learning vision system that just recognizes objects that are in my room.

Marc Pous: 00:37:26.809 So let me show you more. Sorry, the slides are still jumping a bit. So basically, this is, as I mentioned, balenaCloud. I created a fleet; I call it balenaCam MQTT. I deployed this GitHub project that I shared with you. And basically, what it's doing is it's taking pictures and sending the picture — publishing the picture, sorry, to the mqtt_broker. Something interesting: the MQTT broker IP address, which in this case you can see is on my local network, can be defined as a device variable from balenaCloud, along with the mqtt_port and the mqtt_topic. So we are going to publish — maybe I'm not following exactly what I was explaining before, but it's pretty similar. I'm going to publish on a topic ending in camera raw data. Okay? This goes to the x86 device where the HiveMQ MQTT broker is running, in what we call the MING stack.

Marc Pous: 00:38:29.569 MING stands for MQTT, InfluxDB, Node-RED, and Grafana. If we have time, we can get into Node-RED and see what's going on there. Or we can just subscribe with MQTT.fx or MQTT Explorer and see the data from the broker. And finally, we have the — if we are on the Fully-Integrated Pattern, actually, we have the Edge AI device that subscribes to this broker and gets images in real time. So basically, what I'm going to do from balenaCloud is get into the terminal. Getting into the terminal is like SSHing into the PyTorch service, which is running here. And basically what I'm going to do is get into the source code. Sorry that this is not automatically running, but I wanted to show how this works as well. And I'm getting into the samples and YOLO.

Marc Pous: 00:39:27.115 And there, I've created, okay, a Python script. I hope the demo gods are with us, with Magnus and me. So yeah, let's — yeah. So now I'm going to call my Python script, which is basically in my GitHub repository. But what it does is subscribe to the topic on the local MQTT broker, and it basically starts inferencing what's in the picture. So let's see. Okay, we detected a person and a sports ball that I have. I don't know if you can see it; I have a sports ball here. And handbags. I put out some objects that can be easily detected, okay? And basically, this is getting — so yeah, the JSON that you see here is the inference that we are publishing back to the MQTT broker.

Marc Pous: 00:40:27.513 And we are also publishing back the image with the boundingBoxes of the objects detected. Okay. So basically, if we go to MQTT Explorer, we can get into the Unified Namespace — you see my screen, right? You see the MQTT Explorer? Yes? Okay. Thank you. So yeah, I also have a Modbus sensor sending temperature, following the raw data namespace. And then the camera, we saw the raw image that's being sent. Actually, I didn't explain this on the Raspberry Pi side: I just publish a Base64 string of the picture taken. And the inference follows the standard, with a Base64 image. And I have the TensorRT with the version that it's running, and basically the inference, with very similar JSON to what we saw over SSH, running on the Edge AI device.
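What the Raspberry Pi side of this might look like in code: a minimal sketch, assuming paho-mqtt 2.x, a broker on the NUC at an illustrative address, and a frame.jpg already captured by the camera. The topic name is also an assumption for illustration:

```python
# Publish a camera still into the raw-data namespace as a Base64 string,
# as Marc describes; topic, broker address, and filename are illustrative.
import base64
import paho.mqtt.publish as publish

RAW_TOPIC = "factory/line1/camera1/raw/image"

with open("frame.jpg", "rb") as f:
    encoded = base64.b64encode(f.read())

# One-shot publish helper: connects, sends, and disconnects cleanly.
publish.single(RAW_TOPIC, payload=encoded, qos=1, hostname="192.168.1.50")
```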

Marc Pous: 00:41:37.690 And I think, basically, this is it, the demo that I wanted to show. I think we have the — oops. So yeah, I had some screenshots. So for example, this is an example of what we get. This is a picture I took yesterday. And then with the boundingBoxes: I didn't include the objects that I'm using today, but just as an example of what we get. And this is basically the Fully-Integrated Pattern demo, where we use the MQTT broker as a communication backbone to enable developers to have a more flexible environment in industry, using Edge AI devices behind the MQTT broker.

Kudzai Manditereza: 00:42:29.271 Thank you, Marc.

Magnus McCune: 00:42:30.245 [inaudible].

Marc Pous: 00:42:31.953 Welcome.

Demo 2 – Unstructured Data Pattern for Integrating Edge AI into MQTT

Magnus McCune: 00:42:32.454 I guess I'll take over from here. Go ahead and share my screen momentarily. Share screen. There we are. And let's move you folks off to the side. Excellent. Once again, my name is Magnus. I'm a senior IoT solutions architect here at HiveMQ. Today, I'm going to demonstrate the Unstructured Data Pattern within the standard. It's actually a very similar demo to Marc's, just taking a very different approach. And so we'll go through and look at how those two differ. Just to get a thumbs up — my screen is sharing correctly? Thank you, Marc. And hopefully as I go through, I might be able to answer some of the questions that popped up in chat, and we'll also have Q&A as we go through. So just a refresher on the Unstructured Data Pattern. The idea here is that we have some sort of input, perhaps a video or imaging device. I saw the question a number of times of whether video is typically transmitted over MQTT. And the simple answer, as was demonstrated by Marc, is typically that video is not transmitted over MQTT. There isn't really a structure for that.

Magnus McCune: 00:43:35.802 However, images Base64-encoded or otherwise are frequently transmitted. And so in this case, we're using a video device, capturing stills, and then transmitting those via MQTT. So hopefully, that answers the question that was in the Q&A. I'm doing something quite similar here. So instead of transmitting that still over MQTT and passing it directly through the broker, I'm actually passing it directly to an Edge AI device, which I'll go and cover in a moment, and then passing just the inferences across to MQTT. There's a number of reasons you might want to do this. And in this case, the sort of use case I'm going to demonstrate is a privacy use case. For whatever reason, perhaps IP protection or perhaps privacy protection, we don't want the — we don't want our image data being transmitted across MQTT or being persisted in any way.

Magnus McCune: 00:44:28.424 And so in this example, I'm going to be using a video capture device that goes directly into an edge device, is never put across the wire, and then only the inferences are being transmitted across to the MQTT broker. And we'll demonstrate that here in just a second, actually right now. So for my demo, similar to what Marc showed off, my devices here are a lot simpler and, I would say, purpose-built. So what I'm using is a device called a PersonSensor from a wonderful company called Useful Sensors. And it's effectively an image sensor, a camera glued directly to a microcontroller. In this case, an ESP32-S3. And there's an AI model running on that microcontroller — all it does is detect whether there's a person or, more specifically, a face within frame; it identifies that face, creates a boundingBox, and then passes that information using a serial protocol or I2C across to, in my case, another ESP32 device that is then acting as an MQTT client, transmitting across to our broker, HiveMQ.

Magnus McCune: 00:45:36.877 Today for my demo, I'm running a local broker. In this case, it's running in Kubernetes using our HiveMQ Kubernetes Operator. And from there, applications or analytics services or any of our other systems could subscribe to those inferences and make business decisions or whatever might be needed based on that. And then if that was maybe perhaps within one factory, I might also then transmit either those inferences directly or possibly just the insights up to a more sort of centralized environment, like an enterprise Unified Namespace that might be running in HiveMQ Cloud or somewhere else. There was a quick question in chat, and I think we'll cover it more, about: Can you run HiveMQ on a single node or a simple device? Absolutely. In these factory scenarios, Marc's example of running on a NUC today is perfect for that. We're often operating HiveMQ on these simpler devices, sometimes in development environments, but also sometimes in production workloads.

Magnus McCune: 00:46:34.593 One thing that I wanted to highlight here is that Marc brought several teraflops of processing power to our conversation today; I am bringing a 240-megahertz dual-core processor. So this is an incredibly purpose-built processor that we're going to be using today; really, all it's intended to do is run this one AI model to detect faces. It does effectively nothing else, and that's what keeps this a low-cost solution. While my demo is using a low-cost solution, there are enterprise versions of this that are intended to run very specific sorts of detections. So Marc was describing QA and that sort of thing. So this would be an instance where you train a use-case-specific AI model or ML model, load just that model onto your purpose-built hardware, and use it for inference in a very narrow case.

Magnus McCune: 00:47:33.023 So the Unstructured Data Pattern in this case is really designed to meet a very specific need, whereas the solution that Marc was demonstrating is sort of more general purpose. So you can use it for — you can use that AI processor, that Edge AI service, for a wide variety of tasks. In this case, the system is being used for exactly one task and doing nothing else. So I'll go through that very quickly. Yeah. Now, for the demo portion. Oh, and very quickly the payload. So similar to what Marc was demonstrating here. So we're detecting faces in this case. So I'm actually counting the number of faces. I've deviated ever so slightly from our design pattern here, adding an additional detection outside of the standard, but I'll show why in just a moment. And then once again, similar to what Marc was demonstrating with a boundingBox score and this facing value that I'll explain in just a moment.

Magnus McCune: 00:48:31.320 And so these are being stored in the Inference Namespace, which is sort of just our JSON payload following the format defined in the spec. And then an insight that you might generate from this: we might have some business logic that says, "Only one worker is allowed in this area." So you might compare the number of faces currently visible in frame versus what's expected for the environment. So I'll flip over to MQTT.fx now, and we can have a look at what this looks like. And hopefully, the demo gods are with us. If not, I've got some stable slides. So I actually have my device right here today. It's a very lovely 3D-printed prototype with our boards within it and a little battery. So I'm going to go ahead and plug the battery in, and we'll see a couple of things. Actually, just before I do that: we're connected to my local Kubernetes cluster, my dev cluster here, and I have my topic. The first one I'm going to show isn't part of the spec, but I think it's always important to show this: we're subscribing to some sort of state engine.

Magnus McCune: 00:49:33.266 We're describing the current state of our sensor, and we can see that it's indicating as offline. So our sensor is currently offline. I'll go ahead and plug it in. And if the demo gods are with me, that'll switch to online. And let's see. Yeah. So we can see that now my sensor is showing as online. I'm not receiving any data on this topic; this is just a sort of state topic in flat MQTT. But let's actually go and subscribe to the next topic, which would be our inferences. So similar to what Marc was showing, I'm located here in lovely Halifax, Nova Scotia. I have our demo environment, then Halifax, and then a pretty typical ISA-95 Unified Namespace of area, line, and cell; our specific device, which I'm calling a person detector; our specific ML model, which I'm calling face detections; the version, 001; and then the inference within this.

Magnus McCune: 00:50:31.752 So I'll go ahead and subscribe to that. And we can see we'll start getting data pretty much immediately, and we can turn off auto scroll here just so that we can go and have a look at what some of this data is. So very likely one of the first ones here will be — oh, it's continually finding my face. Okay. So I'll actually cover the camera so that it can't find my face. Bear with me for a moment. I'll cover the camera and we can see — yeah, so when I have the camera covered, we're seeing detections is an empty array. So we're not seeing any detections. And then that facesDetected additional value that I've thrown in there that could perhaps be compared to the desired number of faces is showing a zero. I uncover the camera, and I get a second's worth of flapping there where it's not sure if that's a face or not. And then I'm back to detecting my face. I have a boundingBox. I also have this lovely value that I quite like here called facing, which is just indicating whether I'm looking at the camera or away from it. That's part of the sensor. It's built into this ML model.

Magnus McCune: 00:51:32.829 Once again, I want to reiterate, this is running on a very, very low-power 240-megahertz processor that is incredibly basic. So this is quite impressive to me and sort of shows the power of a well-tuned model. That covers, I think, the demo portion of what I wanted to show. Just for the sake of it, I will unsubscribe from our inference, and we'll see that we drop state pretty quickly. So when I go back to state, we can see that I disconnect. And this is just using the Last Will and Testament feature within MQTT: within whatever my time to live is, we'll get a disconnection here at some point in the very near future. Hopefully, it doesn't make a liar out of me. It just might. Oh, there we are. So we can now see my device is once more offline as part of my state engine. Okay. I'm going to flip back to our slides. I don't think I have too much else I want to show, but I'll go through it nonetheless.
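The state topic Magnus shows is plain MQTT Last Will and Testament. A minimal sketch, assuming paho-mqtt 2.x and an illustrative topic name:

```python
# Register a Last Will so the broker publishes "offline" on our behalf if
# the client disappears without a clean DISCONNECT (after keep-alive lapses).
import paho.mqtt.client as mqtt

STATE_TOPIC = "demo/halifax/area/line/cell/person_detector/state"

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.will_set(STATE_TOPIC, payload="offline", qos=1, retain=True)
client.connect("localhost", 1883, keepalive=30)
client.publish(STATE_TOPIC, "online", qos=1, retain=True)  # announce presence
client.loop_forever()
```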

Magnus McCune: 00:52:35.183 Oh, just a quick close-up photo here. So this is the C6 ESP32 that does have Wi-Fi. And then just behind it, we have our PersonSensor, which, once again, has zero connectivity to the outside world: it's impossible to get at the image sensor data. The image data remains entirely within this tiny microcontroller, which is great. And then these are just my backup slides. And over to Q&A.

Kudzai Manditereza: 00:53:03.886 Cool. Thank you, Marc. Thank you, Magnus, for such lovely demos. It's a good thing the demo gods were with us today.

Magnus McCune: 00:53:14.464 Absolutely.

Q&A

Kudzai Manditereza: 00:53:14.760 That's nice. Okay. So I think we've got quite a lot of questions here. So I'll just quickly go through these questions and see which ones we can answer here on the call. So I guess there's one question from Annie; I think you've already answered that, Magnus: Is it possible to get data input of video via MQTT, or is there any middleware in between? Do you want to expand on your answer?

Magnus McCune: 00:53:39.420 Realistically, I think we covered it. So we've seen situations in which you might send the stream of — for example, if you're doing streaming video, you might use MQTT to send metadata to a different system to say, "Hey, here's an RTSP URL that you might want to subscribe to." But certainly, transmitting video via MQTT is not something that's typically done — or sorry, not something that's doable. More commonly, you'd be transmitting stills from video footage, unless you have anything to expand on that, Kudzai.

Kudzai Manditereza: 00:54:11.502 No. I think that's pretty much covered well. So I'll also just jump on to another question here quickly for you, Marc. I don't know if you want to answer this on the call or not, but Miroslav is asking: What is the budget for the balenaCloud?

Marc Pous: 00:54:28.286 Yeah, sure. The pricing is public on our pricing page at balena.io, so you can check it there. But it also depends on the volume of devices that you have connected — we manage large fleets of IoT devices for our customers. So depending on the size of your fleet, it can go down to $1 per month, for example.

Kudzai Manditereza: 00:54:58.584 Okay. Cool. Thank you.

Magnus McCune: 00:55:00.607 I was interested in Dave Douglas's question, if we wanted to tackle that. Perhaps several different ones of us could answer pieces of that. But that seemed like a worthwhile one to tackle, if you were open to it, Kudzai?

Kudzai Manditereza: 00:55:12.531 Oh, yeah, sure. Absolutely. I think we can go into that one. So let's kind of jump into the first question. So Dave Douglas is asking: How suitable is MQTT for supporting the actor model?

Magnus McCune: 00:55:24.600 Yeah. So the actor model is sort of this idea that the most basic unit of software is an actor. And I think the Pub/Sub model that exists within event-driven architecture, the Pub/Sub model that exists within MQTT, is actually perfect for supporting the actor model. So you can find some really great projects out there that specifically talk about the integration — sorry, the use of the actor model within MQTT and event-driven systems. And there are some great blog posts and that sort of thing out there that I think are worth reading.

Kudzai Manditereza: 00:56:02.242 Okay. Cool. You want to maybe just go through the other — okay, so I think we've got: Can a single node or simple non-cloud network run MQTT?

Magnus McCune: 00:56:12.882 Yeah. As Marc demonstrated, a single x86 device can absolutely run an MQTT broker. Our HiveMQ Edge service, which is also an MQTT broker, frequently — sorry, is intended to run on single-node devices on-prem. And then of course, also the ability to scale to 40-plus nodes and incredible levels of performance. So yeah, the whole spectrum is definitely possible with MQTT.

Marc Pous: 00:56:42.638 Yeah. I don't know about HiveMQ, but also, to complement that: you can use the Bridge Extension from the single node, right, and just forward all the data to a cloud HiveMQ. Right?

Magnus McCune: 00:56:58.985 Yep, absolutely.

Kudzai Manditereza: 00:56:59.872 Yeah. And then we've got another question. I think this one is for you, Marc. So for Demo 2, will the source code be shared as well?

Magnus McCune: 00:57:08.511 Oh, Demo 2 is me, I think.

Kudzai Manditereza: 00:57:10.019 Oh, Demo 2 is you. Okay. Yeah.

Magnus McCune: 00:57:11.579 Yeah. So yeah, my source code will be shared. I answered one of the questions that was asking that, but yes, my source code will be available at some point in the very near future. I'll look to get that out before Erin is ready to send things out. I just need to clean it up because I was making changes this morning.

Kudzai Manditereza: 00:57:31.187 Okay. Well, Erin, I think we're running out of time here. So I'll just take one more question and then hand it over to you. So we've got Amat, who's asking: Which programming language are you using, and how much data did you gather for this investigation? So I suppose he's referring to the specification, the standard. So basically, this is meant to be programming-language agnostic. We're only defining the payload structure and the topic structure. So you can pretty much use any language here. So I guess this answers your question. Erin, do you want to take it over from here?

Closing Words

Erin Musselman: 00:58:10.581 Yes. Thank you all for a great presentation, discussion, and demo. That was awesome. For those of you who are still on the call, if we did not get to your question, we'll do our best to get back to you. And then otherwise, look out for an email in your inbox before the end of the week with a link to the recording of the webinar in case you'd like to revisit any of the content. And otherwise, thanks again for joining. And always reach out to our team if you have any questions or if we can do anything to help you. Thanks. And thanks to Marc for joining us. Have a great one. Thanks again.

Magnus McCune: 00:58:38.361 Thanks, folks.

Kudzai Manditereza: 00:58:39.048 Thank you.

Kudzai Manditereza

Kudzai is a tech influencer and electronic engineer based in Germany. As a Sr. Industry Solutions Advocate at HiveMQ, he helps developers and architects adopt MQTT and HiveMQ for their IIoT projects. Kudzai runs a popular YouTube channel focused on IIoT and Smart Manufacturing technologies and he has been recognized as one of the Top 100 global influencers talking about Industry 4.0 online.

  • Kudzai Manditereza on LinkedIn
  • Contact Kudzai Manditereza via e-mail

Marc Pous

Marc Pous is an IoT Giant & Developer Advocate based in Barcelona, Spain. He is currently the Developer Advocate at balena.io. He was previously co-founder of the IoT platform startup thethings.iO. He has accumulated over 15 years of experience connecting things to the Internet.

  • Marc Pous on LinkedIn

Magnus McCune

Magnus is a Principal Architect at HiveMQ. He is a passionate technologist with a proven background solving complex business and technical challenges through the design, implementation and operationalization of cloud and edge technologies. His expertise extends to network, cloud, & infrastructure architecture, cloud-native solutions design and large-scale automation projects.

  • Magnus McCune on LinkedIn
