Beyond MQTT: The Fit and Limitations of Other Technologies in a UNS
In the previous post, we explored why an event-driven architecture built with MQTT serves as the ideal foundation for a Unified Namespace (UNS). MQTT’s lightweight design, efficiency, and publish-subscribe model make it an ideal choice for real-time industrial data communication. However, while MQTT is a crucial enabler of a UNS, an MQTT broker alone will not serve as a UNS. A fully functional UNS requires additional technologies to address data persistence, processing, modeling, and interoperability.
We are frequently asked in our client projects whether it would be possible to build a UNS on top of technologies like Kafka, OPC UA, or Snowflake instead of MQTT. In this post, we differentiate the roles of these technologies and explain why, on their own, they are not ideal for creating a UNS. While each of these tools plays a critical role in modern industrial data architectures, MQTT remains the best fit as the backbone of a UNS due to its event-driven nature, lightweight design, and real-time capabilities.
Eclipse Sparkplug’s Role in the UNS
Eclipse Sparkplug is an open-source specification that provides MQTT clients with a framework to seamlessly integrate data from their applications, sensors, devices, and gateways within an MQTT infrastructure. The Sparkplug specification defines an MQTT topic namespace, payload format, and session state management, with a specific focus on the manufacturing industry.
Where Sparkplug Shines: Sparkplug works best when all of the devices and applications in the ecosystem understand the format. It allows OT data to be easily contextualized and the state of OT data nodes to be shared, so that receiving nodes stay informed and can make decisions based on that state. It was also designed for command and control, where remote applications such as SCADA need to adjust machine set points.
However, when discussing a centralized Unified Namespace spanning an entire enterprise—from shop-floor devices to cloud platforms—Sparkplug may not always be the ideal single or sole solution.
Why Sparkplug is Not Ideal as the Central UNS Solution
Rigid Hierarchy and Data Modeling
The rigid topic structure eliminates one of the core benefits of the Unified Namespace: the capability to design a semantic topic structure that reflects the organization. While there are common blueprints for how organizations might choose to structure their UNS, the reality is that every organization has its own needs, and any rigid, prescriptive topic structure will fall short.
Sparkplug mandates the use of Google Protocol Buffers (Protobuf) to encode payloads, which can be a significant obstacle in a Unified Namespace that needs to accommodate various data formats and protocols beyond OT telemetry.
While Protobuf ensures small, efficient binary payloads—an advantage on constrained networks—it also forces any non-Sparkplug applications or services to implement custom logic for serialization and deserialization of the Sparkplug payload. This creates an extra integration layer when bridging data to systems that expect JSON, XML, or other common enterprise formats.
Device-Centric Information Model
Sparkplug was designed for SCADA systems to efficiently consume tags from field devices. Therefore, all of a field device’s metrics are bundled under one topic, forcing applications that are not SCADA to consume and process a large amount of data they may not be interested in. Sparkplug has no mechanism for selective subscription to metrics of interest, which makes it unsuitable as a UNS protocol.
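To make the contrast concrete, here is a minimal sketch (all topic names and metric values are invented for illustration): in a plain-MQTT UNS, each metric lives on its own topic, so a consumer can select exactly the metrics it needs with a wildcard subscription, whereas a Sparkplug consumer receives a node’s whole metric bundle and must filter it client-side.

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    """Minimal MQTT wildcard matching: '+' matches one level, '#' the rest."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)

# Plain-MQTT UNS: one topic per metric -> selective subscription works.
uns_topics = [
    "acme/berlin/packaging/line1/filler/temperature",
    "acme/berlin/packaging/line1/filler/pressure",
    "acme/berlin/packaging/line1/capper/temperature",
]
wanted = [t for t in uns_topics
          if mqtt_topic_matches("acme/berlin/packaging/line1/+/temperature", t)]

# Sparkplug: all metrics of a node arrive bundled in one NDATA payload;
# a consumer interested in a single metric still receives and parses everything.
sparkplug_payload = {"metrics": [
    {"name": "filler/temperature", "value": 71.3},
    {"name": "filler/pressure", "value": 2.1},
    {"name": "capper/temperature", "value": 68.9},
]}
temps = [m for m in sparkplug_payload["metrics"]
         if m["name"].endswith("temperature")]
```

The point of the sketch: in the UNS case, filtering happens in the broker (the subscriber never sees `pressure`); in the Sparkplug case, the full bundle crosses the network and is filtered in the client.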
Enforced Strict Workflow
Sparkplug expects birth and death certificates, node/device IDs, and a certain message flow. While this structure benefits shop-floor environments by ensuring deterministic state management, it can be overly prescriptive when moving beyond pure OT use cases. In a UNS that spans multiple domains (IT, data analytics, business applications), not every data stream or service needs device-like state tracking or a strict lifecycle model.
Restrictions on MQTT Features
Sparkplug 3.0 imposes a few restrictions on the use of MQTT features such as QoS and retained messages:
QoS > 0 is not allowed for the “official” Sparkplug message types NBIRTH, DBIRTH, DDATA, NDATA, NCMD, and DCMD. These messages must be published at QoS 0 with retain = false. A system that deviates from QoS 0 for these specific message types is no longer strictly Sparkplug-compliant.
Technically, the spec only mandates QoS 0 for the “official” Sparkplug messages. If custom message types are introduced outside the Sparkplug core (for example, non–Sparkplug-B messages that still happen to use MQTT), higher QoS levels can be chosen—but again, those wouldn’t be “Sparkplug messages” in the strict sense.
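The two rules above can be summarized in a toy compliance check. This is only an illustration of the constraints described here, not a full Sparkplug validator:

```python
# Sparkplug 3.0 core message types that must use QoS 0 and retain = false
SPARKPLUG_CORE = {"NBIRTH", "DBIRTH", "NDATA", "DDATA", "NCMD", "DCMD"}

def is_sparkplug_compliant(msg_type: str, qos: int, retain: bool) -> bool:
    """Return True if the publish settings satisfy the Sparkplug 3.0 rules
    for core message types; non-core (custom) messages are unconstrained here."""
    if msg_type in SPARKPLUG_CORE:
        return qos == 0 and retain is False
    return True  # custom, non-Sparkplug messages may use any QoS/retain
```

For example, publishing NDATA at QoS 1, or DBIRTH with the retain flag set, would both break strict compliance, while a custom message type outside the Sparkplug core may use any QoS level.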
Excellent OT Alignment But Limited Flexibility
Sparkplug’s strict definitions (e.g., QoS 0 for core messages, no retained messages, mandatory birth–death flows) ensure predictable, lightweight communication for OT. A UNS that involves IT and business systems, however, needs features like retained messages, dynamic topic structures (can be tailored on the fly to reflect an organization's structure, processes, and data hierarchy), or different QoS levels. Sparkplug can be overly prescriptive for scenarios that do not involve device state or where more flexible data strategies are required.
Limited Adoption Outside of OT
Sparkplug is inherently OT-centric, focusing on real-time telemetry and shop-floor requirements. Outside of the industrial and manufacturing domain, many enterprise systems, SaaS platforms, and IT-centric services do not natively speak Sparkplug.
Even though Sparkplug runs on MQTT, adoption is far from universal among data lakes, modern microservices architectures, or typical enterprise SaaS systems. The need for additional translation or connectors can introduce complexity when building a single UNS for both OT and IT.
Summary: Sparkplug’s Role in OT vs. Enterprise-Wide UNS
Sparkplug excels at standardizing and streamlining OT communications by requiring a tightly controlled message flow and rigid data structures that reliably track device state. However, that rigidity becomes a significant drawback when building an enterprise-wide Unified Namespace.
A UNS typically demands broader flexibility—multiple QoS levels, retained messages for last-known state, and topic patterns that can adapt to various IT, analytics, and business-data requirements. Sparkplug intentionally avoids many of these MQTT features (like retained messages and QoS > 0 for core data) because it focuses on deterministic state management in OT environments.
While Sparkplug remains an excellent choice for shop-floor telemetry, machine-state tracking, and SCADA-focused solutions, it can hinder broader enterprise interoperability by imposing a fixed birth–death lifecycle and prohibiting essential MQTT features. Consequently, for a fully featured, scalable UNS that spans all layers of the organization, “plain” MQTT and its inherent flexibility typically deliver a better foundation than Sparkplug alone.
Kafka and UNS
Kafka is a powerful distributed event streaming platform known for its high throughput and fault-tolerant capabilities, making it an excellent choice for certain types of large-scale data processing and real-time analytics. However, when it comes to serving as the central data hub in a Unified Namespace, Kafka has limitations that make it less ideal for this purpose.
Why Kafka is Not Ideal as a Central Data Hub in a UNS
Lack of Dynamic Topic Creation and Management
In general, Kafka requires topics to be created and managed statically before any messages can be published or consumed. In a UNS, especially one handling millions of devices or data sources, the need for dynamic, hierarchical topic creation is critical.
However, the Kafka broker has a setting called auto.create.topics.enable, which determines whether a topic is created automatically when a producer or consumer tries to send data to, or subscribe to, a non-existent topic. But when a topic is automatically created, it uses the default configuration for partition count and replication factor, which may not suit your specific requirements. It is usually better to create topics manually with the desired configuration (e.g., number of partitions, replication factor) to ensure optimal performance and reliability.
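As a sketch of what this “manual” topic management looks like in practice (the topic name, partition count, and replication factor below are purely illustrative):

```shell
# Broker config (server.properties): disable implicit topic creation
auto.create.topics.enable=false

# Create each topic explicitly with the sizing you actually need
kafka-topics.sh --create \
  --topic acme.berlin.packaging.line1 \
  --partitions 6 \
  --replication-factor 3 \
  --bootstrap-server localhost:9092
```

Contrast this with MQTT, where simply publishing to a new topic brings it into existence, with no administrative step at all.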
No Built-In Support for Hierarchical Topic Structures
In contrast to MQTT’s hierarchical, dynamic topic structure, Kafka uses a flat topic structure, where each topic is isolated and predefined. In a UNS, topics are often organized hierarchically to represent different systems, devices, or regions of the enterprise. Users can simulate hierarchy with naming conventions, but this is purely cosmetic and provides no real hierarchical functionality.
This also has a huge impact on topic filtering: Kafka has no built-in wildcard subscriptions like MQTT, where you can subscribe to an entire level of topics. In Kafka, regular-expression subscriptions provide some flexibility, but they don’t offer the full power of MQTT-style wildcards.
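A short sketch of what regex-based “hierarchy” looks like in Kafka (topic names are invented; the subscription pattern syntax mirrors what Kafka consumer clients accept for pattern subscriptions):

```python
import re

# Kafka topics are flat strings; the dots below are just a naming convention,
# not broker-understood hierarchy levels.
kafka_topics = [
    "acme.berlin.line1.temperature",
    "acme.berlin.line2.temperature",
    "acme.berlin.line1.pressure",
]

# A consumer can subscribe with a regex over flat topic names, emulating
# a single-level MQTT '+' wildcard with '[^.]+':
pattern = re.compile(r"^acme\.berlin\.[^.]+\.temperature$")
matched = [t for t in kafka_topics if pattern.match(t)]
```

Note that in Kafka clients, pattern subscriptions are re-evaluated only when topic metadata is refreshed, so newly created matching topics are not picked up instantly, unlike an MQTT wildcard subscription, which covers future topics the moment they appear.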
Optimized for High Throughput, Not Primarily for Low Latency
In the context of Kafka, "latency" is the time it takes for a message to be published by a producer and then delivered to a consumer. Throughput, on the other hand, is the rate at which the system can process messages. Kafka is not inherently optimized for low-latency messaging. Instead, it focuses on high throughput, durability, and scalability, which makes it ideal for large-scale event streaming and data processing.
Kafka can achieve relatively low latencies, but this requires careful tuning of configurations such as batch and buffer sizes. Balancing latency and throughput is critical in Kafka, as improving one often means sacrificing the other.
No Built-in QoS for Message Delivery, No Native Retained Messages
Kafka guarantees "at-least-once" or "exactly-once" delivery using transactions. However, Kafka has no built-in Quality of Service (QoS) levels for devices, and no lightweight mechanism for guaranteed message delivery to low-power devices.
Messages are only available for as long as the retention policy allows. Consumers joining late cannot instantly retrieve the last known value; state persistence requires external key-value stores or compacted topics (a special type of topic that retains only the latest value for each key).
Lack of Built-in Real-Time Push Mechanism
Kafka uses a pull-based model: consumers actively request (poll) data, rather than receiving it automatically in a real-time, push-based manner. While this is a good choice for large-scale event streaming and batch processing, it is not ideal for UNS requirements, as periodic polling adds latency and increases network and CPU usage (it is not optimized for low-power or constrained devices).
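The two delivery models can be contrasted in a few lines of (deliberately simplified, single-process) Python. Neither class models a real broker; they only illustrate who initiates the data transfer:

```python
# Push (MQTT-style): the broker invokes the subscriber's callback the
# moment a message arrives -- no polling loop, no idle wakeups.
class PushBroker:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, topic, payload):
        for cb in self.subscribers:
            cb(topic, payload)

# Pull (Kafka-style): the consumer must repeatedly poll, paying up to one
# poll interval of latency, plus network/CPU cost even when nothing is new.
class PullLog:
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)

    def poll(self, offset):
        return self.records[offset:]

received = []
broker = PushBroker()
broker.subscribe(lambda t, p: received.append((t, p)))
broker.publish("line1/temperature", 71.3)   # delivered immediately

log = PullLog()
log.append(("line1/temperature", 71.3))
batch = log.poll(0)                          # consumer had to come asking
```

In the push model, latency is bounded by the network; in the pull model, it is bounded by the polling interval, which is exactly the trade-off that matters for a real-time UNS.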
Not Lightweight
Kafka is a complex system with a significant infrastructure footprint, and because of its high overhead, is often managed centrally by IT teams. This can create bottlenecks and limit real-time edge processing. Instead, an effective UNS should allow data to stay at the edge when possible, reducing unnecessary centralization and improving efficiency.
Kafka is not designed to be lightweight; it requires multiple brokers and coordination services for scalability and redundancy. Kafka’s heavier infrastructure makes it less suitable for real-time communication in industrial settings where devices may have limited bandwidth and processing power, and unsuitable for small IoT and edge deployments.
Kafka clients (Producers and Consumers) require significant processing power, memory, and network bandwidth to handle the high-throughput and complexity of Kafka’s messaging infrastructure. This makes Kafka clients generally not suitable for constrained devices such as IoT sensors, embedded systems, or devices with limited processing power, memory, and bandwidth.
The Kafka client libraries (e.g., kafka-python, Kafka Java client) are complex and designed for enterprise-grade systems with ample resources. These libraries require managing consumer groups, partitioning, offset management, error handling, and acknowledgments, which adds significant overhead that is difficult to manage on constrained devices.
For massive concurrent connections, MQTT brokers are generally more efficient and cost-effective, especially in IoT scenarios. Kafka is more resource-intensive per connection due to its focus on storage, replication, and complex consumer-group coordination. Kafka can handle millions of connections, but it requires much larger clusters and more hardware resources to manage this load effectively.
Summary: Where Kafka Falls Short for Building a UNS
Kafka is an excellent tool for high-throughput, fault-tolerant streaming in big data and event streaming applications, but it is not ideal as the central data hub for a Unified Namespace (UNS). The lack of efficient dynamic topic creation and the complexity of its heavy-weight infrastructure make Kafka less suited for real-time, event-driven communication where millions of devices are constantly publishing and subscribing to dynamic, hierarchical topics.
In contrast, MQTT is specifically designed for low-latency, lightweight, and real-time communication, with dynamic topic creation and hierarchical topic management, making it a far better choice for managing the central data hub of a UNS.
Having said that, Kafka definitely has its place, and in some designs, is a powerful data sink and an important building block in hybrid solutions for big data processing or stream analytics in a UNS architecture.
OPC UA and UNS
OPC UA (Open Platform Communications Unified Architecture) is widely used in industrial automation and IIoT (Industrial Internet of Things) systems for communication between devices, systems, and applications. It is known for its interoperability, data modeling, and secure communication across different vendors and systems.
However, when it comes to serving as the central data hub for a Unified Namespace, OPC UA has several limitations that make it less than ideal for the role, especially when compared to technologies like MQTT.
Why OPC UA is Not Ideal as a Central Data Hub in a UNS
Lack of Native Support for Event-Driven Architecture (EDA)
OPC UA is primarily built around a client-server and request-response model, where clients request data from servers or subscribe to specific data points. While OPC UA does support subscriptions for some real-time data transfer, it does not inherently follow the event-driven architecture pattern that is key to a UNS.
To recap: a UNS relies on a publish-subscribe model for event-driven, real-time communication, where data flows continuously from devices to applications with minimal delay. MQTT, with its pub-sub architecture, is optimized for this kind of dynamic, real-time data exchange whereas OPC UA is more suited for structured, on-demand data transfer.
No Native Support for Real-Time Publish-Subscribe Model
OPC UA is not inherently designed around a real-time publish-subscribe model, where data is pushed instantly from publishers to subscribers. In contrast, MQTT excels at push-based messaging, which ensures that data is delivered with low latency as soon as it becomes available, which is crucial in a UNS.
However, OPC UA Specification Part 14 introduces a publish-subscribe communication model and contrasts it with the traditional client-server model. There, MQTT is effectively recommended as a transport to enable real-time data updates that can be sent as they occur, reducing latency and improving responsiveness. By combining OPC UA with MQTT, devices and applications from different vendors can communicate over a standardized data model (OPC UA) via a widely adopted messaging protocol (MQTT). This promotes interoperability and allows systems to connect across different industries. Note that, once again, MQTT plays the pivotal role here.
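One common hybrid pattern, in the spirit of what Part 14’s MQTT mapping enables, is to flatten an OPC UA browse path into a UNS topic at a gateway. The sketch below is purely illustrative: the node names are invented, and real OPC UA-to-MQTT bridges apply configurable mapping rules rather than a fixed join like this.

```python
def browse_path_to_uns_topic(namespace: str, browse_path: list[str]) -> str:
    """Map an OPC UA-style browse path onto an ISA-95-flavoured MQTT topic.
    MQTT topic levels must not contain '/', so each path element is
    sanitized before joining."""
    safe = [part.replace("/", "_") for part in browse_path]
    return "/".join([namespace, *safe])

topic = browse_path_to_uns_topic(
    "acme/berlin",
    ["Packaging", "Line1", "Filler", "Temperature"],
)
```

The gateway then publishes each value change to the resulting MQTT topic, letting OPC UA keep its role as the shop-floor data model while MQTT distributes the data enterprise-wide.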
Limited Scalability for Large Number of Devices
OPC UA’s primary mission is to enable standardized, secure communication between industrial devices (PLCs, sensors, SCADA, HMI systems) and is well-suited for industrial environments with interoperable devices and machines. OPC UA should be recognized for what it is: a powerful protocol within the OT environment.
It is not designed to handle thousands or millions of devices in real time across an entire enterprise. Organizations try to stretch OPC UA beyond its core use cases, hoping to achieve data convergence across every level of the business, but OPC UA does not inherently handle the complexities of enterprise integration.
A UNS must be able to scale to manage millions of connections and data streams from machines, sensors, databases, and cloud systems. OPC UA tends to scale well in smaller or industrial environments but struggles to achieve the same level of horizontal scalability as MQTT.
Complex and Resource-Intensive
OPC UA is a heavyweight protocol that provides a lot of features such as complex data modeling, interoperability, and security. However, these features come at the cost of increased complexity and resource consumption.
Hierarchical Data Structures vs. Real-Time, Dynamic Topics
OPC UA is designed to manage hierarchical data models, which is beneficial for organizing complex industrial systems. However, in a UNS, you need the ability to create and manage millions of dynamic, hierarchical topics in real time.
While OPC UA excels at static hierarchical data structures, it does not offer the dynamic topic creation and real-time, flexible routing needed for a UNS.
OPC UA is object-oriented towards physical systems on the shop floor; hence, it is best suited for exchanging data from level 0 (Production) up to levels 1 and 2 (Manufacturing Control) of the ISA-95 model. Yet the UNS requires modeling of systems across all levels (ISA-95 level 0 to level 4), including higher-level IT systems such as scheduling, quality, and order management, which benefit from the flexibility of MQTT data structures.
Complexity in Handling IoT and Edge Devices
OPC UA is a robust protocol, but its complexity makes it harder to manage IIoT and edge devices where lightweight communication is essential. In a UNS, edge computing is critical to offload processing from the central system by performing pre-processing and filtering closer to the devices.
MQTT, on the other hand, is well-suited for edge computing scenarios because of its lightweight nature, which makes it more efficient in IoT environments with limited resources. OPC UA requires more resources to implement, which can strain smaller devices.
Interoperability and Legacy Systems
One of the major strengths of OPC UA is its ability to integrate legacy systems (e.g., SCADA, PLCs, industrial control systems). This makes OPC UA valuable as part of a hybrid solution for certain industrial automation use cases within a UNS.
However, OPC UA alone lacks the scalability, dynamic topic management, and real-time communication required in a UNS, even though it can complement other technologies for specific industrial integrations.
Integration Complexity
OPC UA includes multiple profiles that do not ensure seamless interoperability by default, often necessitating intermediaries, which adds complexity to integration. MQTT’s broad support across OT and IT tools, by contrast, accelerates innovation by enabling plug-and-play compatibility with best-in-class tools.
Adding new data points with OPC UA can become complex, particularly in large systems with deeply structured data models or when new devices need specific OPC UA schema adjustments. Integration costs here stem from configuring new nodes in the OPC UA address space, ensuring compatibility with existing data types, and potentially modifying client applications to recognize new data sources. The decoupled, hierarchical topic structure of a UNS enables new data points to be added without the need for schema modifications in a central server, reducing labor and time requirements.
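This “no central schema to update” property is what makes a UNS cheap to extend. In a toy model of a broker’s topic space (names below are invented), a brand-new data source simply publishes and thereby brings its topic into existence, with no address-space node to configure and no client applications to modify up front:

```python
class TopicTree:
    """Toy model of a UNS broker's topic space: topics spring into
    existence on first publish; there is no central schema to update."""
    def __init__(self):
        self.values = {}

    def publish(self, topic, payload):
        # The act of publishing implicitly 'creates' the topic.
        self.values[topic] = payload

uns = TopicTree()
# A newly installed sensor starts publishing immediately:
uns.publish("acme/berlin/packaging/line1/vision_sensor/defect_count", 3)
```

Consumers that care about the new data point subscribe to it (or already cover it via a wildcard); everything else is untouched. With OPC UA, the equivalent change typically means modeling a new node in the server’s address space first.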
Summary: OPC UA’s Limitations in Building a UNS
OPC UA is a powerful and widely adopted protocol in industrial automation, particularly for integrating legacy systems and ensuring interoperability across various devices and platforms. However, it is not ideal as the central data hub technology for a Unified Namespace (UNS) due to its limitations in handling real-time, event-driven communication, scalability for millions of devices, and dynamic topic management.
While OPC UA can still be used in specific industrial systems within a UNS, it is better suited as a complementary technology rather than the core hub. MQTT remains the superior choice for managing real-time, low-latency communication and dynamic, hierarchical topics, making it the best technology for a central data hub in a UNS designed to assess the real-time status of an entire enterprise across millions of devices.
Data Lake/Data Warehouse and UNS
A data lake and data warehouse platform like Snowflake is excellent for storing, processing, and analyzing large volumes of data, particularly for historical analysis, business intelligence, and machine learning applications. However, it is not suitable as the central data hub in a Unified Namespace for several key reasons. A UNS is designed to provide real-time, dynamic communication across an entire enterprise, while Snowflake and similar data lake platforms are focused on batch processing and data storage rather than real-time event-driven communication.
Why a Data Lake or Data Warehouse (Like Snowflake) is Not Suitable as a Central Data Hub in a UNS
Not Designed for Real-Time, Event-Driven Communication
The publish-subscribe model that a UNS relies on allows for low-latency, real-time updates. In contrast, Snowflake operates on batch processing and query-based models, where data is typically collected, processed, and analyzed at specific intervals, not in real time.
Snowflake is a combined data warehouse and data lake platform optimized for large-scale batch processing and analytics, but it is not designed for real-time communication. A UNS requires an event-driven architecture where data is continuously published and consumed by various systems in real time.
High Latency in Data Processing
Snowflake is not built for low-latency, push-based communication. In a UNS, devices, systems, and applications need to publish and receive data instantly to reflect the current state of the business across the entire enterprise. This is essential for machine data and systems such as SCADA or MES, where real-time data is critical for decision-making. Snowflake excels at handling large volumes of data for historical analysis but is not a real-time platform.
However, it does offer some features that enable near real-time data processing, which can be sufficient for use cases where low-latency (but not millisecond-level latency) is required:
Snowpipe for continuous data ingestion, which integrates with streaming tools like Kafka or Kinesis (and can then feed events onward, e.g., into a UNS)
Streams and Tasks for change data capture, to track changes to tables in near real time and trigger actions based on those changes (e.g., into a UNS)
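As a hedged sketch of what these features look like in Snowflake SQL (all object names, the warehouse, and the one-minute schedule are illustrative; note that even here the minimum cadence is scheduled, not push-based):

```sql
-- Continuous ingestion: Snowpipe loads files as they land in a stage
CREATE PIPE telemetry_pipe AUTO_INGEST = TRUE AS
  COPY INTO telemetry_raw FROM @telemetry_stage;

-- Change tracking: a stream records row changes on a table...
CREATE STREAM telemetry_changes ON TABLE telemetry_raw;

-- ...and a task periodically consumes them (near real time, not push)
CREATE TASK forward_changes
  WAREHOUSE = transform_wh
  SCHEDULE = '1 MINUTE'
AS
  INSERT INTO telemetry_latest SELECT * FROM telemetry_changes;
```

This pattern is useful for feeding analytics results back toward a UNS, but the scheduled, batch-oriented cadence underlines why Snowflake complements a UNS rather than serving as its real-time hub.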
No Support for Millions of Dynamic, Hierarchical Topics
Snowflake operates as a static storage platform with structured data stored in tables and schemas, which is unsuitable for the dynamic, event-driven data flow required in a UNS. Snowflake’s architecture is not built to handle dynamic topic creation or real-time updates at the scale and flexibility needed for a UNS.
Snowflake is not optimized for handling the message throughput and device connectivity required in a UNS.
No Integration for Edge Computing
Edge computing plays a critical role in a UNS, where data is processed and filtered closer to the source (i.e., at the edge) before being sent to the central hub. This offloads the central system and reduces bandwidth usage while ensuring that real-time data is processed efficiently.
Snowflake is designed for centralized data storage and does not provide native support for edge computing or real-time processing at the device level.
Summary: Data Lakes and Warehouses are for Analytics, Not for UNS
Snowflake and similar data lake platforms are excellent for storing and analyzing large volumes of historical data but are not suitable as the central data hub in a UNS. A UNS requires real-time, low-latency, event-driven communication, dynamic topic creation, and flexible data routing—all of which Snowflake does not support.
Despite the above, Snowflake can still play a valuable role in a hybrid UNS architecture, where it stores and analyzes historical data.
Wrap-up: Choose the Right Technology for Building a Unified Namespace
Building a Unified Namespace (UNS) requires a technology that supports real-time, event-driven communication, dynamic topic structures, and seamless IT-OT integration. While technologies like MQTT Sparkplug, Kafka, and OPC UA have their strengths, on their own they lack the flexibility and scalability needed for an enterprise-level UNS. MQTT remains the best choice for building a UNS.
Read our blog ‘Why MQTT is Critical for Building a Unified Namespace’ to learn more.

Jens Deters
Jens Deters is the Principal Consultant, Office of the CTO at HiveMQ. He has held various roles in IT and telecommunications over the past 22 years: software developer, IT trainer, project manager, product manager, consultant, and branch manager. As a long-time expert in MQTT and IIoT and developer of the popular GUI tool MQTT.fx, he and his team support HiveMQ customers every day in implementing the world's most exciting (I)IoT use cases at leading brands and enterprises.