The DDS Glossary defines commonly occurring keywords and phrases used throughout the Object Management Group's Data Distribution Service (OMG DDS), as well as some Vortex-specific vernacular. Familiarising yourself with this list will improve your comprehension of all Vortex documentation.
Vortex DDS Glossary
Terms and Definitions
A data-centric environment provides a communication mechanism that is custom-tailored to your distributed application's specific requirements. Distributed application developers can concentrate on the operation of their specific application without worrying about how they are going to communicate with the other applications in the environment.
DDS is a publish and subscribe service. Data values (Samples) are transferred through the system for conceptual "Data Objects". A "publication" (the association of a Publisher and a Data Writer) sends Samples to one or more "subscriptions" (the association of a Data Reader and a Subscriber).
Domain Participant Factory
A singleton factory; the main entry point to DDS.
Domain Participant
A domain participant is an entity that represents a DDS application's participation in a domain. It serves as a factory, container, and manager for the DDS entities.
Domain
A communication context that provides a virtual environment, encapsulating different concerns and thereby optimizing communications. DDS applications send and receive data within a domain, which provides a shared virtual communication environment for participants having the same domain id and isolates participants associated with different domains. Only participants within the same domain can communicate, which is useful for isolating and optimizing communication within a community that shares common interests.
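As a rough sketch of how an application enters a domain, using the ISO C++ DCPS API (header and class names may differ slightly between vendors and language bindings):

    #include <dds/dds.hpp>   // convenience header of the ISO C++ DCPS API (name may vary)

    int main() {
        // Constructing a DomainParticipant joins domain 0; the participant
        // then acts as the factory for Topics, Publishers and Subscribers.
        dds::domain::DomainParticipant participant(0);

        // ... create Topics, Publishers and Subscribers from 'participant' ...
        return 0;
    }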
Partition
A sequence of logical "namespaces" for topics. The default setting is an empty sequence, indicating the default partition.
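For illustration, a Publisher can be placed in a hypothetical partition named "telemetry" through its QoS (ISO C++ API sketch, reusing the participant from the previous sketch; a Subscriber is configured in the same way):

    // Only Subscribers whose Partition QoS contains a matching name will
    // communicate with this Publisher.
    dds::pub::qos::PublisherQos pub_qos = participant.default_publisher_qos()
        << dds::core::policy::Partition("telemetry");
    dds::pub::Publisher publisher(participant, pub_qos);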
Topic
The most basic description of data that is to be published and/or subscribed to. A topic connects a data writer with a data reader: communication does not occur unless the topic published by a data writer matches a topic subscribed to by a data reader. Communication via topics is anonymous and transparent; publishers and subscribers need not be concerned with how topics are created or who is writing/reading them, since the DDS DCPS middleware manages these issues. In programming terms, a Topic is a class and each instance is a Sample. Topic instances are identified by a key (similar to a packet ID) defined in IDL, and each Topic has an associated set of Quality of Service parameters.
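A minimal sketch of Topic creation in the ISO C++ API, assuming a hypothetical IDL-generated type SensorData::Reading whose key field identifies each instance:

    // The Topic name and data type must match on the publishing and
    // subscribing sides for communication to occur.
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReading");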
Sample
The data values associated with a Topic that are passed between applications are known as DDS Samples. A Sample represents an atom of data information as returned by a Data Reader's read/take operations. It consists of two parts: the SampleInfo and the Data itself. The Data part is the data as produced by the Publisher; the SampleInfo contains additional information provided by the DDS, such as a unique key and the associated QoS policies.
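A sketch of taking Samples and inspecting their SampleInfo with the ISO C++ API, assuming a Data Reader named reader for the Topic above (reader creation is shown under Data Reader below); SensorData::Reading and process() are hypothetical:

    // take() removes the returned samples from the reader's cache and loans
    // them to the application; each sample pairs SampleInfo with the data.
    dds::sub::LoanedSamples<SensorData::Reading> samples = reader.take();
    for (const auto& sample : samples) {
        if (sample.info().valid()) {     // the data part is only valid for real updates
            process(sample.data());      // hypothetical application handler
        }
    }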
Quality of Service (QoS) Policies
A rich set of characteristics that define the behaviour of the DDS systems (such as reliability, liveliness, durability, etc.). QoS policies control the flow of the data through the system. The Topic, Data Reader, Data Writer, Publisher and Subscriber all have QoS policies. The QoS policies of the Publisher, Data Writer and Topic control the data on the sending side; the QoS policies of the Subscriber, Data Reader and Topic control the data on the receiving side. These must be compatible for successful communication: for example, a Publisher/Data Writer with a BEST_EFFORT QoS cannot send samples for a Topic with a RELIABLE QoS, as it could degrade the Topic; however, a RELIABLE Publisher/Data Writer can send samples for a BEST_EFFORT Topic.
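As an illustrative sketch of this request/offered compatibility (ISO C++ API, building on the earlier sketches), offering RELIABLE on the writing side satisfies both RELIABLE and BEST_EFFORT readers:

    // A RELIABLE DataReader will not match a BEST_EFFORT DataWriter, but this
    // RELIABLE writer matches readers requesting either kind.
    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::Reliability::Reliable();
    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic, writer_qos);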
Entities
The system components are called "Entities" because they all inherit from the Entity class. Each Entity has specialised QoS policies. An Entity may have a Listener, a callback interface for notifications about changes in the Entity's state, or a wait interface (using WaitSets) for detecting changes in the Entity's state.
Subscriber
An entity created by a Domain Participant to manage a group of Data Readers. In order to subscribe to a Topic, the Subscriber must be in the same Domain and Partition and have a compatible set of QoS policies associated with it.
Publisher
An entity created by a Domain Participant to manage a group of Data Writers. In order to associate with a Topic and publish samples of that Topic, the Publisher must be in the same Domain and Partition and have a compatible set of QoS policies.
Data Reader
An entity attached to a Subscriber, used to subscribe to a Topic, providing type-safe operations to read/receive data. Each Data Reader can be associated with only one Topic and therefore exactly one data type. This Topic must be created before the Data Reader. A Data Reader can obtain its subscribed data via two approaches:
- Listener-based approach – An asynchronous mechanism that obtains data via callbacks on a separate thread, without blocking the main application.
- WaitSet-based approach – A synchronous mechanism that blocks the current thread until one or more conditions attached to the WaitSet are met, at which point a list of active_conditions is returned.
There are several ways samples can be read using a Data Reader, including reading with conditions and buffer management; these are described in greater detail in your respective language reference guide.
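A sketch of the listener-based approach in the ISO C++ API (a WaitSet-based sketch appears with the Condition entry below), assuming a Subscriber named subscriber created from the participant; the no-op listener base class lets the application override only the callback it needs, and SensorData::Reading and process() remain hypothetical:

    class ReadingListener : public dds::sub::NoOpDataReaderListener<SensorData::Reading> {
        // Invoked by the middleware on its own thread whenever new data arrives.
        void on_data_available(dds::sub::DataReader<SensorData::Reading>& reader) override {
            for (const auto& sample : reader.take()) {
                if (sample.info().valid()) {
                    process(sample.data());   // hypothetical application handler
                }
            }
        }
    };

    ReadingListener listener;
    dds::sub::DataReader<SensorData::Reading> reader(
        subscriber, topic, subscriber.default_datareader_qos(),
        &listener, dds::core::status::StatusMask::data_available());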
Data Writer
An entity attached to a Publisher, bound to exactly one Topic and therefore exactly one data type, providing type-safe operations to write/send data. The Topic must be created before the Data Writer.
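Writing a sample is then a single type-safe call (ISO C++ API sketch, hypothetical SensorData::Reading type):

    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic);
    SensorData::Reading reading;     // populate the IDL-generated fields here
    writer.write(reading);           // publish one Sample of the Topic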
Listener
Provides a generic mechanism for the middleware to notify an application of status changes in the Entity to which it is attached.
WaitSet
Allows application threads to block and wait until one or more of the attached Conditions has a trigger value of TRUE, or until a specified timeout occurs.
Condition
An object attached to a WaitSet which allows a thread to block until one or more of the attached Condition objects evaluates to true, or until the timeout occurs. Each Condition has a trigger_value that can be true or false and is set by the Data Distribution Service. Conditions can be of type:
- GuardCondition – Application-controlled; unblocks a WaitSet manually when the application calls set_trigger_value(TRUE) on the GuardCondition.
- StatusCondition – Provides a generic mechanism for the application to be informed about relevant communication statuses of Entity objects that have status attributes; access is provided to the application by the get_statuscondition operation. The available statuses depend on the Entity and are described in your relevant language reference guide.
- ReadCondition – Allows an application to specify the data samples it is interested in by means of their lifecycle states. The WaitSet triggers as long as data is available that matches the selected SampleState, ViewState and InstanceState.
- QueryCondition – Allows an application to specify the data samples it is interested in by means of an SQL expression.
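A sketch of the WaitSet-based approach combining a ReadCondition and a GuardCondition (ISO C++ API, reusing the reader from the Data Reader sketch; timeout behaviour, such as the exception raised, varies by language binding):

    dds::core::cond::WaitSet waitset;

    // Triggers while the reader holds samples in the requested states.
    dds::sub::cond::ReadCondition read_cond(reader, dds::sub::status::DataState::new_data());
    waitset.attach_condition(read_cond);

    // Application-controlled condition, e.g. to wake the thread for shutdown.
    dds::core::cond::GuardCondition shutdown_guard;
    waitset.attach_condition(shutdown_guard);

    // Block until at least one condition triggers or 10 seconds elapse; the
    // returned sequence holds the currently active conditions.
    dds::core::cond::WaitSet::ConditionSeq active =
        waitset.wait(dds::core::Duration::from_secs(10));

    // From another thread, the GuardCondition unblocks the WaitSet manually:
    shutdown_guard.trigger_value(true);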
Durability QoS
Responsible for maintaining historical data between services and providing historical data to late-joining applications. Specifies whether Samples should outlive their Data Writers for late joiners. The longer a sample lives, the greater the overhead passed to the DDS becomes, as it moves from dropping samples, to keeping them in shared memory, to writing them to hard disk. The provided variants are:
- Volatile – No need to keep Samples for late-joining data readers.
- Transient Local – Data instance availability for late-joining data readers is tied to the Data Writer's availability.
- Transient – Data sample availability outlives the data writer.
- Persistent – Data sample availability outlives system restarts.
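For example, TRANSIENT_LOCAL combined with a KEEP_LAST history keeps a writer's most recent samples available for late-joining readers (ISO C++ API sketch):

    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::Durability::TransientLocal()
        << dds::core::policy::History::KeepLast(10);   // retain the last 10 samples per instance
    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic, writer_qos);

Readers requesting VOLATILE durability still match this writer, but only readers that themselves request TRANSIENT_LOCAL receive the stored samples when they join.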
Single Process Architecture
This deployment allows the DDS applications and the OpenSplice administration to be contained together within a single operating system process. This 'standalone' single-process deployment option is most useful in environments where shared memory is unavailable or undesirable.
Shared Memory Architecture (also known as "federated")
In the 'federated' shared memory architecture, data is physically present only once on any machine, but smart administration still provides each subscriber with its own private view of this data. Both the DDS applications and the OpenSplice administration interface directly with the shared memory, which is created by the OpenSplice daemon on start-up.
Domain Service
Responsible for creating and initialising the database that is used by the administration to manage the DDS data.
Networking Service
When communication endpoints are located on different computing nodes or in different single processes, the data produced using the local Domain Service must be communicated to the remote Domain Services and vice versa. The Networking Service provides a bridge between the local Domain Service and a network interface. Multiple Networking Services can exist next to each other, each serving one or more physical network interfaces. The Networking Service is responsible for forwarding data to the network and for receiving data from the network. There are two implementations of the networking service:
- Native Networking Service – The optimal implementation of DDS networking for OpenSplice DDS; it is both highly scalable and configurable.
- DDSI – The purpose and scope of the “Data-Distribution Service Interoperability Wire Protocol” is to ensure that applications based on different vendors’ implementations of DDS can interoperate. The protocol was standardised by the OMG in 2008, and was designed to meet the specific requirements of data-distribution systems.
- DDSI2 – The OpenSplice implementation of the Data-Distribution Service Interoperability Wire Protocol. Its features include performance and QoS, fault tolerance, plug-and-play connectivity, configurability and scalability.
- DDSI2E – Extended version of the DDSI2 networking service, giving extra features for Network Partitions, Security, Bandwidth limiting and Traffic Scheduling.
Discovery
The process by which DDS discovers the entities present in the system. The mechanism by which this is done differs between DDSI and OpenSplice Native RT Networking; each networking service has its own set of dependencies for this process to work correctly.
Deadline QoS
A significant parameter for periodic and aperiodic real-time critical communications. It represents the maximum separation between two topic updates. The Data Writer indicates that the application commits to write a new value at least once every deadline period. The Data Readers are notified by the DDS engine when the Deadline QoS contract is violated.
Latency Budget QoS
Significant for real-time communications. It specifies the maximum acceptable delivery delay from the Data Writer to the Data Reader.
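A sketch combining both timing policies on the writing side (ISO C++ API): the writer commits to at least one update per second and declares that up to 5 ms of additional delivery delay is acceptable (the numbers are arbitrary examples):

    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::Deadline(dds::core::Duration::from_secs(1))
        << dds::core::policy::LatencyBudget(dds::core::Duration::from_millisecs(5));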
Reliability QoS
Specifies whether to attempt to deliver all samples (RELIABLE) or whether it is acceptable not to retry any failed data propagation (BEST_EFFORT), similar to the TCP and UDP networking protocols respectively.
Ownership QoS
Specifies whether multiple Data Writers can write the same instance of data and, if so, how these modifications should be handled. With SHARED ownership, multiple writers are allowed to update the same instance and all updates are available to the reader. With EXCLUSIVE ownership, each instance can only be owned by one Data Writer at a time, but the owner of an instance can change dynamically due to Liveliness changes.
Ownership Strength QoS
Specifies the value of 'strength' used to arbitrate among Data Writers that attempt to modify the same data instance.
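A sketch of EXCLUSIVE ownership with a strength value (ISO C++ API; the value 100 is arbitrary); readers then accept updates for an instance only from the strongest live writer:

    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::Ownership::Exclusive()
        << dds::core::policy::OwnershipStrength(100);   // higher strength wins arbitration

Note that the Ownership kind must also be set to EXCLUSIVE on the matching Data Readers, since this policy has to agree on both sides of the communication.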
Liveliness QoS
Offers the following settings to detect failures on the publishing node:
- AUTOMATIC – Infrastructure managed (default); leases are renewed automatically by the service. Only process failures can be detected this way.
- MANUAL_BY_PARTICIPANT – Application managed, the entire Domain Participant renews its lease when one of its contained Entities asserts its liveliness, either explicitly or implicitly.
- MANUAL_BY_TOPIC – Application managed, each Data Writer is responsible for renewing its own lease, either implicitly or explicitly.
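A sketch of MANUAL_BY_TOPIC liveliness with a two-second lease (ISO C++ API; the factory-method name and the lease value are illustrative assumptions); the application must then assert the writer's liveliness, explicitly or implicitly by writing, within every lease period:

    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::Liveliness::ManualByTopic(dds::core::Duration::from_secs(2));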
History QoS
Specifies the number of samples kept by the middleware engine. DDS will attempt to keep only the most recent 'depth' samples (KEEP_LAST) or all samples (KEEP_ALL) of each instance of data, identified by its key.
Transport Priority QoS
An integer value used to attach a priority to data passed through the DDS. This can be used to implement priority bands that ensure high-priority data is more valuable within the DDS than lower-priority data.
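A combined sketch of the History and Transport Priority policies (ISO C++ API; the depth of 5 and priority of 10 are arbitrary examples):

    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos()
        << dds::core::policy::History::KeepLast(5)        // keep the five most recent samples per instance
        << dds::core::policy::TransportPriority(10);      // interpretation is transport/vendor specific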