This article collects some submitted, memory-related Vortex OpenSplice questions whose answers may prove useful to others.
How big is the Shared Memory segment (typically)? Is there a process attached to it?
OpenSplice utilizes a size-configurable shared-memory segment for holding all data and metadata. A minimum size of 4 MB is recommended, since DDS defines a set of built-in topics that must be maintained; the default size is 10 MB. The required size furthermore depends heavily on:
- The number of topics, especially the active range of key-values that define how many instances of each topic are in the system.
- The history settings, especially the number of samples per instance that have to be maintained.
Topics are physically present only once on a node, regardless of the number of applications attached to the shared memory. Smart administration provides each participant with its own ‘view’ of this information, so that it is perceived as a private data cache. This architecture greatly improves scalability and allows for many applications on one machine, as is typical in combat-system environments.
How is DDS mapped to a relational database table?
The OpenSplice DbmsConnect Module is a pluggable service of OpenSplice that provides a seamless integration of the real-time DDS and the non-/near-real-time enterprise DBMS domains. It complements the advanced distributed information storage features of the OpenSplice Persistence Module.
The DbmsConnect service can bridge data from the DDS domain to the DBMS (Database Management System) domain and vice versa. In DDS, data is represented by topics, while in DBMS data is represented by tables. With DbmsConnect, a mapping between a topic and a table can be defined.
Because not all topic-table mappings have to be defined explicitly (DbmsConnect can match topics and tables automatically when their names are the same), namespaces can be defined. A namespace specifies or limits the content of interest and allows for easy configuration of all mappings falling within (or defined in) that namespace. For bridging data from DDS to DBMS, the content of interest consists of a partition and topic-name expression; for bridging data from DBMS to DDS, it consists of a table-name expression.
A mapping thus defines the relationship of a table in DBMS with a topic in DDS and can be used to explicitly map a topic and table with different names, or define settings for a specific mapping only. More information can be found in the Vortex OpenSplice Deployment Guide.
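As a rough illustration, a DbmsConnect configuration in ospl.xml could look like the sketch below. The element and attribute names (service name, DSN, credentials, topic and table names) are assumptions based on the general shape described above; consult the Vortex OpenSplice Deployment Guide for the exact schema.

```xml
<!-- Illustrative sketch only: verify element/attribute names against the
     Deployment Guide. DSN, credentials and names are placeholders. -->
<DbmsConnectService name="dbmsconnect">
   <!-- Bridge DDS data into the DBMS: partition + topic-name expression -->
   <DdsToDbms>
      <NameSpace partition="*" topic="Sensor*"
                 dsn="MyOdbcDsn" usr="user" pwd="password">
         <!-- Explicit mapping for a topic and table with different names -->
         <Mapping topic="SensorReading" table="SENSOR_READINGS"/>
      </NameSpace>
   </DdsToDbms>
   <!-- Bridge DBMS data into DDS: table-name expression -->
   <DbmsToDds>
      <NameSpace partition="dbmsPartition" table="CMD_*"
                 dsn="MyOdbcDsn" usr="user" pwd="password"/>
   </DbmsToDds>
</DbmsConnectService>
```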
Is it possible to deploy OpenSplice on a node with only volatile memory storage?
Yes. The only shortfall is that this particular node will be unable to store data in a Persistent state, i.e. data will not outlive a restart of this node. However, as long as at least one node in the system is deployed with non-volatile memory, that node will be used to store Persistent data from the entire system. The Transient durability option will still be available on the volatile node, as it does not rely on non-volatile memory. All functionality offered by the full set of supported profiles is available on diskless nodes too.
Can Valgrind be used for memory analysis in OpenSplice?
When using the shared-memory architecture of OpenSplice DDS, we have found that Valgrind consistently reports incorrect information, as it cannot deal with the intrinsics of our shared-memory implementation. This has caused a great deal of wasted time when trying to improve the product, and we now rely on application-based testing and explicit inspection methods for memory testing instead. With OpenSplice DDS v6 in heap memory mode, tools such as Valgrind should work well; with that architecture, any memory leaks should be traceable efficiently via tooling.
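For reference, heap-based deployment is selected in ospl.xml via the single-process setting, shown in the sketch below. The domain name is a placeholder, and the exact element placement should be verified against the Deployment Guide for your OpenSplice version.

```xml
<!-- Sketch: run OpenSplice in single-process (heap) mode so that tools
     like Valgrind can analyse the application process directly. -->
<OpenSplice>
   <Domain>
      <Name>ospl_sp_domain</Name> <!-- placeholder domain name -->
      <SingleProcess>true</SingleProcess>
   </Domain>
</OpenSplice>
```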
Memory consumption continues to increase until a crash occurs.
Typically this behaviour is caused by misconfiguration of a DataReader in your application. For example, with the QoS settings Reliability = RELIABLE, Durability = VOLATILE and History = KEEP_ALL, a DataReader whose application does not regularly take data from the DataReader cache will cause that cache to grow indefinitely. With Durability set to VOLATILE, however, this will only occur while the DataReaders in question are active.
What is the relationship between shared memory and the database size setting in “ospl.xml” in the OpenSplice DDS environment?
The database size setting in ospl.xml is exactly the size of the shared memory segment that OpenSplice requests from the operating system.
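For example, a 10 MB segment (the default mentioned above) can be configured as in the sketch below; the domain name is a placeholder, and the surrounding elements should be checked against your installation's ospl.xml.

```xml
<!-- Sketch of the relevant ospl.xml fragment: the Database Size element
     sets the shared memory segment size in bytes (here 10 MB). -->
<OpenSplice>
   <Domain>
      <Name>ospl_shmem_domain</Name> <!-- placeholder domain name -->
      <Database>
         <Size>10485760</Size>
      </Database>
   </Domain>
</OpenSplice>
```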
DataReader read/take operations allocate “loaned” memory
The take/read operations on a DataReader usually allocate loaned memory to the application. This memory is owned by the OpenSplice services, not by the application directly, and it must be returned or there will be memory leaks. Each API deals with this differently, so please check the documentation. For instance, traditional C++ requires an explicit return_loan operation, while most other APIs use standard reference counting, so the loaned memory is de-allocated when the samples go out of scope. Please be aware of this when passing these samples around the application.
In some APIs it is also possible to allocate the memory in the application and have OpenSplice “fill” it. Again, check the API documentation for the optional read/take variants so you can choose the appropriate method.
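As a sketch of the loan pattern in the traditional (DCPS) C++ API, the fragment below shows a take followed by return_loan. `Foo`, `FooSeq` and the `process` function are hypothetical placeholders for an IDL-generated type and application logic; the fragment assumes a valid `FooDataReader` and omits error handling.

```cpp
// Sketch of the loan pattern in the classic OpenSplice C++ (DCPS) API.
// 'FooSeq' is a hypothetical generated sequence type; 'process' is a
// placeholder for application code.
FooSeq             data;  // sequence loaned by the middleware
DDS::SampleInfoSeq info;

DDS::ReturnCode_t result = reader->take(
    data, info, DDS::LENGTH_UNLIMITED,
    DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE);

if (result == DDS::RETCODE_OK) {
    for (DDS::ULong i = 0; i < data.length(); ++i) {
        if (info[i].valid_data) {
            process(data[i]);  // use the sample while the loan is held
        }
    }
}

// Return the loan to the middleware; omitting this call leaks the memory.
reader->return_loan(data, info);
```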