This article contains some user submitted Vortex OpenSplice general configuration questions that may prove useful to others.
Vortex OpenSplice supports a considerable number of configuration parameters. The easiest way to modify the configuration file is with the Vortex Configurator Tool, which shows all available configuration options along with detailed explanations of their functions. See the Vortex OpenSplice Configuration Editor documentation for more information.
How do I select the DBMS table to be used?
DDS-to-DBMS example configuration in the OpenSplice XML config file (the line numbers are referred to in the explanation below):

```xml
1  <OpenSplice>
2    <DbmsConnectService name="dbmsconnect">
3      <DdsToDbms>
4        <NameSpace partition="*" topic="Dbms*"
5            dsn="DSN" usr="USR" pwd="PWD" odbc="ODBC">
6          <Mapping topic="DbmsTopic" table="DbmsTable"/>
7        </NameSpace>
8      </DdsToDbms>
9    </DbmsConnectService>
10 </OpenSplice>
```
On line 3 a DdsToDbms element is specified in order to configure data bridging from DDS to DBMS. On line 4, a NameSpace is defined that has interest in all topics starting with "Dbms" in all partitions. Both the partition- and topic-expression make use of the *-wildcard (matching any sequence of characters). These wildcards match both topics described in the scenario, but may match more. If the mapping should be explicitly limited to both topics, the topic-expression can be changed to DbmsTopic,DbmsDdsTopic.

The DbmsConnect service implicitly maps every matching topic to an identically named table in the DBMS. While this is exactly what we want for the DbmsDdsTopic, the database application expects the data from the DbmsTopic topic to be mapped to the table DbmsTable. This is explicitly configured in the Mapping on line 6. If a table already exists and its definition matches the topic definition, the service will use that table. If a table does not exist, it will be created by the service. If a table exists but does not match the topic definition, the mapping fails.
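If the mapping should be limited to exactly those two topics, the NameSpace could be written as follows; this is a sketch, reusing the dsn/usr/pwd/odbc placeholder values from the example above:

```xml
<NameSpace partition="*" topic="DbmsTopic,DbmsDdsTopic"
    dsn="DSN" usr="USR" pwd="PWD" odbc="ODBC">
  <!-- DbmsDdsTopic is implicitly mapped to an identically named table;
       only DbmsTopic needs an explicit Mapping -->
  <Mapping topic="DbmsTopic" table="DbmsTable"/>
</NameSpace>
```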
Is there a maximum size a single topic-sample can be?
There is no DDS-imposed limit on the size of a topic. Limits are imposed by the memory available to hold both the administration of a sample and its type, as well as the memory required to store the actual data (in shared memory or, in the case of the 'single-process' deployment of V6.x, on the process heap). This article describes some of the issues faced when modifying one of the OpenSplice examples to send a sample of 10 MB.
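In a shared-memory deployment, large samples only fit if the nodal database is big enough. As an illustrative sketch (the domain Name and the Size value below are assumptions, not recommendations), the shared-memory size can be raised in the Domain section of the configuration:

```xml
<Domain>
  <Name>ospl_shmem_big_samples</Name>
  <Database>
    <!-- 150 MB of shared memory, leaving headroom for several 10 MB
         samples plus their administration; tune to your own system -->
    <Size>157286400</Size>
  </Database>
</Domain>
```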
How can I check the availability of data on a Data Reader?
DDS has a number of mechanisms that allow you to monitor the availability of new data in your DataReader. One or more of these can be used in your application at the same time:
- Polling – Check periodically whether data is available, by calling the 'take' operation on the DataReader and seeing if it returns any data.
- Listeners – Listeners can be attached to the DataReader to ensure your application is notified of events it is interested in. A middleware thread takes care of the notification: it calls the method in the Listener that matches the event that occurred. The method in the Listener is implemented by the application, and a Listener can be attached when the DataReader is created or at a later stage. This approach requires a bit more work on the application side, but it prevents the application from doing unnecessary work.
- Waitsets – You can create relevant conditions and attach them to a Waitset. When the wait method of the Waitset is called, the calling thread blocks until one or more of the attached conditions become true. By inspecting the conditions attached to the Waitset, the application can determine which events have occurred. The waiting is done in an application thread, which gives the application full control of that thread.
Can I control service behaviour after a failure?
Each service reports its liveliness regularly using the DDS administration. If the service fails to do so, the Domain service will assume the service has become non-responsive. The FailureAction element determines what action is taken by the DomainService in case this happens.
The following actions are available:
- skip: Ignore the non-responsiveness and continue.
- kill: End the service process by force.
- restart: End the service process by force and restart it.
- systemhalt: End all OpenSplice services including the Domain service (for the current DDS Domain on this computing node).
For example in the ospl.xml configuration file:
```xml
<Domain>
  <Name>ospl_sp_ddsi</Name>
  <Id>0</Id>
  <SingleProcess>true</SingleProcess>
  <Description>Stand-alone 'single-process' deployment and standard DDSI networking.</Description>
  <Service name="ddsi2">
    <Command>ddsi2</Command>
    <FailureAction>restart</FailureAction>
  </Service>
  <Service name="durability">
    <Command>durability</Command>
  </Service>
  <Service name="cmsoap">
    <Command>cmsoap</Command>
  </Service>
</Domain>
```
How do I configure OpenSplice to use single process or shared memory architecture?
OpenSplice is configured using an XML file that specifies its deployment model. The OSPL_URI environment variable points to the configuration file that OpenSplice uses.
The default value refers to the ospl.xml file located in the etc/config directory of the Vortex OpenSplice installation.

Single Process

By default OpenSplice uses the Single Process memory architecture. A number of example configuration files that use the single-process architecture are also supplied in the etc/config directory. These files all have 'sp' in their names, for example ospl_sp_ddsi.xml or ospl_sp_nativeRT.xml.
If you are using the Single Process architecture, a networking service also needs to be available for two DDS applications to communicate. A single-process deployment is enabled when the Domain section of the XML configuration contains <SingleProcess>true</SingleProcess>.
Shared Memory

If you want to switch from the default single-process memory architecture to shared memory, you need to use a configuration file that sets up the shared memory configuration. There are a number of examples of these in the etc/config directory of the OpenSplice installation; they all have 'shmem' in their name, for example ospl_shmem_no_network.xml, ospl_shmem_ddsi.xml, or ospl_shmem_nativeRT.xml. The OSPL_URI environment variable needs to point to one of these files. A networking service is not necessarily required, since DDS applications running on the same machine can communicate through the shared memory. A shared-memory deployment is enabled when the Domain section of the XML configuration does not contain <SingleProcess>true</SingleProcess> but does contain a <Database> element describing the shared memory to use.
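A minimal sketch of such a Domain section, loosely modelled on the ospl_shmem_ddsi.xml example shipped with the product (the Name, Id, and Size values here are illustrative assumptions):

```xml
<Domain>
  <Name>ospl_shmem_ddsi</Name>
  <Id>0</Id>
  <!-- No <SingleProcess> element: services run as separate processes
       and applications attach to the nodal shared memory segment -->
  <Database>
    <Size>10485760</Size> <!-- 10 MB of shared memory -->
  </Database>
  <Service name="ddsi2">
    <Command>ddsi2</Command>
  </Service>
</Domain>
```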
How can I configure two Shared Memory domains in parallel?
When the OpenSplice daemon starts in shared memory mode it obtains an address from the configuration file (or uses a default) which determines an address in the process’ virtual address space where the shared memory segment is mapped.
If the daemon is not able to start it is possible that the address (either default or configured) that it needs to use is already being used on that node. The problem can also be seen if a large database size has been specified causing the memory segment boundary to be crossed. In this case a new shared memory segment address should be specified.
This can be done by adding an <Address> element to the //OpenSplice/Domain/Database section of the configuration XML file.
This address must be the same for each process communicating within a domain. It is useful to use a Memory Mapper to help select the value for the memory address. Here is the section from the OpenSplice Deployment Guide (v6.8.1) with some more information:
This element specifies the start address where the nodal shared administration is mapped into the virtual memory for each process that attaches to the current Domain. The possible values are platform dependent. Change this value if the default address is already in use, for example by another Domain Service or another product.
Default values per platform:
- 0x20000000 (Linux 2.6 on x86)
- 0x140000000 (Linux 2.6 on x86_64)
- 0x40000000 (Windows on x86)
- 0x40000000 (Windows on x86_64)
- 0xA0000000 (Solaris on SPARC)
- 0xA0000000 (AIX 5.3 on POWER5+)
- 0x0 (VxWorks 5.5.1 on PowerPC 604)
- 0x60000000 (VxWorks 6.x on PowerPC 604)
- 0x20000000 (Integrity on mvme5100)
- Full path: //OpenSplice/Domain/Database/Address
- Format: string
- Default value: 0x40000000
- Valid values: 0x0 / –
- Occurrences min-max: 0-1
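For instance, to run a second shared-memory domain alongside one that uses the default address, the second domain's configuration could map its database at a different address. This is a sketch; the Name, Id, Size, and Address values are illustrative assumptions for a 64-bit Linux host:

```xml
<Domain>
  <Name>SecondDomain</Name>
  <Id>1</Id>
  <Database>
    <Size>10485760</Size>
    <!-- Must not overlap the address range used by the first domain,
         and must be identical for every process in this domain -->
    <Address>0x180000000</Address>
  </Database>
</Domain>
```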
How do I set up the Cmd Prompt / Terminal environment to run OpenSplice?
1. Go to the <install_dir>/<E>/<platform> directory, where <E> is HDE or RTS and <platform> is, for example, x86.linux2.6.
2. Run the release script for your platform:
   - For POSIX shells: $ source ./release.com
   - For Windows: > release.bat
The environment is then configured. If you are building the examples on Windows using the supplied batch files, remember to run the Visual Studio command-prompt setup script vcvarsall.bat first.
How do I select which Network Interface OpenSplice uses?
Every networking service is bound to only one network interface card (NIC). By default the networking service tries to use the first broadcast-enabled interface. This can be changed by setting the NetworkInterfaceAddress element in the General section of the networking service's configuration.
This element specifies which network interface card should be used. The card can be uniquely identified by its corresponding IP address or by its symbolic name (e.g. eth0). If the value “first available” is entered here, the OpenSplice middleware will try to look up an interface that has the required capabilities.
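A sketch for the native networking service (the service name and the IP address below are illustrative assumptions; a symbolic interface name such as eth0 may be used instead of an address):

```xml
<NetworkService name="networking">
  <General>
    <!-- An IP address, a symbolic name (e.g. eth0),
         or "first available" -->
    <NetworkInterfaceAddress>192.168.1.10</NetworkInterfaceAddress>
  </General>
</NetworkService>
```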
How do I enable DDSI2 Network Logging?
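DDSI2 network logging is controlled through the Tracing section of the DDSI2 service configuration. A sketch, with an assumed verbosity level and output file name (consult the Deployment Guide for the full set of verbosity levels and trace categories):

```xml
<DDSI2Service name="ddsi2">
  <Tracing>
    <!-- Higher verbosity levels include the output of the lower ones -->
    <Verbosity>finest</Verbosity>
    <OutputFile>ddsi2.log</OutputFile>
  </Tracing>
</DDSI2Service>
```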
Does OpenSplice support Partition Mapping?
Yes, an example of a valid configuration is:
```xml
<NetworkPartition Address="126.96.36.199" Connected="true" Name="theNetworkPartition"/>
<PartitionMapping DCPSPartitionTopic="MulticastPartition.*" NetworkPartition="theNetworkPartition"/>
<IgnoredPartition DCPSPartitionTopic="LocalPartition.*"/>
```
The partitionTopic expression supports '*' as a wildcard. A partition expression normally consists of a partition name and a topic name, separated by a '.'. Either part can optionally be replaced by the '*' wildcard.

Valid partitionTopic expressions include, for example: MulticastPartition.*, SomePartition.SomeTopic, and *.* (the partition and topic names are illustrative).