This page lists all the changes and fixed bugs in Vortex OpenSplice V6.4.x
Regular releases of Vortex OpenSplice are made available which contain fixed bugs, changes to supported platforms and new features.
There are two types of release, major releases and minor releases. Upgrading Vortex OpenSplice contains more information about the differences between these releases and the impact of upgrading. We advise customers to move to the most recent release in order to take advantage of these changes. This page details all the fixed bugs and changes between different Vortex OpenSplice releases. There is also a page which details the new features in the different Vortex OpenSplice releases.
There are two different types of changes: bug fixes and changes that do not affect the API, and bug fixes and changes that may affect the API. These are documented in separate tables.
Vortex OpenSplice 6.4.3p6
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p6
|OSPL-6075 / 14410||Concurrency issue while freeing signal-handler administration
When a process detaches from the user-layer, which occurs automatically when a process terminates, it deinitialises the signal-handler administration. Since it is still possible that the process receives a signal at the same time, the signal-handler thread may still be running and depend on the administration that is freed by the exit handler. A crash or mutex deadlock is often the result.
Solution: The issue has been resolved by removing the possibility of the administration being freed while still in use.
|OSPL-6184 / 14467|| Missing event on data reader view query.
Queries on data reader views could miss a trigger causing data not to be read.
Solution: The trigger mechanism is corrected and the data reader views are now always correctly updated.
|OSPL-6297 / 14516|| Find topic timing behaviour incorrect.
The find topic method on a DomainParticipant has a timeout parameter that specifies the maximum blocking time to wait for the topic to appear. In certain cases the method would block for 100 ms too long, which can have a noticeable impact on the application when called often on topics that don't yet exist.
Solution: The implementation was changed to never exceed the maximum timeout.
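The fix amounts to capping each wait by the time remaining until the deadline. A minimal sketch of that pattern in C (illustrative names and a 100 ms poll interval; not the actual OpenSplice implementation):

```c
#include <stdint.h>

/* Illustrative poll interval matching the 100 ms overshoot described above. */
#define POLL_INTERVAL_MS 100

/* Returns how long to block for one polling iteration: the poll interval,
 * capped by the time remaining until the caller's deadline. Summed over all
 * iterations, the total blocking time can never exceed the timeout. */
static int64_t next_wait(int64_t now_ms, int64_t deadline_ms)
{
    int64_t remaining = deadline_ms - now_ms;
    if (remaining <= 0) {
        return 0; /* deadline reached: stop waiting immediately */
    }
    return remaining < POLL_INTERVAL_MS ? remaining : POLL_INTERVAL_MS;
}
```

With a 250 ms timeout the successive waits are 100, 100 and 50 ms, so the deadline is never overshot.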
|OSPL-6298 / 14515||Crash when closing OpenSplice RTSM with ctrl+c
The RTSM tool accesses the internal database to get information and statistics. When RTSM is terminated with ctrl+c during such an access, it can corrupt the database and make the domain stop.
Solution: The signal is now caught and the handler, when needed, properly detaches the tool from the database before quitting.
|OSPL-6350 / 14531||Deletion of entities while other threads are accessing them causes a lot of exceptions
Deletion of entities while other threads are accessing them fails as a result of a race condition between unlocking and deleting an Entity.
Solution: Deletion of Entities while other threads are accessing the Entity is delayed until ongoing access has finished.
|TSTTOOL-184 / 14348||New instance not automatically displayed in tester
In Tester, if an application data writer has the autodispose_unregistered_instances QoS policy set to false and unregister_instance is then called on that data writer for some instance, a sample with the no_writers state reaches matching data readers and is ignored.
Solution: A new boolean option has been added in the Preferences menu under the Settings tab, called "Ignore not_alive_no_writers samples", which is set to true by default. If the option is set to false, these specific samples will be displayed in Tester.
|TSTTOOL-192 / 14352||Ospltest gives errors on startup
The script directories are not included in the RTS installers.
Solution: Updated scripts and install directories to fix these errors.
Vortex OpenSplice 6.4.3p5
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p5
| Crash of DataWriter for multiple generations of one instance
When a DataWriter can't deliver messages because a peer has no resources to accept them, the writer will temporarily store the messages in its history and try to deliver them at a later point in time. If multiple messages are 'delayed' for multiple generations of one instance, then a crash may occur. This will only occur if a retry is able to deliver the first generation of the instance but not all messages of the newer generations. In this situation the system will detect that the instance of the first generation has ended and disconnect the writer, unaware that more generations exist. As a consequence the DataWriter will crash when it tries to deliver the remaining messages.
Solution: The solution to this problem is that the DataWriter actively reconnects when it has successfully delivered a 'delayed' Unregister message but was not able to deliver all messages of newer generations.
Vortex OpenSplice 6.4.3p4
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p4
|Crash of RTSM tool
The RTSM tool fails to attach to shared memory and crashes on a segmentation violation due to a mismatch in the layout of internal data-structures.
Solution: The issue was resolved and the code has been updated to prevent similar issues from occurring in the future.
|Unable to read sample that complies to readcondition
Due to a locking problem in the product, a read call could return NO_DATA while there actually is data available.
Solution: The issue has been resolved by adding a lock while freeing internal data.
|Durability Service Alignment Improvement
When a node becomes master it requests samples from all the fellows. The master will request data for the groups that it knows about. Data for groups that are not known to the master is aligned later using a different and potentially slower code path, resulting in less efficient alignment.
Solution: If the master should request samples from a fellow and it has not yet received all groups from this fellow, it will first request the groups of the fellow in order to know as many groups as possible before requesting samples. This results in more efficient alignment.
|OSPL-6036||Incorrect behaviour of shared DataReaders
In case multiple shared datareaders are created (by setting the share QoS policy), in certain situations the internal administration could be freed while another shared reader still depends on it. This could lead to undefined behavior such as a crash, or even reading data of a different topic if the internal administration was reused for other readers on a busy system.
Solution: The issue was resolved by fixing a bug related to refcounting.
Vortex OpenSplice 6.4.3p3
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p3
|OSPL-5602 / OSPL-5676 / 14112 / 14137||Alignment of historical data intermittently fails in case multiple master conflicts simultaneously appear.
When multiple durability services are started but communication between them is disabled, they all operate in isolation. When communication between them is suddenly enabled, multiple master conflicts appear. In an attempt to resolve these conflicts, alignment of the data that was published takes place. When the volume of data that was published in isolation is large, the alignment can become massive. In some cases the alignment was flawed, leading to an incorrect end-state where different nodes have different views on the data that was published.
Solution: The administration to keep track of alignment data has been changed, so that data of the same partition/topic from different durability services are not mixed anymore. Furthermore, the alignment procedure has been adapted which leads to a more efficient alignment scheme involving less alignment data.
|RTNetworking service may crash on an interface status change
Due to a lock being released twice on the detection of a change in the status of a monitored network interface, the RTNetworking service could crash.
Solution: The relevant locking has been revised and the double-unlock has been removed.
|Starting the ospl daemon from different user accounts could result in a delay of 10 seconds.
When 2 different users are working on the same node and both are able to attach to the same domain, a user could experience a delay of 10 seconds when the domain was started. Also the process monitor was not working correctly in this use case. This was caused by connecting to the same domain but with a differently named communication socket.
Solution: The name of the communication socket used by the process monitor is now consistent with the name of the key file in the tmp directory.
|Regular reports leading to a large info log
Some info reports that don't convey any particularly interesting information are logged on application start. In a situation where applications are frequently (re)started this could quickly lead to a large info log file.
Solution: The info reports about ignored signals and initialization of the user-clock-module have been removed.
|Unable to launch OpenSplice Tuner after installing RTS
The script to launch the Tuner depends on another script, which wasn't included in the RTS but only in HDE installers. Trying to start the tuner triggers an error 'ospljre: command not found'.
Solution: The RTS installer now includes the ospljre script so the tuner can be launched.
|Crash of durability service during termination
In specific circumstances, when durability is terminated while it is resolving the master of a namespace, the service could fail to take a mutex that was already freed, and would still try to unlock it, which resulted in undefined behaviour and a potential crash of the service.
Solution: The termination mechanism was revised to free internal administration in the correct order and to only unlock the mutex when the lock was successful.
|Error log file is created if builtin topics are disabled
When builtin topics are disabled an error log is created with the error: DataReader (name="DCPSParticipantReader") not created: Could not locate topic with name "DCPSParticipant"
Solution: The DCPSParticipantReader is not created anymore if the builtin topics are disabled.
|Lack of reporting when incompatible meta-descriptor is registered.
When type-support is registered by an application for a type that's already known, the declaration needs to match the existing declaration. When this is not the case, the registration fails and an unhelpful error is logged by the serializer.
Solution: A report was added that refers to a declaration mismatch and also mentions the incompatible part, so the user can find and correct the corresponding IDL declaration.
Vortex OpenSplice 6.4.3p2
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p2
|On VxWorks RTP large delays may occur, even for threads running at the highest priority.
Priority inversion is the phenomenon where a high priority threads runs into a lock that is taken by a low priority thread, the high priority thread will not proceed until the lock is freed by the low priority thread. The remedy to deal with priority inversion is priority inheritance. Priority inheritance temporarily increases the priority of the low priority thread until the lock is freed, thus allowing the high priority thread to proceed. Although VxWorks provides native support for priority inheritance, OpenSplice did not benefit from it. This could cause large delays, even in threads running on the highest priority.
Solution: Priority inheritance for mutexes (which are used to implement locks) is now enabled in OpenSplice for VxWorks RTP. Priority inheritance can be enabled by setting OpenSplice/Domain/PriorityInheritance in the configuration. Note that no priority inheritance for condition variables is supported.
Vortex OpenSplice 6.4.3p1
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3p1
|When the system is terminated while types are being registered a crash can occur.
When a system is started, types that have been specified in the idl specification are being registered. If during the registration of these types the system is terminated a crash may occur. The crash is caused because the registration process is using references to type definitions that may have been deleted already by the termination thread.
Solution: References to type definitions are now properly protected so that it is not possible anymore to delete references to types that are still in use by another thread.
|Stale information in the ospl artefact file causing ospl to exceed the ServiceTerminatePeriod
When the ospl tool is used in blocking mode (-f option) the artefact file is not properly (un)locked and updated under all conditions. Stale administration data could cause the ospl tool to exceed the ServiceTerminatePeriod when stopping a domain or report an incorrect warning when starting the domain.
Solution: The issue was resolved by making sure the artefact file is properly managed in blocking mode.
|OSPL-5770||RnR may cause a warning by spliced about resources on termination
The RnR service doesn't properly clean-up one of the writers it uses. This causes the safety mechanism of spliced to kick in after RnR has terminated.
Solution: The RnR service now properly cleans up the writer.
|OSPL-5772||NetworkingBridge may cause a warning by spliced about resources on termination
The NetworkingBridge doesn't properly inform spliced that it has terminated. This causes the safety mechanism of spliced to kick in. Because the NetworkingBridge actually did clean up its resources, spliced can always successfully clean up after the NetworkingBridge.
Solution: The NetworkingBridge now properly informs spliced, so that the clean-up routines and the superfluous report don't occur anymore.
Vortex OpenSplice 6.4.3
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.3
|OSPL-5083||CMParticipant built-in topic extended with federation and vendor ids
The existing CMParticipant built-in topic needs to be extended with federation and vendor ids.
Solution: The content of the CMParticipant built-in topic has been extended to include a string that may be used as a federation identifier. For Vortex Cafe, each process is considered a federation. For other vendors' products, the federation id is based on our current understanding of the identifiers used by them, and this may change as our understanding grows. Also included is the vendor id code assigned by the OMG to the various vendors for use in the DDSI protocol, thus allowing tooling to show the vendor or use vendor-specific knowledge. The vendor code consists of two unsigned integers separated by a decimal point. (The vendor code for OpenSplice Enterprise is "1.2".)
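For tooling that consumes this field, the vendor code is simply two unsigned integers separated by a decimal point; a hedged parsing sketch (the function name is illustrative, not part of any OpenSplice API):

```c
#include <stdio.h>

/* Split a DDSI vendor code such as "1.2" (OpenSplice Enterprise) into its
 * two unsigned components. Returns 1 on success, 0 on a malformed code. */
static int parse_vendor_code(const char *code, unsigned *hi, unsigned *lo)
{
    return sscanf(code, "%u.%u", hi, lo) == 2;
}

/* Self-check: parse the OpenSplice Enterprise vendor code "1.2". */
static int vendor_demo(void)
{
    unsigned hi = 0, lo = 0;
    return parse_vendor_code("1.2", &hi, &lo) && hi == 1 && lo == 2;
}
```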
|Ignoring all topics in __BUILT-IN PARTITION__ in DDSI2E breaks all communication
DDSI2E internally relies on a topic in the built-in partition, but failed to note the presence of this topic/partition when ignored. While it is possible to ignore just this topic/partition, in practice, it is most likely to happen when ignoring all topics in this partition. A work-around is to configure topics C* and D* in this partition, as this does not include this particular topic.
Solution: The detection of the presence of this topic/partition has been updated.
|Reference to OMG ISO C++ specification missing in documentation
There is no reference in the ISO C++ PSM documentation to the OMG ISO C++ PSM specification.
Solution: A link to the OMG spec has been added.
|OSPL-5688||Globally unique systemId needs to be generated with more care.
Each federation generates its own id at start-up, which must be unique in the system. Sometimes ids could turn out to be the same, causing undefined behaviour.
Solution: Unique system id generation has been improved to prevent duplicates when two copies of OpenSplice are started simultaneously on the same Linux or Windows node.
|Insufficient checking in java native marshalling routines
When a sample with uninitialized members of type union or enum is written in the Java PSM, the JVM may crash instead of receiving a proper error return code. Note that members are always initialized by default in the code generated by idlpp, but it is possible for the application to assign null to a member after initialization.
Solution: The marshalling routines were changed to return a BAD_PARAMETER code when an uninitialized member is processed.
|An incorrect error is logged when a library fails to load
An incorrect error is logged when a library, such as a report-plugin, fails to load. Library names (e.g. of a report plugin) can be entered in the configuration file in a platform-agnostic manner. OpenSplice translates the name and, when the library fails to load, runs a fall-back mechanism to load the original name. In this process, details of the failure were lost.
Solution: The product has been changed to record a proper error message to the OpenSplice error log when a library fails to load.
|The durability service could crash in case a namespace of a fellow is added for which no aligner exists.
When a durability service receives a namespace for a fellow it adds the namespace for this fellow to the internal administration. Part of this administration is the merge state of the namespace. When no aligner for the namespace is known the merge state is NULL. Due to a bug, setting a NULL value for the merge state would lead to a crash.
Solution: The code that deals with setting merge states of namespaces has been changed so that no crash occurs anymore.
|OSPL-5744||Classic Java PSM QosProvider get_participant_qos() may crash
When using the Java QosProvider in combination with get_participant_qos with a non-null id, the JVM could crash.
Solution: The problem in get_participant_qos is now fixed and the JVM will not crash anymore on a non-null id.
|OSPL-5745||autopurge_disposed_samples_delay zero is not instantaneous
When autopurge_disposed_samples_delay is zero, the purge is not instantaneous: samples are purged only after the monotonic clock has progressed at least one tick.
Solution: This is solved by changing a timing check in the purge handling from 'larger than' to 'equal or larger than'.
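In essence this is a one-character comparison change; a toy version of the corrected check (names are illustrative, not the internal API):

```c
#include <stdint.h>

/* A disposed instance may be purged once 'delay' has elapsed since the
 * dispose. With '>' a zero delay still required the monotonic clock to
 * advance one tick; with '>=' a zero delay purges immediately. */
static int purge_due(int64_t now, int64_t disposed_at, int64_t delay)
{
    return now >= disposed_at + delay;   /* was: now > disposed_at + delay */
}
```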
Vortex OpenSplice 6.4.2p5
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2p5
|OSPL-5616||Added support for shared library builds on VxWorks RTP
Shared library support required for VxWorks RTP use on Pentium4 and E500V2.
Solution: Added shared library support for VxWorks RTP. Due to symbol table restrictions with the GNU toolchain the ddskernel library has been split into ddskernel and ddskernel2 for PPC Shared libraries.
Vortex OpenSplice 6.4.2p4
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2p4
|The ospl tool has wrong exit and status codes
Regarding the status of a domain, the ospl tool returns wrong exit codes and displays wrong status codes when listing domains.
Solution: The ospl tool has been extended with a status file that contains the states of the available domains.
|Issues with RT Networking CPU usage when Record and Replay service is enabled.
An issue in the Record and Replay service could in certain circumstances result in native networking using up all of the CPU resources. When a recording is stopped, Record and Replay stops reading samples matching the record interest expressions, but networking continues to deliver these samples until storage resources are exhausted. When exhausted, networking anticipates resources becoming available again and continues attempting delivery at an increased rate, resulting in CPU exhaustion.
Solution: A bug was found and resolved so record interest is properly disposed of by Record and Replay, after which networking stops delivering samples that are never read by Record and Replay.
|The networking service may crash when topics with a name exceeding 64 bits are used.
The networking service uses an internal buffer to store the topic names associated with received messages. Initially this buffer is 64 bits wide. When a topic name larger than 64 bits is received, the buffer should be increased in size accordingly. However, it may occur that not enough memory is allocated, which causes memory corruption.
Solution: The issue is fixed by always allocating enough buffer space when a topic name is received which has a size larger than the current buffer size.
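The general shape of such a fix is to grow the buffer until it covers the incoming size, rather than by a fixed step. A hedged sketch (illustrative names and sizes, not the networking service's actual code):

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *data;
    size_t cap;
} name_buffer;

/* Grow the buffer so it can hold at least 'needed' bytes. The loop keeps
 * doubling until the requirement is met, so an unusually large topic name
 * can never be written past the end of the allocation. */
static int ensure_capacity(name_buffer *b, size_t needed)
{
    size_t newcap;
    char *p;
    if (needed <= b->cap) {
        return 0; /* already large enough */
    }
    newcap = b->cap ? b->cap : 8; /* illustrative initial size */
    while (newcap < needed) {
        newcap *= 2;
    }
    p = realloc(b->data, newcap);
    if (p == NULL) {
        return -1;
    }
    b->data = p;
    b->cap = newcap;
    return 0;
}

/* Self-check: grow an empty buffer to hold a 200-byte name and fill it. */
static int grow_demo(void)
{
    name_buffer b = { NULL, 0 };
    if (ensure_capacity(&b, 200) != 0) {
        return 0;
    }
    memset(b.data, 'x', 200);
    free(b.data);
    return b.cap >= 200;
}
```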
|Non-default presentation QoS incorrectly refused by product.
The middleware did not accept a publisher or subscriber QoS on which the presentation was set to instance scope with ordered_access enabled. This caused inter-operability issues with other DDS vendors, while in fact the implementation does by default support ordered access on instance level.
Solution: The restriction has been lifted so that publishers and subscribers can now be created with enabled ordered_access setting, as long as the scope is set to instance.
|Crash of ddsi service when a DataReader with SubscriptionKey QoS policy is used.
Management of builtin topics by the DDSI service contained a bug that could potentially crash the service when a builtin-topic sample is created for a DataReader that has the (OpenSplice-specific) subscription key QoS policy.
Solution: The implementation was fixed to correctly handle the subscription key policy.
|OSPL-5552||OSPL_HOME may not be set correctly when using an archived build
Builds delivered in an archived format would still contain the installer macros to be expanded at install time; without the installer these macros would remain and cause release.com to set an invalid OSPL_HOME.
Solution: release.com now attempts to set OSPL_HOME using Bash when no installer is used. Users without Bash will be presented with a message asking them to manually adapt release.com with a valid OSPL_HOME.
|Possible crash after exception handler has cleaned up resources.
There was an issue with the exception handler cleaning up resources which were still in use by the lease and resend managers.
Solution: The exception handler now first stops the lease manager and the resend manager before it frees the resources.
|Bounds checking error on IDL sequences with #pragma stac
When #pragma stac is applied to all members of a struct, it would also be applied to sequences in case the sequence contains strings (or a type that resolves to string). However the code generated by idlpp would not correctly handle this, leading to errors when a sample is published and bounds checking is enabled.
Solution: There is no real performance benefit in applying stac transformation to sequence elements so the pragma is now ignored for struct members of type sequence.
|The durability service could crash when the service is terminating
When the durability service is terminating it tries to clean up its resources. One of these resources is the fellow administration. When a fellow is being removed from the administration because it failed to update its lease in time while at the same time durability is terminating, it is possible that durability tries to reference a fellow that has already been freed. This leads to a crash.
Solution: References to fellows are properly counted, and the fellow is only freed when no other threads keep a reference to the fellow object.
|Open dynamically loaded libraries with RTLD_NOW.
Dynamically loaded libraries were opened with RTLD_LAZY, which caused problems with external library loading (such as report libraries).
Solution: Dynamically loaded libraries are now opened with RTLD_NOW.
|OSPL-5693||Durability servers that operate in different roles and detect conflicting states for a namespace might handle the conflict wrongly, possibly resulting in a crash of durability.
When different durability servers take responsibility for the same namespace but for different roles, merge policies can be applied to resolve these conflicts. Due to an error in the condition to resolve the conflict it is possible that a different merge policy is applied than specified. Also, when the correct merge policy is applied a crash could occur due to an attempt to access an object that has already been freed.
Solution: The condition to resolve the conflict has been changed so that the correct merge policies are triggered. Also, the crash has been prevented by properly refcounting the object.
Vortex OpenSplice 6.4.2p3
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2p3
|OSPL-4944||ISO C++ documentation improved
Solution: Resolved problems with the API descriptions, added code samples, and added new API descriptions.
|OSPL-5194||VxWorks RTP version now has descriptive thread names
Solution: Added descriptive names to the OpenSplice threads on the VxWorks RTP builds.
|Writing samples from the DCPS Java API can result in an overflow of the internal references table of the JVM.
During a write, a lot of Java object references can be created, depending on the type of a topic. Though the JNI specification only guarantees 16 references, in practice there were never any issues on the Oracle JVM with using more references. Therefore the product did not explicitly delete references, in favor of performance benefits during the write call. The PERC JVM however does not allow this relaxation of the JNI spec; overflowing the references table results in memory corruption.
Solution: The Java PSM (JNI layer) was changed, to free unused references so the table cannot overflow.
|The networking service may crash when topics with a name exceeding 64 bits are used.
The networking service uses an internal buffer to store the topic names associated with received messages. Initially this buffer is 64 bits wide. When a topic name larger than 64 bits is received, the buffer should be increased in size accordingly. However, it may occur that not enough memory is allocated, which causes memory corruption.
Solution: The issue is fixed by always allocating enough buffer space when a topic name is received which has a size larger than the current buffer size.
Vortex OpenSplice 6.4.2p2
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2p2
|Unfair claim of ownership by unregister message
Unregister messages could claim ownership of an instance; in combination with the deadline QoS and liveliness lost, this caused data-reception 'gaps' when another writer with lower strength had already taken over.
Solution: Unregister messages will not claim ownership.
|When the durability service terminates there is a possibility that the durability service crashes
When the durability service terminates it will clean up all its resources. While doing so there is a possibility that the action queue is already destroyed while another thread still tries to access the action queue. This situation could lead to a crash.
Solution: Before cleaning up, most threads are stopped. This prevents a thread from accessing a piece of memory that has been freed by another thread. Also, the order in which resources are cleaned up has been changed so that the action queue is destroyed AFTER all threads that may use it are stopped. Finally, initialization and deinitialization of objects in the durability service have been improved.
|Difficulty determining if a Record and Replay service is finished replaying samples
In case all samples in a storage were replayed, the Record and Replay service would continue to poll the storage in case new samples were recorded, in order to replay them. This meant the storage remained open and this 'polling state' was not discernible by monitoring the (storage) status topic.
Solution: The behavior was changed to only enter the polling state in case a storage is used for recording as well, at the time the last sample is replayed. In case a storage is not used for recording, all replay-interest is removed, the storage is closed and a corresponding storage-status sample is published.
|Liveliness detection and synchronization problem.
When disconnecting a node, the liveliness changed status is not always triggered on the remaining node when exclusive ownership is used. When the liveliness changed status is triggered, the instance state is (often) still 'alive' when 'not alive' is expected.
Solution: Messages of low-strength writers in an exclusive ownership setup were not handled. By no longer ignoring the 'unregister' message of such a low-strength writer, the liveliness is properly decreased. Also, the liveliness changed status is now triggered after the related instance states have been set to 'not alive'.
|With a (default) umask of 0022, different users on the same node interact with each other on a POSIX system.
When 2 different users are working on the same node with a umask setting of 0022, the splice daemon will attach to the same shared memory segment. This is caused by the key file, which contains the key of the shared memory to attach to, having its permissions set to 666.
Solution: If the umask gives only read or write rights to a user/group/others then the key file gets no rights at all for that part. This will result in a key file with user rights 600 on a default umask of 0022.
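The permission rule described above can be expressed compactly. A sketch of the computation (illustrative, not the actual key-file code):

```c
/* Compute the key-file mode from the process umask: start from 0666 and
 * drop ALL bits of any permission class (user/group/other) from which the
 * umask removes read or write. A default umask of 0022 thus yields 0600. */
static unsigned keyfile_mode(unsigned umask_bits)
{
    unsigned mode = 0;
    unsigned cls;
    for (cls = 0; cls < 3; cls++) {                    /* other, group, user */
        unsigned shift = cls * 3;
        unsigned removed = (umask_bits >> shift) & 06; /* read/write bits */
        if (removed == 0) {
            mode |= 06u << shift;                      /* keep read+write */
        }
    }
    return mode;
}
```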
|Customer code application build problem with 6.4.2
netdb.h system header file clashed with a symbol when building customer code.
Solution: Avoided an issue with conflicting symbols by not including the netdb.h system header file when building customer code.
Vortex OpenSplice 6.4.2p1
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2p1
|Fixed startup failure with DDSI if configured to run using a single UDP unicast port
DDSI would not start when configured to use a single UDP unicast port.
Solution: Behaviour fixed.
|OSPL-5413||When using DDSI, the "dispose all" command was transmitted best-effort
The QoS used for publishing a "dispose all" command throughout the domain caused it to be sent best-effort when using DDSI, creating the possibility of it reaching only a subset of the nodes. The different design of RT networking ensured that systems based on RT networking did not run this risk.
Solution: The QoS has been changed to reliable.
|OSPL-5416||The DCPSHeartbeat writers should be best-effort
For the OpenSplice-specific DCPSHeartbeat built-in topic, best-effort reliability suffices. In OSPL V6.4.1 it was changed to a reliable writer, which slightly affects behaviour when used with the RT networking protocol in cases where the network is overloaded or very unreliable, as delivery of the DCPSHeartbeat may be blocked by preceding messages.
Solution: The DCPSHeartbeat writer QoS is once again best-effort.
|OSPL-5450||Deserialiser can incorrectly reject valid input because of an erroneous bounds check
An issue in the CDR deserialiser can cause a valid input to be rejected by a sequence bounds check. The affected components are DDSI, durability with a KV persistent store, RnR with binary storage, and RT networking when using compression (not the legacy compression).
Solution: The check has been fixed.
|OSPL-5460||Potential misaligned access in the CDR deserialiser for 64-bit objects
The CDR deserialiser could access 64-bit objects without ensuring the access is properly aligned. On most platforms, and most notably on the x86 and x64 platforms, such misaligned accesses are entirely legal, but on some platforms, they cause a misaligned access exception.
Solution: The CDR deserialiser now avoids misaligned accesses.
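The standard portable remedy for such misaligned reads is to copy through memcpy instead of dereferencing a cast pointer. A sketch of that technique (not the deserialiser's actual code):

```c
#include <stdint.h>
#include <string.h>

/* Read a 64-bit value from a possibly misaligned position. memcpy has no
 * alignment requirement; compilers emit a plain load on platforms where
 * misaligned access is legal (e.g. x86/x64) and safe byte accesses on
 * platforms where it would trap. */
static uint64_t read_u64(const unsigned char *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Self-check: place a value at an odd (misaligned) offset and read it back. */
static int unaligned_demo(void)
{
    unsigned char buf[sizeof(uint64_t) + 1];
    uint64_t x = UINT64_C(0x0123456789abcdef);
    memcpy(buf + 1, &x, sizeof x);
    return read_u64(buf + 1) == x;
}
```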
Vortex OpenSplice 6.4.2
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.2
|OpenSplice should support hibernation
Modern platforms support the concept of hibernation and resuming. When hibernating, all processes are suspended and the complete RAM is written to permanent storage; during resuming that information is written back into RAM so that all processes continue where they left off before hibernation. However, during the period a system is hibernated, time elapses, and as a result software may face time jumps when resuming. OpenSplice was not able to cope with these time jumps, resulting in the potential termination of services or even a complete shut-down of the middleware. OpenSplice needs to be able to cope with hibernation to allow the product to be used in environments that rely on that functionality.
Solution: The various notions of time have been updated throughout the entire product allowing it to cope with time jumps as well as resuming after hibernate/suspend.
|OSPL-4196||Timestamps on WinCE aren't guaranteed to be represented in UTC
OpenSplice internally used the WinCE GetLocalTime() operation, which returns the time in the local time zone. Depending on the time settings of the operating system, this time may not match UTC, which is used on other platforms.
Solution: The implementation now ensures the time is represented in UTC on WinCE as well.
|OSPL-4325||Error log about failure to remove DBF file on Windows after domain shutdown
An issue with the termination of OpenSplice services could result in a failure to remove the database (.DBF) file, with a corresponding message logged to the ospl-error logfile.
Solution: The termination issue was resolved so that the database file can be removed.
|Shared Memory not detecting terminated or killed processes
Terminated or killed processes on Windows were not detected, which could leave shared memory corrupted and the liveliness state of the processes' writers not updated.
Solution: The implementation of the shared memory monitor for Windows has been updated to use specific Windows API calls. The termination of a process is now detected by the Splice daemon, which takes the proper action to clean up after it. This may lead to a shutdown of Splice if the terminated process was modifying shared memory at the moment it was terminated, since shared memory is corrupted in that case.
|When durability terminates, the durability service should try to persist as much data as possible in case the persistent data queue still contains samples to persist.
Previously, a terminating durability service did not store persistent data that was still waiting to be persisted. An improvement to this behaviour is to try to persist as much of the remaining data as possible without exceeding the ServiceTerminatePeriod. Persisting as much data as possible is a best-effort attempt to save valuable data when a durability service is terminated.
Solution: The durability service uses part of the ServiceTerminatePeriod to store remaining data that has not yet been stored at the time the durability service starts to terminate.
|RMI: Interface unregistration problem
When an interface was unregistered and the runtime was then shut down, a NullPointerException was raised, and any attempt to register the same interface a second time failed with "Interface X already registered".
Solution: The interface is now properly removed from the runtime interface registry.
|OSPL-4696||Restarting an RMI runtime causes failure
On runtime stop, in CInterfaceRegistryImpl.java, the m_Reader reader is closed (directly or indirectly), but the field was not set to null. After a restart, the CInterfaceRegistry therefore tried to reuse the reader, without success.
Solution: The m_Reader field is now reset in the clear method.
|OSPL-4871||Reference guide update for WriterDataLifecycleQosPolicy
The WriterDataLifecycleQosPolicy documentation was missing the autounregister_instance_delay and autopurge_suspended_samples_delay attributes.
Solution: All reference guides have been updated with the descriptions.
|OSPL-4960||When Node Monitor is started it should publish all the enabled samples immediately
When the Node Monitor is started, the NodeStats and NodeInfoConfig writers should publish all enabled samples immediately and have their DURABILITY QoS set to non-VOLATILE, so that a late-joining DataReader (also configured as non-VOLATILE) receives the last sample per key rather than waiting (potentially for a long time) until all the intervals have elapsed after nodemon startup.
Solution: The NodeStats and NodeInfoConfig topics' durability QoS policy kind was changed from V_DURABILITY_VOLATILE to V_DURABILITY_TRANSIENT.
|Not all signals properly handled when using pthread_kill(...)
When using pthread_kill(...), for example to abort a process, the process would continue to run.
Solution: The signal handler has been modified so that signals generated with pthread_kill(...) are properly handled too.
|OSPL-5147||User should be aware that a runtime installation of OpenSSL is required for OpenSplice licensed features and/or ddsi2e and snetworking - an update
The addition of TLS support in ddsi2 removed the static link to OpenSSL that previous versions of OpenSplice used on non-Windows systems.
Solution: The requirement for the OpenSSL runtime now only applies to ddsi2e and snetworking on non-Windows systems.
|Disabling the Java shutdownHook
Solution: The Java shutdown hook, which cleans up entities that were created by a Java application but not deleted during its execution, can now be disabled via a newly introduced system property: "osplNoDcpsShutdownHook". See the Java reference manual for more details.
|Waitset associated with wrong DomainParticipant causes problems during clean-up
In case an application has multiple DomainParticipants participating in the same Domain and attaches a Status-, Read- or QueryCondition to a Waitset, the Waitset may be associated with the wrong DomainParticipant, because the selection algorithm was based on the DomainId instead of the DomainParticipant associated with the Condition. This may cause problems when deleting one or both DomainParticipants and can even lead to a crash of the application in some cases.
Solution: The DomainParticipant is now selected based on the DomainParticipant associated with the Condition.
|OSPL-5266||DDSI2 warns about a message without payload
DDSI2 used to emit warnings of the form "write without proper payload (data_smhdr_flags 0x2 size 0)" when receiving messages from a writer that have no content whatsoever. Such messages are allowed by the specification and hence should not result in a warning.
Solution: The warning has been removed; the event is still logged in the trace.
|OSPL-5268||C-language binding used wrong type for the "subscription_keys" of the CMDataReader built-in topic
The C language binding accidentally used a sequence of strings where it should have been a single string to describe the subscriber-defined keys. This caused crashes for a C program trying to use the CMDataReader built-in topic, and additionally caused the C binding to differ from the other language bindings.
Solution: The type has been changed.
|OSPL-5321||Issue with DCPS SAC code-generation
The code-generation templates for typed DataReaders and DataWriters contained an error in the definition of the 'FooDataReader_get_subscription_matched_status' and 'FooDataWriter_get_publication_matched_status' methods. Since these methods were not available, users were forced to use the regular DDS_-prefixed methods, resulting in inconsistent code.
Solution: The issue has been solved and the correct definitions are now generated by idlpp.
|OSPL-5335||DDSI can flag dispose/unregister messages from Café for CM topics as invalid
DDSI2 is designed to handle data containing a "proper" payload, but some DDSI implementations in some cases do not provide a real payload, only an alternative form of the key. DDSI2 translates these to well-formed payloads before interpreting them, but this translation was incorrect for some CM topics.
Solution: The translation table has been updated to cover all cases.
|Unable to find an entry point in C# API
When using a call like GetDiscoveredParticipants in the C# API, an "Unable to find an entry point named 'u_instanceHandleNew' in DLL 'ddsuser'." exception could be thrown.
Solution: This issue has been fixed by resolving the entry point against the correct library.
|OSPL-5411||RT networking interoperability fix due to ospl-4345 (6.4.1) fix
In 6.4.1, RT networking interoperability with older versions was degraded.
Solution: RT networking now correctly handles versions prior to 6.4.1.
Vortex OpenSplice 6.4.1p9
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p9
|OSPL-3216||Built-in CMParticipant Topic accessibility.
The CMParticipant built-in Topic should be accessible to applications through a built-in DataReader.
Solution: The CMParticipant built-in Topic and DataReader are added to all language bindings.
|OSPL-4157/12727||C DCPS generated API doesn't compile with g++
The C API no longer compiled when using g++.
Solution: The fault has been fixed and g++ will now compile the C API again.
|Upon repeated stop/start of OpenSplice, the application received the error "Max connected Domains (127) reached"
In our user layer the connected domains are stored in an array of 127 items. When a domain was connected, the next entry was used, skipping entries that had been freed by a domain disconnect. After 127 connects, the end of the array was reached, resulting in this error.
Solution: When a domain is disconnected, its entry in the array is now marked free. When a new domain is connected, the array is searched from the beginning for a free entry, re-using locations that were in use and have since been freed.
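The fixed allocation strategy can be modelled as follows (all names hypothetical, purely illustrative): a fixed table of slots where connect scans from the start for a free slot, so slots released by a disconnect are re-used instead of exhausting the table.

```c
#include <stddef.h>

/* Illustrative model of the fix: a fixed table of domain slots. On
 * connect, scan from the start for a free slot so that slots freed by a
 * disconnect are re-used instead of marching to the end of the array. */
#define MAX_DOMAINS 127

static int slot_used[MAX_DOMAINS];

int domain_connect(void)               /* returns slot index, or -1 if full */
{
    for (size_t i = 0; i < MAX_DOMAINS; i++) {
        if (!slot_used[i]) {
            slot_used[i] = 1;
            return (int)i;
        }
    }
    return -1;                         /* "Max connected Domains reached" */
}

void domain_disconnect(int slot)       /* mark the slot free for re-use */
{
    if (slot >= 0 && slot < MAX_DOMAINS) {
        slot_used[slot] = 0;
    }
}

/* Connect/disconnect n times in a loop; with re-use this never fails,
 * whereas the original "next entry" scheme failed after 127 cycles. */
int cycle_connects(int n)
{
    for (int i = 0; i < n; i++) {
        int s = domain_connect();
        if (s < 0) return -1;
        domain_disconnect(s);
    }
    return 0;
}
```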
|When the replace merge policy is invoked not all data is being replaced.
When a fellow durability service that acts as master for a set of namespaces leaves, a check is needed to see if an alternative aligner is available for each of these namespaces. If no alternative aligner is found, the merge state for the namespace must be cleared to ensure that a merge action is triggered as soon as a new aligner joins the network. This check should be carried out for all namespaces. However, once no alternative for one namespace was detected, the other namespaces were (wrongly) no longer checked, so the merge states for those namespaces were not cleared and no merge policy was triggered for them.
Solution: The code has been changed so that an alternative aligner is searched for all namespaces of the leaving fellow.
|OSPL-5197||CMSoap service can fail to terminate cleanly
The timeout handling in accepting new conditions in the cmsoap service could cause cmsoap to fail to terminate cleanly, instead automatically killing itself after the (configurable) service termination period.
Solution: The timeout specification has been updated to avoid this issue.
|Tuner does not accept character code input for c_char fields.
If a user wanted to write a character value for a c_char field that was not found on the keyboard (like the NUL character or the LF character) there was no way to input it in the Tuner writer window.
Solution: The Tuner writer input for c_char fields now accepts octal character codes (eg. \000 for NUL, \012 for LF, \141 for 'a', etc).
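Parsing such octal character codes is straightforward; a minimal sketch of the kind of conversion involved (an illustration, not the Tuner's actual implementation):

```c
#include <stdlib.h>

/* Sketch of octal escape handling as described above: "\012" -> 0x0A.
 * Expects a backslash followed by octal digits; returns the character
 * code, or -1 on malformed input or a value outside 0..255. */
int parse_octal_escape(const char *s)
{
    char *end;
    long v;
    if (s[0] != '\\') return -1;
    v = strtol(s + 1, &end, 8);        /* base-8 conversion */
    if (end == s + 1 || v < 0 || v > 255) return -1;
    return (int)v;
}
```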
|OSPL-5234||Possible crash of networking service when running out of DefragBuffers
The networking service could crash when it ran out of DefragBuffers and its garbage collector started releasing them. The crash happened when the garbage collector double-released DefragBuffers that were still in use.
Solution: The garbage collector no longer double-releases in-use DefragBuffers.
|OSPL-5239||DDSI could crash when a thread is killed
When a thread was killed while its participant had already been deleted, DDSI could crash trying to obtain the subscriber/publisher from this participant.
Solution: DDSI now checks that the participant is still valid before trying to access its subscribers or publisher.
|Using synchronous reliability could cause a crash due to a memory corruption
Due to missing locking on part of the synchronous reliability administration, memory could become corrupted, causing a crash.
Solution: Locking has been added to the relevant bits of the synchronous reliability administration.
|OSPL-5253||Recompilation rules broken for Java applications
Java applications generated before V6.4.0p5 would not run on later versions and required code to be regenerated and recompiled. This was mentioned in the release notes (OSPL-4333).
Solution: An overloaded constructor has been added that supports the old format of generated code, allowing the applications to be used according to the recompilation rules.
Vortex OpenSplice 6.4.1p8
|OSPL-4614 / 12392||
In some situations it is possible that a durability service processes pending messages from another durability service (a fellow) that has been removed recently, causing the fellow to be added again.
When a durability service is busy it cannot process an incoming message from a fellow immediately. If a message from a fellow is received and the fellow terminates before the message is processed, the fellow is removed as a peer from the durability service. But when the pending message is eventually processed, the durability service notices that it came from an unknown fellow and will (wrongly) add it again.
Solution: When a fellow has terminated, its address is remembered for a while. Messages originating from an address that has recently terminated are not processed, preventing a 'new' fellow from being added.
|OSPL-4985 / 13053||When multiple master conflicts appear at the same time, only one of them is handled, resulting in incorrect merges of historical data.
In some cases it is possible that multiple nodes suddenly appear that are all master for the same namespace. This is for example the case when a firewall initially blocks traffic between 3 nodes A, B and C that all become master. When the firewall is disabled (thus enabling communication between all nodes), each node has 2 possible master conflicts. Both of these conflicts should be handled to ensure that data is correctly merged.
Solution: No master conflicts are dropped anymore, and successive master conflicts are being re-evaluated because they may have been invalidated by resolving previous master conflicts.
|OSPL-5125||Termination of DDSI2 in shared memory deployments on Windows causes warnings
DDSI2 creates various objects for its internal administration and its interaction with the OpenSplice kernel. Some of these were not released in the termination path, causing the DDSI2 domain participant not to be deleted at the expected time because there were unexpected outstanding references to it. This would then lead to an apparent crash of DDSI2, which would be logged. The problem surfaced only on Windows because of the differences in the way atexit() is handled on Windows and on other platforms. On the other platforms, the domain participant would eventually be deleted properly for a clean shutdown.
Solution: All objects are now released explicitly and the DDSI2 domain participant is deleted as planned on all platforms.
Vortex OpenSplice 6.4.1p7
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p7
|DDSI TCP interoperability with Vortex Cloud Routing Service
When OSPL clients were used with Vortex Cloud and the routing service was involved, OSPL would fail to connect.
Solution: DDSI over TCP now sends the correct ENTITY_ID submessage to the discovery and routing services, enabling cloud-based routing.
Vortex OpenSplice 6.4.1p6
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p6
|When durability is used in combination with DDSI and DDSI generates builtin topics, then durability may not align data because DDSI may drop durability messages.
A durability service assumes reliable two-way communication with other durability services. This assumption no longer holds when DDSI generates builtin topics.
Solution: If DDSI generates builtin topics then the durability service will only respond to a nameSpacesRequest message if all readers of the remote durability service have been discovered. Because the durability protocol always starts with the exchange of namespaces, discovery of all remote readers is now guaranteed.
|OSPL-5068||release.com/.bat overrides OSPL_URI
If OSPL_URI is set and release.com/.bat is then executed, OSPL_URI is overridden.
Solution: OSPL_URI is now honoured.
|OSPL-5130||DDSI2 did not terminate within ServiceTerminatePeriod
The DDSI2 service did not terminate within the ServiceTerminatePeriod; the listen_thread was blocking on an 'accept()' call.
Solution: On termination the listen_thread is now woken so that termination can continue.
|OSPL-5141||DDSI2 TCP and TCP with SSL not consistent on blocking read/write
DDSI2 TCP and TCP with SSL were not consistent on blocking read/write, leading to inconsistent behaviour.
Solution: A common timeout mechanism has been implemented for both TCP and SSL read and write operations that would block. TCP configuration options "ReadTimeout" and "WriteTimeout" have been added, these specify the timeout on a blocking read or write call, after which the call will be abandoned and the corresponding transport connection closed. These configuration options replace "ReadRetry", "ReadRetryInterval" and "WriteRetry" which have been removed.
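The shape of such a bounded blocking call can be sketched with poll() (names and semantics here are illustrative, not the service's actual code): wait up to the configured timeout for the socket to become readable; on timeout the call is abandoned and the caller would close the transport connection.

```c
#include <poll.h>
#include <unistd.h>

/* Sketch of a bounded blocking read in the spirit of the ReadTimeout
 * option described above: wait up to timeout_ms for data, then read.
 * Returns -1 on timeout or error, otherwise the number of bytes read. */
ssize_t read_with_timeout(int fd, void *buf, size_t len, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int r = poll(&pfd, 1, timeout_ms);
    if (r <= 0) {
        return -1;                     /* timeout (r == 0) or error (r < 0) */
    }
    return read(fd, buf, len);
}

/* Demonstrate the timeout path on a pipe that has no data pending. */
int demo_timeout(void)
{
    int fds[2];
    char c;
    ssize_t n;
    if (pipe(fds) != 0) return -2;
    n = read_with_timeout(fds[0], &c, 1, 10);  /* no writer: must time out */
    close(fds[0]);
    close(fds[1]);
    return (n == -1) ? 0 : -1;
}
```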
|Possible compiler warning in the C++ language binding
An initialiser used in the initialisation of a number of mutexes internal to the C++ language binding can cause compiler warnings.
Solution: The initialisation has been modified.
Vortex OpenSplice 6.4.1p5
|OSPL-4983-1||DDSI over TCP interoperating with Vortex Café may drop connection after 10s
When a Vortex Café process was connected to an OpenSplice Enterprise node using TCP, with Café acting as a TCP client and Enterprise as a TCP server, Café could consider the Enterprise node dead because it was not receiving participant discovery data as it expects.
Solution: Participant discovery data is now properly distributed over TCP connections.
Vortex OpenSplice 6.4.1p4
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p4
|OSPL-4983||DDSI using TCP cannot handle high data load
Under high load, DDSI TCP connections could be dropped and recreated, losing samples in the process and with various long timeouts interfering with the correct operation. This was caused by incorrectly handling a full socket buffer at the start of writing a message.
Solution: The code now correctly accounts for such events.
|OSPL-5118||Performance improvement for CDR deserialisation
The CDR deserialisers all used a sub-optimal way of allocating sequences. Especially for sequences in small samples sent at a very high rate, this had a significant performance impact.
Solution: The allocation of sequences has now been changed to use a faster method.
Vortex OpenSplice 6.4.1p3
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p3
|OSPL-5104||DDSI doesn't properly account transient-local data in its WHC
DDSI stores unacknowledged samples and acknowledged but transient-local samples in a writer history cache. The amount of unacknowledged data is kept track of, but this was done incorrectly for transient-local data. This in turn could cause DDSI to lock up, in particular when a large amount of discovery data needed to be sent.
Solution: The amount of outstanding unacknowledged data now reflects acknowledgements of transient-local data as well.
|OSPL-5105||DDSI can throttle a writer without ensuring an ACK is requested immediately
When the amount of outstanding unacknowledged data in a writer reaches a (configurable) threshold, the throttling mechanism blocks further data from that writer from being sent until the amount of unacknowledged data is reduced to below a (configurable) level. This requires the readers to send acknowledgements, which they are only allowed to do upon request from the writer. The last packet sent before the throttling often, but not always, includes a request for acknowledgements, and if it doesn't a 100ms delay is incurred.
Solution: The writer now forces out a request for acknowledgements before blocking.
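The watermark-based throttling and the fix can be modelled as follows (all names hypothetical; this is a conceptual sketch, not DDSI2's writer history cache code): when the unacknowledged backlog reaches the high watermark the writer must block, and before blocking it flags an immediate ACK request so that readers are allowed to acknowledge instead of waiting out the heartbeat delay.

```c
/* Illustrative watermark throttle (names hypothetical). */
struct whc {
    unsigned unacked;        /* bytes currently unacknowledged      */
    unsigned high, low;      /* configurable watermarks             */
    int ack_requested;       /* set before the writer blocks (the fix) */
};

/* Account newly written bytes; returns 1 if the writer must block. */
int whc_add(struct whc *w, unsigned bytes)
{
    w->unacked += bytes;
    if (w->unacked >= w->high) {
        w->ack_requested = 1;  /* force out an ACK request before blocking */
        return 1;
    }
    return 0;
}

/* An acknowledgement reduces the backlog; returns 1 once the backlog
 * has dropped to the low watermark and the writer may resume. */
int whc_ack(struct whc *w, unsigned bytes)
{
    w->unacked = (bytes > w->unacked) ? 0 : w->unacked - bytes;
    return w->unacked <= w->low;
}

/* Walk through one throttle cycle: block, partial ack, resume. */
int whc_demo(void)
{
    struct whc w = { 0, 90, 30, 0 };
    if (whc_add(&w, 100) != 1 || !w.ack_requested) return -1;
    if (whc_ack(&w, 50) != 0) return -1;   /* still above the low mark */
    if (whc_ack(&w, 40) != 1) return -1;   /* now below: writer resumes */
    return 0;
}
```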
|OSPL-5106||DDSI delays NACKs unnecessarily when requesting samples for the first time
DDSI2 follows the specification in delaying NACKs a little, but only if the previous NACK was within the NackDelay interval. However, if it detects a need to request a retransmission of samples not covered in the previous NACK, waiting only introduces an unnecessary delay.
Solution: The NACK scheduling now takes into account the highest sequence number in the preceding NACK.
Vortex OpenSplice 6.4.1p2
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p2
|High memory use by DDSI2 on WinCE.
On WinCE, DDSI2 could require large amounts of memory when transmitting large samples because of an issue in platform-specific code.
Solution: The underlying issue has been addressed.
|The use of a content-filtered topic causes a memory leak.
When creating a content-filtered topic, a memory leak occurred when evaluating the filter expression: the key field list used during evaluation was not released.
Solution: The key field list is now released after evaluating the content filter.
|DeadlineQosPolicy when DataWriter is deleted keeps triggering
A reader listener/waitset kept getting triggered for a missed deadline after the instance was disposed and the writer deleted. A v_dataReaderInstance was unintentionally re-inserted into the deadline list right after it was intentionally removed.
Solution: The v_dataReaderInstance re-insert has been removed.
|DDSI TCP on Windows fails with large messages
When using large message payloads, DDSI TCP would hang because the TCP buffer would overflow.
Solution: Error handling improved for blocking TCP write functions.
|OSPL-4974-1||Change default value for DDSI TCP configuration property NoDelay
Solution: Changed the NoDelay DDSI TCP configuration property to true (from false) to reduce jitter.
Vortex OpenSplice 6.4.1p1
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1p1
|In some situations it is possible that the durability service sends out responses to requests in the reverse order. As a result, recipients of these responses may perceive a "wrong" state of groups and namespaces.
Due to a threading issue it is possible that a durability service sends out responses to requests in the reverse order. In particular, the state of groups and namespaces could be reversed, causing recipients to believe that a group is 'incomplete' while in fact the master has announced its completeness. In this case the recipients will wait forever for the group to become complete.
Solution: Once a group is complete it can never announce 'non-completeness' anymore. Order reversal when announcing namespace states has also been prevented.
|In rare occasions a process could fail to detach properly from SHM due to a race condition
Due to a race condition in checking whether a live process is ready to detach, the process could conclude that it still had threads in SHM, causing the detach to fail unexpectedly. As soon as this was detected by spliced, the domain was brought down.
Solution: The race condition has been resolved.
|OSPL-5053||DDSI "malformed packet received" error with state parse:info_ts
DDSI verifies the well-formedness of the messages it receives, logging a "malformed packet" error if it is not. The message validator would reject a short timestamp even when the INVALIDATE flag was set on the submessage.
Solution: DDSI now accepts empty timestamp submessages with INVALIDATE set.
|DDSI2 group instance leakage in shared memory
The DDSI2 service caused a small memory leak in shared memory for each group instance written.
Solution: Memory is now freed.
|DDSI stops when creating many readers/writers and no peers exist
The DDSI discovery protocol exchanges information on all endpoints, and does so by creating an instance in a reliable, transient-local writer (one for writers, one for readers) for each existing endpoint. An issue was discovered where this data is counted as unacknowledged data even when there are no peers, and creating readers/writers may cause DDSI to reach the maximum allowed amount of unacknowledged data in the writer. This in turn blocks various processing paths, and if there are no peers, there is no way out.
Solution: When there are no peers, the data is no longer counted as unacknowledged data.
|Deleting a writer does not free all shared memory allocated when creating it
Solution: The shared memory allocated when creating a writer is now freed when the writer is deleted.
|Readers not disposed after using built-in subscriber
After termination, even after calling 'delete_contained_entities()' on the DomainParticipant, the Tester showed the participant as disposed but with active DataReaders for the built-in topics: 'delete_contained_entities()' did not delete all contained entities, and the built-in subscriber was not deleted.
Solution: When calling 'delete_contained_entities()' on the DomainParticipant, the built-in subscriber is now also deleted.
|Shared memory runs out after running a very simple application in a loop
A small memory leak (96bytes ::v_message
Solution: Memory is now freed.
|Shared memory leakage on Windows platforms
On Windows platforms our exit handler was registered but never called when 'get_instance()' was called from the customer application context, causing a memory leak.
Solution: Our exit handler is now also called on Windows when our library is unloaded. However, Windows terminates threads ungracefully, which could still cause a memory leak if 'delete_contained_entities()' is not called before the library is unloaded.
|When using wait_for_historical_data_w_condition with an OR condition it is possible that not all matching samples are returned.
When wait_for_historical_data_w_condition was used, the evaluation of the condition was not correct: if the condition consists of OR expressions, not all parts of the OR expression were evaluated.
Solution: The evaluation of the condition now walks over all OR elements of the condition and does not stop when it finds a match.
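The reason every OR element must be visited is that an "A OR B" condition over a set of historical samples has to return the union of what each branch matches; stopping at the first branch that yields matches drops the samples matched only by later branches. A small illustrative model (not the actual condition evaluator):

```c
/* Model: a condition is a set of OR branches (predicates); a sample
 * belongs to the result when any branch matches it, so all branches
 * must be consulted per sample to build the full union. */
typedef int (*pred_fn)(int sample);

int count_matching(const int *samples, int n, const pred_fn *branches, int nb)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        for (int b = 0; b < nb; b++) {
            if (branches[b](samples[i])) {  /* any branch suffices...   */
                count++;
                break;                      /* ...for this one sample   */
            }
        }
    }
    return count;
}

static int is_even(int x) { return x % 2 == 0; }
static int is_big(int x)  { return x > 10; }

/* "is_even OR is_big" over {1, 2, 11, 4}: 2, 11 and 4 match. */
int or_demo(void)
{
    const int samples[] = { 1, 2, 11, 4 };
    const pred_fn branches[] = { is_even, is_big };
    return count_matching(samples, 4, branches, 2);
}
```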
|OSPL-4727||DDSI discovery heartbeat interval too long
The DDSI specification gives a DDSI participant various ways of renewing its lease with its peers, one of which is a periodic publishing of a full participant discovery sample. To reduce the bandwidth needed, DDSI would instead send some other data, but this is not good enough to maintain liveliness with all other implementations.
Solution: DDSI now sends the participant discovery sample at an interval slightly shorter than the participant lease duration, which itself is taken from the OpenSplice domain expiry time, but never longer than the configuration SPDP interval.
|RnR replayed data not arriving on remote nodes using DDSI
When RnR replays a recording it relies on the networking service to distribute the data from the RnR service to any remote data readers. The writer instance handles used by the replay did not match writers known to DDSI, so DDSI was unable to determine where to send the data and ultimately dropped it.
Solution: RnR now creates local data writers and remaps the writer instance handles in the replayed data to correspond to these known writers, allowing DDSI to distribute the data throughout the network.
|OSPL-4962||Detecting which participant represents DDSI2 in a federation is difficult
DDSI2 itself acts as a participant in the system, and hence creates a participant at the DDSI level as well. For other DDSI implementations it may be useful to be able to detect which of the remote participants is a DDSI2 service.
Solution: DDSI2 has been enhanced to indicate in its discovery information which of the potentially many participants in a federation is the DDSI2 service itself. This enhancement is backwards compatible.
|OSPL-4963||Domain ControlAndMonitoringCommandReceiver/Scheduling/Priority setting applied incorrectly
The value configured for the ControlAndMonitoringCommandReceiver/Scheduling/Priority for the domain was applied to the ResendManager thread rather than to the CandM thread.
Solution: This has been fixed.
|OSPL-4974||DDSI TCP may fail under high load
Under high load a DDSI TCP connection could fail and would not recover.
Solution: The socket waitset has been made thread-safe. Under UDP only a single thread accessed the waitset; under TCP multiple threads are used.
|DDSI default socket buffer sizes increased
The socket buffer sizes have a significant impact on performance, and in particular having a small receive buffer size when data comes in at a high rate can be a performance bottleneck. Unfortunately, there is no agreement across operating systems about the default maximum size, and hence in the past DDSI defaulted to a smallish buffer.
Solution: The defaults and the warning policy for failure to set the buffer size have been changed. Without specifying a receive buffer size (or when specifying "default"), DDSI will request 1MB but accept whatever the kernel provides. Explicitly specifying a buffer size will still result in an error message if the kernel refuses to provide it.
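The "request a size, accept what the kernel grants" pattern looks like this on POSIX sockets (an illustration of the policy, not DDSI2's code; note that Linux, for example, may grant more or less than requested, and getsockopt reports the actual size):

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Request a receive buffer of 'wanted' bytes but accept whatever the
 * kernel actually grants; 'granted' receives the effective size. Only
 * an explicitly configured size would turn a refusal into an error. */
int request_rcvbuf(int sock, int wanted, int *granted)
{
    socklen_t len = sizeof *granted;
    (void)setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof wanted);
    return getsockopt(sock, SOL_SOCKET, SO_RCVBUF, granted, &len);
}

/* Ask for 1 MB on a UDP socket and report what was granted. */
int rcvbuf_demo(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int granted = 0;
    int r;
    if (sock < 0) return -1;
    r = request_rcvbuf(sock, 1024 * 1024, &granted);
    close(sock);
    return (r == 0 && granted > 0) ? 0 : -1;
}
```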
|OSPL-5002||Invalid messages accepted by DDSI2
The DDSI2 service verifies the well-formedness of incoming messages, but two issues in the verification were discovered: firstly, it would accept invalid sequence numbers in data samples, even though the specification explicitly states such messages must be rejected; secondly, it did not correctly verify that the start of the inline QoS or payload was indeed within the message.
Solution: Both points have been corrected.
Vortex OpenSplice 6.4.1
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.1
|When RTnetworking compression is activated the number of network frames is not reduced.
When RTnetworking compression was activated, compression was applied to each network frame individually, so the size of each frame was reduced but the number of frames remained the same.
Solution: When compression is activated, data messages are now packed and compressed before fragmenting. This results in a better compression ratio and a reduced number of fragments (packets).
|OSPL-2023||DataReaderView doesn't take modified default DataReaderView QoS when created (sacpp & ccpp)
The default DataReaderViewQos can be changed by calling set_default_datareaderview_qos() on the related DataReader. When a DataReaderView is created with DATAREADERVIEW_QOS_DEFAULT, it should take the QoS that was set with set_default_datareaderview_qos(). This did not happen; DATAREADERVIEW_QOS_DEFAULT itself was used as the QoS.
Solution: During reader->create_view(), a check is made whether DATAREADERVIEW_QOS_DEFAULT was provided. If so, the reader's internal default DataReaderViewQos is used instead.
|The 'autopurge_dispose_all' value is added to the ReaderDataLifecycleQosPolicy.
When calling dispose_all() on a Topic, the related readers will receive a disposed notification for all disposed samples. For performance reasons, it should be possible to suppress those disposed notifications and only trigger the on_disposed_all() notification on the ExtTopicListener; the samples should then be disposed automatically. This should be controllable on a per-reader basis.
Solution: 'autopurge_dispose_all' is a new value in the ReaderDataLifecycleQosPolicy, which is part of the DataReaderQos. When set to 'true' (the default is 'false'), it makes sure that all related reader samples are purged when dispose_all() is called on a Topic. The related reader will not be notified that the samples have been disposed by a dispose_all().
|If master selection in durability takes a long time the system could stall
At start up, the durability service tries to determine masters for all its namespaces. If during the master selection phase fellows are removed this also triggers master determination. In this case, the latter thread waits for the master selection lock which has been taken by the first thread. If the first thread takes a long time to determine masters for its namespaces (which is typically the case for large systems with many nodes and namespaces) then the second thread is stalled for a very long time. If this time exceeds the thread liveliness assertion period then the second thread is declared dead, which may lead to system failure.
Solution: While the second thread is waiting for the first thread to release its lock, liveliness of the second thread is asserted regularly. This ensures that the second thread is not declared dead, even if initial master selection by the first thread takes significant time.
|Idlpp for C# generates incorrect code when using const to const assignment
When generating code from an IDL file in which one const is assigned the value of another const, idlpp for C# generated incorrect code: the latter assignment was emitted as (null), because the C# implementation was missing for this case.
Solution: idlpp has been adjusted and now implements the const-to-const assignment case.
|RTNetworking supports setting the Differentiated Services Code Point (DSCP) field in IP packets on Windows
To set the Diffserv (DSCP) field in IP packets, the networking service used the IP_TOS socket option. However, since Windows Vista and Windows Server 2008, setting the IP_TOS option is no longer supported; on these and later versions of Windows the new QoS2 API has to be used instead.
Solution: The networking service now maps a configured Diffserv value onto one of the Traffic Types supported by the Windows QoS2 API. When administrative privileges are available, the configured Diffserv value is set on the traffic flow associated with the socket, so that the Diffserv field of the IP packets is set to the configured value. When no administrative privileges are available, the Diffserv field is derived from the selected Traffic Type.
|When using OSGi without proper exports a crash will occur.
When two OSGi bundles were used, one containing dcps.jar (dcpssaj-osgi-bundle.jar) and a second containing idlpp-generated typed code and an application but lacking proper exports, a crash could occur: when the second bundle accessed the first, the first bundle tried to load a class from the second using the JNI FindClass function, and the resulting exception caused the crash.
Solution: To prevent this crash, exceptions thrown by the JNI FindClass function are now caught and a message explaining what went wrong is written to the error log.
|Unclear logging when services are killed because of elapsed serviceTerminatePeriod
When the serviceTerminatePeriod elapses during shutdown the ospl-tool logged an ambiguous message to the info log. The Splice daemon should have logged a clearer service kill message, but this was never reached because the ospl-tool would terminate the Splice daemon prematurely.
Solution: Clarified service kill messages for both ospl-tool and Splice daemon. Increased ospl-tool wait period before sending kill signal to Splice daemon process group. Additionally, durability now logs messages when it fails to assert its liveliness within the expiration period.
|The adminQueue may overflow when receiving thread is busy processing messages and the sending thread is not scheduled in time.
The sending thread is responsible for transmitting ACK messages. For that purpose the receiving thread uses the adminQueue to inform the sending thread of the data messages received. When the receiving thread is busy processing received data the adminQueue may get full because the sending thread (lower priority) is not scheduled in time.
Solution: The receive thread is now responsible for sending the ACK messages. This relaxes the timing requirements of the sending thread.
|When using edge case resource limits, OpenSplice didn't behave as expected.
When using a resource setting of max_samples=1 with history = KEEP_LAST for the reader, samples weren't overwritten as expected; instead an error was returned.
Solution: The reader and writer resource limits are now better checked so that samples are overwritten when allowed.
|OSPL-4423||User should be aware that a runtime installation of OpenSSL is required for OpenSplice licensed features and/or ddsi2e and snetworking.
Addition of TLS in ddsi2 removes the static link to OpenSSL in previous versions of OpenSplice on non-windows systems.
Solution: At runtime an installation of OpenSSL is required for licensed features, however on most systems this is standard.
|The use of sequences is not supported in multi-domain applications.
The issue is located in the copy-in routines generated by the IDL pre-processor. The copy-in routines are used when the application performs a write operation. To improve performance, these routines cache some type information about contained sequences. This causes a problem when writing the same type in multiple domains, because the cached type information is domain specific.
Solution: An option (-N) is added to the IDL pre-processor which disables the type caching in the generated copy-in routines.
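The failure mode can be sketched generically. This is plain Python, unrelated to the actual idlpp output; all names are illustrative. A cache keyed only by type silently reuses the first domain's type information, whereas keying per domain (or disabling the cache, as the -N option does) avoids it:

```python
# Cache keyed by type name only: the first domain to register a type
# "wins", and writes in other domains pick up the wrong handle.
bad_cache = {}

def copy_in_bad(domain, type_name):
    if type_name not in bad_cache:
        bad_cache[type_name] = f"{domain}:{type_name}"  # domain-specific handle
    return bad_cache[type_name]

# Keying the cache per (domain, type) keeps each domain's handle separate.
good_cache = {}

def copy_in_good(domain, type_name):
    key = (domain, type_name)
    if key not in good_cache:
        good_cache[key] = f"{domain}:{type_name}"
    return good_cache[key]

print(copy_in_bad("domainA", "Foo"))   # domainA:Foo
print(copy_in_bad("domainB", "Foo"))   # domainA:Foo -- stale, wrong domain
print(copy_in_good("domainB", "Foo"))  # domainB:Foo
```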
|OSPL-4530||Improved DDSI robustness
In high-throughput situations, DDSI2 could behave quite badly, with retransmit storms and/or temporarily considering reliable readers unresponsive and treating them as effectively best-effort. The long default participant lease duration caused these effects to linger for a long time even after restarting part of the application.
Solution: The risk of retransmit storms and the associated effects has been reduced by improving the mechanism used to control the rate of retransmit requests and improved control over the amount of outstanding unacknowledged data, by configuring bytes rather than samples. The default participant lease duration is now controlled by the ExpiryTime configured for the domain, and will therefore typically have a more reasonable value.
|In cases where DDSI generates builtin topics there is no need for durability to align the builtin topics.
DDSI can discover entities and can generate builtin topic information. This enables non-enterprise nodes in the DDSI network to become visible. Also, in cases where DDSI generates builtin topic information there is no need for durability to align builtin topic information, which saves bandwidth.
Solution: Durability will not align builtin topics when ALL DDSI services generate builtin topics and no native networking services are configured. To force DDSI to generate builtin topics, the corresponding configuration option can be set to TRUE. In all other cases durability will align builtin topics to ensure backwards compatibility.
|A sample predating the oldest sample in the history of a TRANSIENT or PERSISTENT instance could overwrite a newer sample
Due to a fault in the mechanism used to insert a sample into the history of a TRANSIENT or PERSISTENT instance, a sample predating the oldest sample in the history would replace that oldest sample instead of being discarded. This would cause late-joining readers to observe an inconsistent history.
Solution: The mechanism has been fixed to properly order the samples, so a sample that predates the history and doesn't fit is discarded instead of overwriting the oldest sample.
|After durability alignment of a dispose_all message only the first instance is NOT_ALIVE_DISPOSED
When the durability service needed to align a fellow, a stored dispose_all message was only applied to the first instance of the topic. The dispose_all sample for subsequent instances was incorrectly marked as a duplicate, because the duplicate check only compared writerGid, writeTime and sequenceNumber.
Solution: The durability service now also compares the key values of samples before marking them as duplicates.
|After durability alignment of a dispose_all message and a delete_dataWriter the instance_state does not go to NOT_ALIVE_NO_WRITERS
When the durability service needed to align a fellow with a dispose_all message, an implicit registration message is created for a NIL writer. This NIL writer was never removed causing the dataReader instance_state to remain ALIVE.
Solution: The durability service now also sends an implicit unregister message after it sends an implicit register message for a NIL writer.
|When a dataReader instance_state is NOT_ALIVE_NO_WRITERS and it receives a dispose_all, the instance_state does not transition to NOT_ALIVE_DISPOSED
When a dataReader instance is in the NOT_ALIVE_NO_WRITERS state and the last action was a TAKE, the instance pipeline is destroyed. The group updates its state when a dispose_all message is received, but could not forward it to the dataReader, so the dataReader instance_state did not change.
Solution: The group now checks all dataReader instances. When a dataReader has no writers, implicit registration and unregister messages are sent so that the instance pipeline is reconstructed, and destroyed again after the dispose_all is received by the dataReader.
|Durability does not handle merge policy correctly in some cases with terminating and (re)connecting fellows
If durability detects that a fellow is terminating, it removes the fellow from its administration. However, receiving a new message from the fellow after it had been removed resulted in the fellow being added to the administration again (even though it was already terminating). This triggered faulty merge actions that could not be completed.
In case a merge action needs to be performed, durability sometimes needs to wait until the fellow to merge with reaches a certain state. Durability periodically checks whether that state has been reached. However, when the fellow terminated before reaching that state, durability continued to wait for the desired fellow state even though it was clear that the fellow would never reach it.
When durability decides to remove a given fellow from the administration, it needs to check whether the 'merge states' of its name-spaces need to be cleared. This is required to ensure that a merge action is triggered once a new master is elected after the original one disappeared. A potential dummy fellow parameter (with no name-space information) was used in some cases to determine the name-spaces that require resetting. Obviously, name-space information may be missing from such dummy parameters, causing name-space merge-states not to be reset. As a result, durability may conclude that no merge action is required when a new master is elected.
Solution: To prevent adding terminating fellows, the various paths that lead to adding a fellow to the administration have been modified to refrain from adding it if the fellow is in the terminating or terminated state.
The algorithm has been modified to cancel the merge when the fellow terminates or gets disconnected before it reaches the desired state.
The actual fellow that is removed, rather than the dummy one, is now used to determine further actions.
|A deadlock can appear when durability tries to use the KV store during initial alignment, which causes durability to halt forever
During initial alignment, access to the KV store may be required. In this phase of the process, two threads are competing for two resources: the durability administration and the store. Each thread tries to lock both resources, but in a different order, which could lead to a deadlock of the durability service.
Solution: The KV store does not require a lock on the durability administration anymore. This will prevent the deadlock.
|Multiple Ctrl-C presses can cause a crash in the exit-request handler.
Termination requests received in rapid succession could cause a crash in the exit-request handler.
Solution: The handlers installed by services are now executed only once.
|The durability persistentDataListener thread failed to make progress when using the KV store, causing the system to terminate and execute its failure action.
The KV store uses transactions to persist data. When there are many samples to persist, a transaction can take a very long time, possibly exceeding the period within which liveliness must be asserted. In that case the responsible thread is declared dead and leases are no longer renewed, causing the system to execute its failure action.
Solution: To ensure that the persistentDataListener thread can make progress, two improvements have been implemented. The first is to use the liveliness expiry time instead of the heartbeat expiry time to decide whether assertion of liveliness has succeeded; the former is typically larger than the latter, resulting in a more relaxed liveliness assertion policy. The second is to skip liveliness checking during potentially intensive operations on the KV store, such as commit and delete.
|OSPL Source build required MICO and had kvstore library names incorrect
Customers with access to the OSPL Source Build noted a dependency on MICO when building the source code, and that some library links were incorrect.
Solution: MICO is now optional and the links are named correctly.
|When networking compression is used then occasionally an error "Received incorrect message" is reported.
When compression is activated in the networking service and a received compressed network frame contains a user data message whose type is not (yet) known on the node, the networking service cannot deserialize that message and should skip it, continuing with the next user data message in the frame. However, in that case the buffer administration was not updated correctly, resulting in the reported error and the rest of the frame being dropped entirely.
Solution: The buffer administration is now updated correctly when skipping a user data message for which the type information is not known.
|Overflow of the network queue resulted in a stack overflow during cleanup of the network reader
Unregister messages were not obeying the maximum queue size.
Solution: The admission check now rejects a message when the queue size is equal to or greater than the maximum queue size, since unregister messages can grow the queue beyond the maximum.
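A hedged sketch of this kind of admission check (plain Python; the function and its parameters are illustrative, not the networking service's API). Because unregister messages bypass the limit and can push the queue past the maximum, regular messages must be rejected as soon as the queue has reached the maximum, not only once it exceeds it:

```python
def try_enqueue(queue, msg, max_size, is_unregister=False):
    """Admit msg to queue (a list) unless the queue is already at or
    above max_size. Unregister messages bypass the limit, which is
    why the queue can temporarily grow beyond max_size."""
    if is_unregister:
        queue.append(msg)
        return True
    if len(queue) >= max_size:   # '>' here would admit one message too many
        return False
    queue.append(msg)
    return True

q = []
assert try_enqueue(q, "m1", max_size=2)
assert try_enqueue(q, "m2", max_size=2)
assert not try_enqueue(q, "m3", max_size=2)            # at max: rejected
assert try_enqueue(q, "unreg", max_size=2, is_unregister=True)
print(len(q))  # 3: the unregister message pushed the queue past the limit
```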
|Reason for termination of domain not reported in all situations
The splice-daemon attempts to clean up shared resources of processes that terminated without cleaning them up. If it fails to do so, it does not report anything in the log files in some situations before stopping the domain. Additionally, if the cleaning up did not complete within 1 second, the splice-daemon assumed that cleaning up had failed.
Solution: Extra logging has been added to ensure the reason for stopping is clear for users. Furthermore, the time out for cleaning up has been slaved to the existing lease expiry time-out (//OpenSplice/Domain/Lease/ExpiryTime) instead of a fixed period of 1 second.
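The lease expiry time referred to above lives in the OpenSplice XML configuration under the path //OpenSplice/Domain/Lease/ExpiryTime. A minimal sketch of the relevant fragment follows; the value shown is purely illustrative:

```xml
<OpenSplice>
  <Domain>
    <Lease>
      <!-- Also governs the shared-resource cleanup time-out -->
      <ExpiryTime>10.0</ExpiryTime>
    </Lease>
  </Domain>
</OpenSplice>
```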
| Inconsistency between report level verbosity and reports for FATAL and CRITICAL verbosity
Reports at levels FATAL and CRITICAL were emitted as "FATAL ERROR" and "CRITICAL ERROR" respectively. This is inconsistent and produces open and close tags that contain whitespace.
Solution: The reports are now emitted with text FATAL and CRITICAL, corresponding to the verbosity level.
Fixed bugs and changes affecting the API in Vortex OpenSplice 6.4.1
|get_all_data_disposed_topic_status() method is now implemented
Solution: The get_all_data_disposed_topic_status() method has been implemented in C++ and Java language bindings.
Vortex OpenSplice 6.4.0p7
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p7
|Nested IDL modules not properly handled by the C++ RMI compiler
With rmipp, the handling of nested IDL modules was generating incorrect code.
Solution: A bug in the rmipp code generators has been fixed, so that nested IDL modules map properly onto nested C++ namespaces.
|Deadlock in listenerEvent when terminating a domain and calling delete_contained_entities
A deadlock could occur when an application created a domainParticipant with listeners and the domain was terminated while the application called delete_contained_entities. The listenerEventThread would wait indefinitely on a waitset that was never signalled, because the notify failed due to the splice daemon no longer running.
Solution: The listenerEventThread now has a polling wait loop, allowing it to detect stop requests.
Vortex OpenSplice 6.4.0p6
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p6
|The cmsoap service crashes when there is no network connection present at startup of the service.
The cmsoap service tries to determine the IP addresses through which it can be reached. These IP addresses are set in the user data field of the DCPSParticipant builtin topic, which enables other tools to connect to the soap service. However, when no IP address was found, a crash occurred because uninitialized memory was accessed.
Solution: When the cmsoap service cannot detect an IP address it now uses the loopback IP address instead.
|The write method incorrectly returns TIMEOUT when there are not enough resources available and none can be freed in time.
When the write method detects that there are not enough resources available and that resources cannot become available in time, e.g. max instances exceeded, it should return OUT_OF_RESOURCES instead of TIMEOUT.
Solution: When the max instances resource limit is exceeded, OUT_OF_RESOURCES is returned.
|Deadlock in parallel demarshalling termination
When starting and stopping parallel demarshalling within a short time window it was possible that the parallel demarshalling termination was stuck in a deadlock. Not all spawned threads would terminate, because the terminate flag was reset before all parallel demarshalling threads were operational.
Solution: The API set_property function with property name parallelReadThreadCount is now blocking until all parallel demarshalling threads are started and operational and the terminate flag is now reset upon (re)start of parallel demarshalling.
|"FATAL ERROR Open Splice Control Service status monitoring failed. Exiting." logged when sending signal to blocking OSPL tool.
When "ospl -f start" was executed and a signal was sent to the OSPL tool, a FATAL ERROR message was logged. This was not a real fatal error: the part of the OSPL tool that monitored the liveliness of the splice daemon was not aware of incoming signals and logged a FATAL ERROR even though the splice daemon terminated normally.
Solution: The part of the OSPL tool that monitors the liveliness of the splice daemon has been made aware of termination caused by a received signal.
|When a DataWriter exits unnaturally the LivelinessStatus is incorrect.
When a DataWriter exited unnaturally while it was Alive, the LivelinessStatus was updated incorrectly, causing an illegal state transition.
Solution: The liveliness state change for unnatural DataWriter exits now uses the last known state before transitioning to DELETED.
|OSPL-4509||DDSI2E now accepts DDSI2 configurations
Solution: DDSI2E required that the configuration in the OpenSplice XML configuration file was tagged "DDSI2EService", but this made it impossible to switch to the DDSI2E service without changing the configuration file. DDSI2E now also accepts configurations under the DDSI2Service tag.
Vortex OpenSplice 6.4.0p5
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p5
|The 'ospl start' command can exit before the DDS Domain is up.
On somewhat slower systems, the 'ospl start' command can exit before the DDS Domain is up. This would mean that creating a DomainParticipant immediately after 'ospl start' can fail.
Solution: The 'ospl start' command now waits until the DDS Domain is up before exiting.
|Java language binding fails with multiple package redirects
Java language binding fails when multiple packages are redirected and a type containing a type from a redirected package is registered.
Solution: Pass all redirect instructions for all types to the Java language binding.
Vortex OpenSplice 6.4.0p4
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p4
|OSPL-4116||A potential crash could occur when durability terminates
The durability service notified the splice daemon too early when it was about to terminate. This could lead to a situation where the splice daemon destroys the shared memory while durability is still busy cleaning up objects and thus accessing that memory, which could crash the system.
Solution: The durability service now notifies the splice daemon after it has cleaned up all objects, when no access to shared memory is needed any more. The splice daemon can then safely destroy the shared memory.
|JNI attach listener crash results in OpenSplice crash without any error report
When a crash occurred inside the JNI call that attaches the listener thread to the application, OpenSplice crashed without a proper report.
Solution: A proper error report is now generated so the customer knows what went wrong.
|Late joining readers not getting complete historical data when more than one networking service configured
When more than one networking service is configured, duplicate messages may be received. A reader filters these duplicates, but the group does not; when the corresponding Topic QoS has a history depth greater than 1, the duplicate messages may be stored in the group. This may cause a late-joining reader to receive an incorrect number of samples.
Solution: The group now checks whether a duplicate message is received and drops the duplicates.
|DDSI2 not supporting QoS changes not documented
The DDSI2 networking service does not (yet) support QoS changes, instead silently ignoring them, but this was not mentioned in the documentation.
Solution: This limitation is now stated clearly in the DDSI2 release notes.
|OSPL-4320||Java 7 linux 64 bit crash with Listener example
When running the Listener example under linux 64 bit with java 7 it could crash.
Solution: The default listener stack size was 64k; for Java 7 this needs to be at least 128k, so the default listener stack size has been increased to 128k.
|Deserialisation issues with Java CDR-based copy-out, high-performance persistent store and RnR binary storage
The CDR serialiser used for Java CDR-based copy-out, the new high-performance persistent store and RnR binary storage could introduce incorrect padding in the CDR stream under some circumstances. To a reasonable approximation, this requires a type of unbounded size, or one where the maximum size is several times the minimum size, AND where the content of the data results in a serialised size larger than 16kB, AND where a string, sequence or array with alignment of less than 8 bytes requires a new block at a time the CDR stream is not aligned to a multiple of 8 bytes.
Solution: The CDR serialiser now maintains the alignment of the stream when it switches to a new block.
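CDR alignment can be sketched with a helper that computes the padding a serialiser inserts before a primitive of a given alignment at a given stream offset. This is generic CDR arithmetic, not the OpenSplice serialiser; the bug described above amounted to restarting the offset at a new block without preserving the stream's alignment phase:

```python
def cdr_padding(offset, alignment):
    """Bytes of padding CDR inserts so that a primitive with the given
    alignment starts at a multiple of its alignment."""
    return (alignment - (offset % alignment)) % alignment

# At stream offset 6, a 4-byte int needs 2 bytes of padding...
print(cdr_padding(6, 4))   # 2
# ...but if a new block resets the local offset to 0 while the global
# stream offset is still 6, no padding is inserted, so reader and
# writer disagree about where the int starts.
print(cdr_padding(0, 4))   # 0 -- the wrong answer for stream offset 6
```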
|OSPL-4366||Durability service may crash during termination
During termination, the durability service stops its threads and cleans up its administration. Because the main thread cleaned up part of the administration used by the lease thread before ensuring that thread had stopped, the lease thread could access already freed memory, which may cause the service to crash.
Solution: The main durability service thread now ensures the lease thread has stopped before freeing the administration.
|OSPL-4390||SOAP service may crash when concurrently using and freeing the same entity
The SOAP service allows PrismTech tools to connect to a node or process remotely. Due to the multi-threaded nature of the service, multiple requests can be handled concurrently. When the same entity is concurrently accessed and freed during two or more requests, the service may crash due to the fact one of the threads is trying to access already freed memory.
Solution: The internal API of the SOAP service has been re-factored to claim an entity when it is used and release it afterwards. When an entity is freed when one or more claims are still outstanding, new claims are denied and the actual deletion is postponed until all of the outstanding claims have been released.
|OSPL-4421||Error reports about instance handles are mixed up
Each call that has an instance handle parameter as well as a sample parameter on the DataWriter entity (like for instance the write call), validate whether the provided instance handle belongs to the DataWriter and if so validates whether the key-values in the sample match the key-value that is associated with the instance handle. If one of these conditions is not true, an error is reported and the call fails. However, the errors that are printed in the two failure cases have been mixed up causing the wrong error message to be reported in both these cases.
Solution: The error reports have been updated to match the actual error that occurred.
Vortex OpenSplice 6.4.0p3
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p3
|The read_w_condition may incorrectly return no data when a read_next_instance_w_condition was called before.
The read_next_instance_w_condition may incorrectly set the no-data property of the associated query to indicate that no data matches the query. This may cause a subsequent read_w_condition to return no data even when data is available.
Solution: The read_next_instance_w_condition now only sets the no-data property of the associated query when a complete walk over all instances has been performed.
|A crash of the durability service may occur when samples containing strings with non-printable characters are stored in the XML persistent store.
When the persistent XML store contains samples with strings containing non-printable characters, the durability service may crash because the layout of the XML storage file is not as expected.
Solution: The XML serializer used by durability to serialize the samples for the XML store now escapes non-printable characters.
Vortex OpenSplice 6.4.0p2
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p2
|The notification of a sample lost event by the networking service may result in a crash of the networking service.
When the networking service detects that samples have been lost, it tries to notify the corresponding readers of this event. The sample lost event is recorded in the status of the reader. However, some internal readers used by OpenSplice did not have an associated status object, causing the networking service to crash.
Solution: All internal readers are now provided with a status object.
|Ospl reports an error if no persistent file is present and using KV Store.
With the KV store, at ospl startup, if no persistent file is available, ospl reported an error: getset_version: read of version failed.
Solution: The spurious error message has been removed; it was not required. There is no behaviour change.
|Crashes due to shared memory allocator issue
A refactoring of common code introduced an issue in the shared memory sub-allocator dealing with allocating "large" objects that could result in crashes or reports of heap corruption under high-load scenarios. For crashes, this typically (but not necessarily) involves stack traces involving the "check_node" function.
Solution: The specific changes have been reverted until they can be corrected and re-tested.
|Nullpointer exception when creating a reader/writer using the Tuner
When using the Tuner to create a reader or writer, a null pointer exception could occur when selecting a different topic from the pulldown menu.
Solution: The null pointer exception is now caught and no longer appears.
Vortex OpenSplice 6.4.0p1
Fixed bugs and changes not affecting the API in Vortex OpenSplice 6.4.0p1
|Inconsistent behaviour of service when handling signals.
The service should handle asynchronous signals like SIGQUIT or SIGTERM as normal termination requests, which should not trigger a failure action. However, the handling of these termination requests was not correct and could result in either a normal termination or an exception that triggers the failure action.
Solution: When a service receives a termination request signal like SIGQUIT or SIGTERM, it initiates a normal termination and does not trigger a failure action. When the service receives a synchronous signal like SIGSEGV, whether raised by an exception or sent asynchronously to the service, it detaches from shared memory and triggers the failure action.
|Shared memory consumption would increase to unacceptably high levels when using KV persistency
When KV persistency is enabled, shared memory consumption could reach unacceptably high levels. This was caused by two phenomena. First, the StoreSessionTime configuration option was not respected, causing the system to keep storing KV samples as long as samples were available. Second, an inefficient algorithm was used to store samples on disk, resulting in (expensive) disk access for every sample. Together, these phenomena caused samples to pile up in memory faster than they could be stored.
Solution: A more efficient algorithm is now used to store samples on disk, accessing the disk for a set of samples instead of for each individual sample. This boosts performance when writing samples to disk and reduces the risk of piling up data in memory, which was the cause of the unacceptably high memory consumption. The StoreSessionTime is now also respected.
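The gain from batching can be sketched generically (plain Python, counting simulated store commits; the function names are illustrative, not the durability API). Committing once per batch instead of once per sample divides the number of expensive store operations by the batch size:

```python
commits = 0

def commit():
    global commits
    commits += 1   # stands in for an expensive KV-store transaction commit

def store_per_sample(samples):
    for _ in samples:
        commit()           # one disk transaction per sample

def store_batched(samples, batch_size=100):
    for i in range(0, len(samples), batch_size):
        batch = samples[i:i + batch_size]
        commit()           # write len(batch) samples, then commit once

samples = list(range(1000))
store_per_sample(samples)
per_sample = commits
commits = 0
store_batched(samples)
print(per_sample, commits)  # 1000 vs 10
```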
|On Windows, setting a long lease time in the configuration results in a large error log during ospl stop
When a termination request was made, services with long lease times configured would generate a large amount of error messages on Windows, because the termination was not acknowledged within the lease time.
Solution: The sleeping lease threads of ospld, durability and soap are now signalled to stop during their sleep when a termination request is made.
|The network partition mapping of the expression "." does not function correctly.
When the network partition mapping expression "." is evaluated to find the best match, the global partition is selected instead of the configured network partition.
Solution: The global partition is now excluded from the search for a best matching network partition and is selected only when no other network partition matches the mapping expression.
|The read_instance method sometimes returns ALREADY_DELETED but the reader entity has not been deleted.
This situation occurs when the instance referenced by the instance handle supplied to read_instance has been deleted, for example when the instance has become disposed and unregistered. In that case the instance handle becomes invalid. The return code ALREADY_DELETED is incorrect; it should be BAD_PARAMETER, to indicate that the instance is no longer valid.
Solution: When read_instance detects that the provided instance handle has become invalid, it returns BAD_PARAMETER.
|Linker error in custom library compilation for CORBA C++ cohabitation with V6.4.0
DDS_CORE is no longer set in the custom library environment and so can no longer be used by the linker.
Solution: The custom library makefiles have been changed to replace the DDS_CORE environment setting with ddskernel.
Vortex OpenSplice 6.4.0
|idlpp did not generate valid Java code when a union had a case called 'discriminator'
When an IDL union contained a case called 'discriminator', the generated Java code contained two conflicting definitions of the 'discriminator' method. This method is always included in a class generated from a union to obtain the value of the union discriminator; a case named 'discriminator' added a second method with the same signature returning the value of the discriminator field.
Solution: Following the IDL to Java specification, the function returning the discriminator value is now prefixed with '_' when the union contains a case called 'discriminator'.
|Reporting does not include timezone information
In scenarios where nodes are joining and/or leaving a domain, the timestamps in the default info and error logs did not include timezone information. When the timezone of a system is altered while OpenSplice is running, the reports may appear out of order.
Solution: To resolve any uncertainty, the locale-dependent abbreviated timezone has been added to the date format.
|OSPL-1705/1713/1714||Durability service XML persistency handles topics with string keys incorrectly
If multiple string keys exist for a topic that is being persisted by the durability service, samples for different instances could be interpreted as samples for the same instance, potentially causing samples to be overwritten while both were supposed to be maintained. Secondly, if one or more key values contained newlines, storage in XML was done in a way that prevented the data from being republished correctly after a system restart. Finally, if a string key matched the "" closing tag in the XML implementation, samples matching this key were not persisted in all cases.
Solution: Key values are now escaped when storing them in XML. The change is backwards compatible, meaning that the new version can cope with old persistent stores.
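The kind of escaping involved can be sketched as follows; this is an illustrative round-trip helper, not the actual durability code, and the chosen entity set is an assumption:

```java
// Minimal sketch of escaping string key values before embedding them in
// XML, so that newlines and markup-like characters survive a
// store/restore round trip.
final class KeyEscape {
    public static String escape(String key) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < key.length(); i++) {
            char c = key.charAt(i);
            switch (c) {
                case '<':  sb.append("&lt;");  break;
                case '>':  sb.append("&gt;");  break;
                case '&':  sb.append("&amp;"); break;
                case '\n': sb.append("&#10;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static String unescape(String s) {
        // '&amp;' must be restored last, otherwise escaped ampersands
        // could be mistaken for other entities.
        return s.replace("&#10;", "\n")
                .replace("&lt;", "<")
                .replace("&gt;", ">")
                .replace("&amp;", "&");
    }
}
```

Because every character that could be confused with markup is encoded, a key value containing a newline or a closing tag can no longer corrupt the stored XML.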
Strange error message from DDSI2 for truncated packets on Windows
On Windows, when a message is truncated because there is insufficient receive buffer space available, the error message produced by DDSI2 could be confusing, because Windows reports this as an error whereas DDSI2 assumed the POSIX behaviour of treating it as an unusual situation rather than an actual error. The behaviour of DDSI2 itself was correct, in that it discarded the message regardless of the platform.
Solution: The error reported by Windows is now recognised and reported properly.
|OSPL-2485||Idlpp generated invalid java for a union with only a default case
When a union contained only a default case, the generated Java code performed an invalid check on the discriminator and therefore did not compile.
Solution: The discriminator check is not valid when there is only a default case; the fix removes the check in this scenario.
OS_INVALID_PID is accepted as a valid processID in the abstraction layer
The OS abstraction layer functions os_procDestroy and os_procCheckStatus accepted OS_INVALID_PID as valid input. Especially in os_procDestroy, which is able to send signals to processes, this could cause undesired behaviour.
Solution: The functions os_procDestroy and os_procCheckStatus now return an error when the process ID OS_INVALID_PID is passed.
|OSPL-2616||Internal change: File extension change for files generated for Corba co-habitation CPP
When generating code from an IDL file for Corba co-habitation, the inline files were generated with a .i extension.
Solution: The inline files are now generated with the default file extension used by TAO, which is .inl.
|OSPL-3042||The library versions of sqlite and leveldb supplied by Opensplice may conflict with system supplied builds.
An OpenSplice delivery contains particular versions of the sqlite and leveldb libraries on which OpenSplice depends. These libraries are installed in the OpenSplice install directory. However, the versions of these libraries may conflict with newer versions that are available on the system on which OpenSplice is installed.
Solution: The names of the supplied sqlite and leveldb libraries are made Opensplice specific by adding an ospl postfix.
|OSPL-3151-1||Receiving unidentifiable duplicate messages during durability alignment when using DDSI2
When using DDSI2 it was possible that during durability alignment a duplicate message was received that could not be identified as a duplicate because the sequence numbers differed. DDSI2 increments its own sequence number for each message it sends; this sequence number is unrelated to the message sequence number, which was not communicated to the receiving node when using DDSI2.
Solution: The message sequence number is now communicated to the receiving node. A PrismTech-specific flag is added to the SPDP to indicate that the message sequence number is sent, and the message sequence number is added to all messages transferred using DDSI2. Based on the presence of the PrismTech-specific flag, either the message sequence number or the DDSI2 sequence number is copied into the internal messages.
|OSPL-3151-2||Publication/Subscription matched logic incorrect
On every non-dispose Publication/Subscription matched message (with a compatible reader/writer) the Publication/Subscription matched count was incremented, but it was only decremented on a dispose Publication/Subscription matched message.
Solution: The Publication/Subscription matched count is now only incremented when a match is noticed for the first time or when QoS settings become compatible when they were not before. The count is decremented on a dispose Publication/Subscription matched message, or on a matched message whose QoS settings are no longer compatible.
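The corrected bookkeeping can be sketched as a small state tracker; the class and method names are illustrative, not OpenSplice internals:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fixed matched-count logic: increment only on the first
// compatible discovery of a remote endpoint (or when its QoS becomes
// compatible again), decrement on dispose or on lost QoS compatibility.
final class MatchTracker {
    private final Map<String, Boolean> matched = new HashMap<>();
    private int currentCount = 0;

    public void update(String endpoint, boolean compatible, boolean disposed) {
        boolean wasMatched = matched.getOrDefault(endpoint, false);
        if (disposed || !compatible) {
            if (wasMatched) currentCount--;   // lost an existing match
            matched.remove(endpoint);
        } else if (!wasMatched) {
            currentCount++;                   // first compatible sighting
            matched.put(endpoint, true);
        }
        // A repeated compatible update for an already-matched endpoint
        // deliberately leaves the count unchanged (this was the bug).
    }

    public int currentCount() { return currentCount; }
}
```

The key point is the `wasMatched` check: before the fix every compatible update incremented the count, so repeated discovery messages inflated it.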
|Limiting sample size with DDSI2
Solution: DDSI2 now allows setting an upper limit to the allowed size of the serialised samples, as an added protection mechanism against running into memory limits. The limit is applied both on outgoing and on incoming samples, and any dropped samples are reported in the info log. By default, the limit is 1 byte short of 2 GiB.
idlpp issues compiling in standalone C++ mode
Description: idlpp generated uncompilable code for anonymous sequences of sequences of basic IDL types that followed typedefs of those same basic types. It also produced an incorrect definition of anonymous array slice types.
Solution: idlpp generates the correct code in these circumstances.
|OSPL-3463||Durability using KV persistency may report error while backing up
Description: When durability has been configured to use KV persistency and is backing up the persistent store an error may be reported when no data exists yet for a given name-space even though nothing goes wrong.
Solution: The error message is not reported any more.
|Potential crash during initial alignment after a dispose_all_data call.
Description: The dispose_all_data call creates specific samples that were not compatible with durability alignment. The durability service could not handle these samples, while there are possible scenarios where they get stored in a persistent store. The service incorrectly forwarded all initial alignment data to the networking service, which could result in a crash since that service could also not handle these samples, which are meant for local delivery only. A crash could also occur if the dispose_all_data sample was the first sample to be received, which could happen because of order reversal during alignment or in combination with a lifespan QoS on the corresponding data.
Solution: The durability service was modified to exchange initial alignment data only over the durability partition instead of delivering it directly to a networking service. Order reversal during initial alignment was changed so that samples are ordered first by timestamp and then by writer (instead of the other way around). Support was added for handling a dispose_all_data sample that is the first sample to be received, e.g. when the lifespan of the preceding data samples has expired.
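The revised ordering amounts to a two-level sort key; the sketch below shows such a comparator in Java (the `Sample` record and its fields are illustrative, not the internal sample representation):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of alignment ordering: sort first by source timestamp, then by
// writer id, so interleaved histories from different writers merge in
// time order rather than being grouped per writer.
final class AlignmentOrder {
    static final class Sample {
        final long timestamp; final long writerId;
        Sample(long t, long w) { timestamp = t; writerId = w; }
    }

    static final Comparator<Sample> BY_TIME_THEN_WRITER =
        Comparator.<Sample>comparingLong(s -> s.timestamp)
                  .thenComparingLong(s -> s.writerId);

    public static Sample[] sort(Sample[] samples) {
        Sample[] copy = samples.clone();   // leave the input untouched
        Arrays.sort(copy, BY_TIME_THEN_WRITER);
        return copy;
    }
}
```

Ordering by writer first (the old behaviour) could deliver an entire writer's history, including a late dispose_all_data sample, before any other writer's older samples; timestamp-first ordering avoids that.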
|OSPL-3520||Opensplice host and target type for Linux OS's has changed from x86.linux2.6 to x86.linux and x86_64.linux2.6 to x86_64.linux
Description: The installation path is affected by this change; the top-level directory on a Linux platform has changed as follows.
Solution: Before - PrismTech/OpenSpliceDDS/V6.3.2/HDE/x86_64.linux2.6/
Now - PrismTech/OpenSpliceDDS/V6.4.0/HDE/x86_64.linux/
|OSPL-3587||Durability exposes and aligns local DDSI2 partitions
Description: The DDSI2 service creates local partitions formatted as '__NODEBUILT-IN PARTITION__' that are purely local to the federation. The durability service however aligned them with others, causing the local data to be exposed to other federations as well.
Solution: The durability service now checks whether a partition has the aforementioned format and refrains from exposing such partitions to other durability services.
|OSPL-3601||Extend support for network service data compression and durability datastores.
Description: Windows, Enterprise Linux and Solaris distributions now include support for zlib, lzf and snappy compressors in the networking service, and for LevelDB (not windows) and SQLite (not solaris) datastore plugins for durability.
Solution: Extra platform support added.
|OSPL-3603||Various internal trace messages are reported in ospl-info.log
Description: The ospl-info.log showed various messages about internal threads being started and stopped. These messages are neither meaningful nor relevant to users, and they make it harder to find the messages that are important.
Solution: These internal messages are no longer reported.
|Spliced could crash when terminating under abnormal circumstances
Description: The spliced exit handling must stop all OpenSplice threads accessing shared memory before detaching from it. However, if processes had been killed using the KILL signal, this did not always happen correctly because spliced would incorrectly assume the shared memory was still in use.
Solution: The termination code now ensures thread termination.
|OSPL-3644||Durability service may perform alignment multiple times
Description: The durability service aligns samples per partition-topic combination and in some cases could perform this alignment multiple times.
Solution: The durability service now checks actively whether it already performed alignment for a given partition-topic combination before initiating the alignment.
|Durability service does not terminate in time.
Termination hung because the listener could not be stopped while listener actions were still active. The listener actions remained active because they were unaware of termination.
Solution: Listener actions are now aware of termination and stop accordingly.
|Services are able to outlive the splice daemon
When the splice daemon terminated without the use of the ospl tool, it was possible for a service to outlive the splice daemon.
Solution: The splice daemon now kills all services that remain alive after the service terminate period has elapsed.
| After failure action systemhalt it was possible that shared memory was not cleaned-up/deleted.
When a service failed with the failure action systemhalt set, it was possible that shared memory and the key file were not cleaned up/deleted. This occurred because the splice daemon incorrectly assumed that the terminated service had been unable to decrease the kernel attach count upon its termination; the attach count was then decreased twice, causing the shared memory cleanup/deletion calls to fail.
Solution: The splice daemon no longer assumes that terminated services were incapable of performing proper termination; it now only decreases the kernel attach count during termination when all services have terminated.
|OSPL_LOGPATH included in host:port check for tcp logging mode
Description: Log file names are checked for host:port combinations twice. The second check is done when the path prefix and log file name are concatenated, which leads to incorrect behavior if the value specified in OSPL_LOGPATH contains a colon.
Solution: Split prefix and log file name before checking for a host:port combination.
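The essence of the fix is that the host:port test inspects only the log file name, never the path prefix. A minimal sketch (the class, method, and port heuristic are illustrative, not the actual OpenSplice code):

```java
// Sketch of the corrected check: a colon in the OSPL_LOGPATH prefix
// (e.g. a Windows drive letter such as "C:\logs") can no longer make a
// plain file name look like a host:port logging target.
final class LogTarget {
    // Returns true only when the file name component looks like host:port.
    public static boolean isHostPort(String prefix, String fileName) {
        int colon = fileName.lastIndexOf(':');
        if (colon <= 0 || colon == fileName.length() - 1) return false;
        String port = fileName.substring(colon + 1);
        for (int i = 0; i < port.length(); i++) {
            if (!Character.isDigit(port.charAt(i))) return false;
        }
        return true;  // the prefix is intentionally ignored
    }
}
```

Before the fix, concatenating prefix and file name first meant a colon anywhere in the combined path could trigger the tcp logging mode.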
|OSPL-3786||Latency spikes on reliable channel
Description: On some occasions the latency on a reliable channel would spike periodically (at most once per resolution) due to a mechanism used to limit bandwidth kicking in even when the limit didn't need to be enforced.
Solution: The logic has been enhanced to only activate the mechanism when bandwidth needs limiting.
|More strict SAC idlpp sequence support functions creation
Description: SAC idlpp creates support functions for sequences (like allocbuf()). These are created when an IDL file defines an actual sequence (like sequence), but also when the type is related to a Topic (in other words, '#pragma keylist' is added to the type) in order to be able to create readers and writers. When IDL file A contains a topic type and IDL file B includes A and defines a sequence of that topic, sequence support functions are created in the result files of both A and B. The effect is that the generated code doesn't compile due to 'multiple definitions'.
Solution: When creating a sequence, idlpp now checks if the sequence is within the same file as the type. If not, it checks if the type has a keylist related to it. If so, the type is a topic and the sequence support functions have already been created, so they are not created a second time.
| DDSI uses more ports beyond those specified in the Deployment Guide
Description: The Deployment Guide describes exactly which set of ports is used by DDSI and how this set can be configured. Some versions of OpenSplice (6.3.x except 6.3.0) additionally used two or more (one more than the number of configured channels) kernel-allocated port numbers strictly for transmitting data.
Solution: The use of the additional ports has been eliminated and the behaviour is in line with the deployment guide again.
|OSPL-3853||Improve the performance of the waitset wait operation.
Description: The performance of the waitset wait operation can be improved by evaluating the conditions trigger status within the kernel layer.
Solution: Evaluate the trigger status of the conditions attached to the waitset within the kernel layer.
|OSPL-3860||Remove unnecessary allocation of a timestamp when updating the deadline administration.
Description: When updating the deadline information of a writer or reader a new timestamp is allocated. By using the timestamps already present in the corresponding sample the extra timestamp allocation can be removed.
Solution: Use the timestamps present in the sample when updating the deadline information of the corresponding instance.
|OSPL-3861||Improve the performance of the read/take operations by updating the corresponding administration without extra memory allocations.
Description: A read or take operation will update the reader administration. For this update memory is allocated. The performance of the read or take operation can be improved by removing the extra memory allocation.
Solution: Update the reader administration without allocating temporary memory during this update.
The Java QosProvider constructor may throw a NullPointerException
Description: When a parse error occurred, the Java constructor explicitly threw a NullPointerException. This is not in line with the other APIs and the language mapping.
Solution: The QosProvider constructor no longer throws a NullPointerException. Instead the constructor always succeeds and subsequent invocations on the QosProvider will return DDS.RETCODE_PRECONDITION_NOT_MET. The API furthermore performs more thorough error checking within JNI; if an exception occurs, DDS.RETCODE_ERROR is returned instead of an exception being raised.
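The changed error model can be illustrated with a self-contained sketch; `QosProviderSketch` and its stand-in "parsing" are hypothetical, only the return-code values come from the DDS specification:

```java
// Sketch of the new behaviour: a parse failure no longer throws from
// the constructor; the object is created in a failed state and later
// operations report RETCODE_PRECONDITION_NOT_MET.
final class QosProviderSketch {
    public static final int RETCODE_OK = 0;
    public static final int RETCODE_PRECONDITION_NOT_MET = 4; // per DDS spec

    private final boolean parsedOk;

    public QosProviderSketch(String uri) {
        boolean ok;
        try {
            // Stand-in for real XML parsing of the QoS profile.
            ok = uri != null && uri.endsWith(".xml");
        } catch (RuntimeException e) {
            ok = false;  // never propagate an exception from the constructor
        }
        parsedOk = ok;
    }

    public int getParticipantQos() {
        return parsedOk ? RETCODE_OK : RETCODE_PRECONDITION_NOT_MET;
    }
}
```

Callers thus handle a bad QoS URI through return codes, the same way other DDS API failures are reported.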
|OSPL-3997||Networking defragmentation buffers refcount issue
Description: Static analysis of RTnetworking code revealed a potential issue with the administration of the defragmentation buffers. An atomically modified counter was accessed without atomic access, allowing a potential race-condition.
Solution: The counter is correctly accessed now.
|Workaround for issue when using Jamaica VM
Description: When OpenSplice is used with JamaicaVM, JamaicaVM crashes due to a difference in how a JNI call to NewStringUTF is handled.
Solution: A workaround is implemented in OpenSplice to ensure that JamaicaVM no longer crashes.
The fixed bugs and changes for the other V6 releases can be found on these pages:
- Fixed bugs and changes in Vortex OpenSplice 6.10.x
- Fixed bugs and changes in Vortex OpenSplice 6.9.x
- Fixed bugs and changes in Vortex OpenSplice 6.8.x
- Fixed bugs and changes in Vortex OpenSplice 6.7.x
- Fixed bugs and changes in Vortex OpenSplice 6.6.x
- Fixed bugs and changes in Vortex OpenSplice 6.5.x
- Fixed bugs and changes in Vortex OpenSplice 6.4.x
- Fixed bugs and changes in Vortex OpenSplice 6.3.x
- Fixed bugs and changes in Vortex OpenSplice 6.2.x
- Fixed bugs and changes in Vortex OpenSplice 6.1.x