OPC UA File Transfer internals
This article describes certain internal aspects of the OPC UA File Transfer implementation in QuickOPC.
Object type checking
Before performing the actual file transfer operation, the QuickOPC libraries check whether the target object exists and is of the intended type (file or directory). Without this check, some operations would not behave predictably and could return errors that do not describe the actual problem well.
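The idea of the pre-check can be sketched as follows. This is an illustrative sketch only; the names used here (FileSystemEntry, open_file) are hypothetical and are not part of the QuickOPC API.

```python
from dataclasses import dataclass

@dataclass
class FileSystemEntry:
    name: str
    kind: str  # "file" or "directory"

def open_file(entry):
    """Verify that the target exists and is a file before operating on it,
    so the caller gets a descriptive error instead of an obscure one."""
    if entry is None:
        raise FileNotFoundError("The target object does not exist.")
    if entry.kind != "file":
        raise IsADirectoryError(
            f"'{entry.name}' is a directory; a file was expected.")
    return f"handle:{entry.name}"
```

Without such a check, the error would only surface later, from whichever operation happens to fail first on the wrongly-typed object.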
Connection locking
OPC UA file handles are only valid within the scope of the OPC UA session. If the client disconnects from the server and establishes a new session, the previously obtained file handle is no longer (guaranteed to be) valid. Since QuickOPC normally disconnects from the server automatically when the session is not needed, this could cause a problem if the file handle were needed over a longer period of time.
The file transfer implementation prevents this issue by locking the connection (forcing the session to stay open) until the file handle is disposed of.
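The locking mechanism can be illustrated with a minimal sketch, assuming a simple lock-count scheme; the classes below (Session, FileHandle) are hypothetical and do not reflect the actual QuickOPC types.

```python
class Session:
    """Models a client session whose automatic disconnection can be suppressed."""
    def __init__(self):
        self.lock_count = 0
        self.connected = True

    def release_if_idle(self):
        # Auto-disconnect only when no file handle is holding a lock.
        if self.lock_count == 0:
            self.connected = False

class FileHandle:
    """Keeps the session locked (open) for as long as the handle lives."""
    def __init__(self, session):
        self.session = session
        session.lock_count += 1  # lock the connection

    def close(self):
        self.session.lock_count -= 1
        self.session.release_if_idle()  # the session may now disconnect

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

session = Session()
with FileHandle(session):
    session.release_if_idle()               # auto-disconnect is suppressed
    connected_while_open = session.connected  # still True
# the handle is now disposed of; the session is free to disconnect
```

The essential point is that disposing of the file handle is what releases the lock, which is why handles should be disposed of deterministically rather than left to garbage collection.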
Note that "forced" session terminations, such as those caused by long-lasting network disconnections, can still render the OPC UA file handle invalid, which will manifest itself by persistent errors returned from operations made with that handle.
Stream buffering
In OPC UA, it is important to prevent "chatty" communication: operations should be grouped together and performed on larger bodies of data whenever possible. This is because the communication between the client and the server can be (relatively) slow, and each service call introduces its own time delay.
When OPC UA files are exposed as (.NET) streams, the developer is free to read or write any number of bytes at a time. It is not uncommon for an algorithm to read or write a small number of bytes many times in a sequence. This could lead to very poor performance with OPC UA, because every read or write would require a separate OPC UA service call. There is, however, nothing intrinsically wrong with accessing the stream in this way, and many stream-based algorithms are written abstractly, unaware of the ramifications of the underlying technology.
In order to prevent performance degradation when stream data is accessed in small segments, QuickOPC (unless instructed otherwise) automatically buffers all operations on OPC UA streams in memory. The default buffer size is 4096 bytes (in version 2021.2). Specifying 0 for the buffer size (e.g. in the QuickOPC methods that open the stream) turns off stream buffering.
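The effect of buffering can be demonstrated generically with the standard Python I/O classes; the RemoteFile class below is a stand-in, not the QuickOPC implementation, and models each low-level read as one OPC UA service call.

```python
import io

class RemoteFile(io.RawIOBase):
    """Stand-in for an OPC UA file: every readinto() models one service call."""
    def __init__(self, data):
        self._data, self._pos, self.calls = data, 0, 0

    def readable(self):
        return True

    def readinto(self, b):
        self.calls += 1  # one (slow) OPC UA Read service call
        chunk = self._data[self._pos:self._pos + len(b)]
        b[:len(chunk)] = chunk
        self._pos += len(chunk)
        return len(chunk)

raw = RemoteFile(bytes(10_000))
buffered = io.BufferedReader(raw, buffer_size=4096)  # same as the default QuickOPC buffer size
for _ in range(10_000):
    buffered.read(1)  # 10 000 one-byte reads by the algorithm...
# ...result in only 3 underlying "service calls" of up to 4096 bytes each
```

Without the buffering layer, the same loop would translate into 10 000 round trips to the server.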
Stream expansion
The .NET stream abstraction allows seeking to a position beyond the current file length; when new data is then written to the stream, the stream is required to expand accordingly. The methods defined in OPC UA file transfer behave differently (per the OPC specifications): with them, seeking to a position beyond the current file length is invalid.
QuickOPC provides correct stream expansion behavior even for streams based on OPC UA files. It detects the situation and automatically writes the necessary padding into the file.
Read/write chunking
QuickOPC can automatically split read and write operations that work with large amounts of data into smaller pieces. This is important when there are limits on the size of messages transferred between the client and the server, or when the server itself cannot process or provide data (byte arrays) over a certain size. Without chunking, perfectly legitimate reads or writes on an OPC UA file could fail for implementation reasons that are outside of the developer's control.
The maximum chunk size is defined separately for reads and for writes. QuickOPC has default values for the maximum chunk sizes, based on experiments and experience. In addition, QuickOPC can adjust the maximum chunk sizes downwards, using the read/write size limits obtained from the server, or based on the actually observed server (communication) behavior. This is described further below.
Note: In a way, read/write chunking is the opposite of stream buffering. The two functionalities do not normally collide, however, because the chunk size is bigger than the buffer size.
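The splitting itself is straightforward and can be sketched generically; the 2048-byte maximum used here is arbitrary for the example and is not the actual QuickOPC default.

```python
def chunked_write(write_call, data, max_chunk_size=2048):
    """Split one large write into several smaller calls, each carrying at
    most max_chunk_size bytes, starting at increasing offsets."""
    for offset in range(0, len(data), max_chunk_size):
        write_call(offset, data[offset:offset + max_chunk_size])

calls = []
chunked_write(lambda offset, chunk: calls.append((offset, len(chunk))),
              bytes(5000))
# 5000 bytes are delivered as three calls: 2048, 2048 and 904 bytes
```

A chunked read works analogously, repeatedly requesting at most the maximum chunk size until the desired amount of data has been received.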
Metadata caching and model changes
(tbd)
Read/write size limits
(tbd)
Adaptive read/write sizes
(tbd)