Integration tools
Accurate progress indication of Data Publishing process
Previously, the progress indication for the Data Publishing initial load was inaccurate because it was based only on the number of tables. While a large table was being published, the progress was not updated.
Now, the progress indication is more accurate: the total number of records in the tables is also taken into account, so progress is also displayed during the publication of large tables.
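As a rough illustration of the difference (the functions and numbers below are a hypothetical sketch, not LN code), table-based progress stalls while one large table is publishing, whereas record-weighted progress keeps advancing:

```python
# Illustrative sketch only: record-weighted progress vs. table-count progress.
# All names and figures are hypothetical; they do not reflect LN internals.

def table_count_progress(tables_done: int, tables_total: int) -> float:
    """Old-style progress: ignores table size, so a large table stalls the indicator."""
    return tables_done / tables_total

def record_weighted_progress(records_published: int, records_total: int) -> float:
    """Record-weighted progress: advances even while a single large table is publishing."""
    return records_published / records_total

# Example: 3 of 4 tables are done, but the remaining table holds 900,000 of 1,000,000 records.
print(f"{table_count_progress(3, 4):.0%}")                    # 75% (misleading)
print(f"{record_weighted_progress(100_000, 1_000_000):.0%}")  # 10% (closer to reality)
```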
Object Configuration Management - New architecture to improve Workflow performance
To enhance Workflow performance, LN Object Configuration Management offers a new way of handling checked-out records. These records can now be handled in two ways:
- Multi-Table Implementation: The checked-out records are stored in shadow tables. This is the original OCM architecture, which remains supported.
- Single-Table Implementation: This is the new architecture. Shadow tables are no longer used; the checked-out records are stored in the same tables as the normal records. This can result in better runtime performance in an OCM-related context, especially in combination with an Oracle database.
You can choose which implementation to use when deploying the OCM Model over a package combination.
The Deployments by Package Combination (ttocm0111m000) session contains a new Implementation field. The Implementation can be changed only for deployments with the status Free or Replaced.
The Implementation determines how data definitions are converted to runtime as part of the Activate process.
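Conceptually, the difference between the two implementations can be pictured as follows. This is a hedged sketch only; the table and column names are hypothetical and do not reflect the actual LN/OCM data model.

```python
# Conceptual illustration only; table and column names are hypothetical,
# not the real LN/OCM schema.
import sqlite3

con = sqlite3.connect(":memory:")

# Multi-Table Implementation: checked-out records live in a separate shadow table,
# so reading data in an OCM-related context means combining two tables.
con.execute("CREATE TABLE item (code TEXT, descr TEXT)")
con.execute("CREATE TABLE item_shadow (code TEXT, descr TEXT, checked_out_by TEXT)")
multi_table_query = """
    SELECT code, descr FROM item
    UNION ALL
    SELECT code, descr FROM item_shadow
"""

# Single-Table Implementation: checked-out records sit in the same table as
# normal records, distinguished by a status column, so a single filtered
# query on one table is enough.
con.execute(
    "CREATE TABLE item_single (code TEXT, descr TEXT, checkout_status TEXT DEFAULT 'none')"
)
single_table_query = "SELECT code, descr FROM item_single WHERE checkout_status = 'none'"

for query in (multi_table_query, single_table_query):
    con.execute(query)  # both run; only the single-table variant avoids the UNION
```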
Data Publishing Management - Initial load through Data Lake Batch Ingestion API
LN Data Publishing Management has adopted a new API for publishing records from the Initial Load: the Data Lake Batch Ingestion API. This API is provided by Data Fabric and makes it possible to send messages directly to Data Lake. If you switch from the current IMS interface to the Batch Ingestion API, ION can be skipped in the process. This provides multiple advantages:
- Faster performance
- No temporary ION bottlenecks caused by thousands of messages from Initial Load
We strongly recommend that you switch to Batch Ingestion. You can seamlessly make this change in the Data Publishing Parameters (ttdpm5130m000) session under Publication Method. No additional configuration is required.
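For context, the direct route is conceptually an authenticated HTTP upload of a batch of records straight to Data Lake, with no ION routing in between. The following sketch is only an illustration of that idea: the base URL, endpoint path, and payload shape are assumptions, not the documented Batch Ingestion API contract, and LN performs the actual call internally once the Publication Method is switched.

```python
# Rough conceptual sketch of a direct batch upload to a data-lake-style API.
# The URL, endpoint path, and payload below are assumptions for illustration;
# consult the Infor Data Fabric documentation for the real Batch Ingestion API.
import json
import requests

API_BASE = "https://example.ionapi.host/TENANT"  # hypothetical base URL
ACCESS_TOKEN = "..."                             # obtained via OAuth2, not shown here

records = [
    {"table": "tccom100", "company": 100, "data": {"bpid": "BP0001"}},
    {"table": "tccom100", "company": 100, "data": {"bpid": "BP0002"}},
]

response = requests.post(
    f"{API_BASE}/DATAFABRIC/batch-ingestion",    # hypothetical endpoint path
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    data=json.dumps(records),
    timeout=60,
)
response.raise_for_status()
```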
If you publish not only to Data Lake but also require LN data in other applications, you can still switch to Batch Ingestion. The usual Data Flow is LN -> ION -> Data Lake + Application. Instead of this Data Flow, you can use an ION Data Flow to retrieve messages from Data Lake and then route them to your application. The new Data Flow is LN -> Data Lake -> ION -> Application. This is now the preferred routing.
If you do not use Data Lake and only send LN messages to other applications, you should keep the current Data Flow using IMS.
Because the Batch Ingestion API currently does not offer a ping endpoint, when Batch Ingestion is the selected Initial Load Publication Method, only the publishing capabilities to Data Catalog and IMS are checked; the publishing capabilities to the Batch Ingestion API are not checked. You can verify whether the Batch Ingestion publication is correctly set up by publishing, for example, one record from one table using the Publish Data (ttdpm5205m000) session.