Streaming data from Data Fabric

Note: Stream Pipelines is an add-on license to Data Fabric. To access the feature, you must obtain the related SKU. Contact your Infor representative for more information.

Stream Pipelines provides real-time streaming data processing and delivery capabilities for operational data access and reporting in cloud-based systems.

Users can model, deploy, and operate push-based data pipelines, with live monitoring and exception handling to ensure reliable and efficient data transfer. Stream Pipelines processes and delivers data events to cloud-based systems, including relational databases such as Aurora PostgreSQL, so that users can act on real-time data insights when making business decisions.

The Data Fabric streaming architecture, consisting of Streaming Ingestion and Stream Pipelines, provides end-to-end continuous data flow between data sources and delivery pipelines, enabling real-time processing. When data events are ingested for an object through the Streaming Ingestion method, Stream Pipelines processes them immediately and continuously, without waiting for the data objects to be stored in Data Lake for durability. This keeps event processing fast and efficient and lets users derive insights from their data in real time.
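Conceptually, the streaming path hands each ingested event directly to the pipeline instead of accumulating events for a durable batch write first. The following minimal Python sketch illustrates that flow only; it is not the Data Fabric API, and the names event_stream and deliver are hypothetical:

```python
import time
from typing import Iterator

def event_stream() -> Iterator[dict]:
    """Hypothetical source that yields data events as they are ingested."""
    for i in range(3):
        yield {"id": i, "ingested_at": time.time()}

def deliver(event: dict) -> None:
    """Hypothetical destination write, e.g. an insert into a target table."""
    print(f"delivered event {event['id']}")

# Streaming path: each event is handed to the pipeline the moment it
# arrives, rather than being stored in a batch first.
for event in event_stream():
    deliver(event)
```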

A data event is a discrete unit of data that represents a change or update, and it can take the form of a record, row, message, or any other type of data structure that conveys information. Data events are often used in real-time streaming systems to represent changes or updates to data sources, and they are processed by streaming pipelines to enable real-time data processing and delivery.
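For illustration, a data event representing an updated record might carry the changed values plus metadata such as the object name, the operation, and a timestamp. The field names below are assumptions made for this example, not a documented Data Fabric event schema:

```python
# Hypothetical data event for an updated customer record. The field
# names are illustrative only, not a documented Data Fabric schema.
data_event = {
    "object": "Customer",
    "operation": "UPDATE",
    "timestamp": "2024-05-01T12:34:56Z",
    "payload": {
        "customer_id": "C-1001",
        "credit_limit": 25000,
    },
}
```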

While stream pipelines are designed to process data events in real time, they can also process data that arrives through batch methods and from Data Lake. This includes data ingested through the Data Fabric Batch Ingestion API or the ION Data Lake Flows. With batch ingestion, however, the data may not be published from the source in real time, which introduces a delay between when a data event occurs and when the stream pipeline processes it.
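That delay can be thought of as the gap between an event's source timestamp and the time the pipeline processes it. A minimal sketch, assuming each event carries an illustrative ISO 8601 timestamp field (not a documented Data Fabric schema):

```python
from datetime import datetime, timezone

def processing_lag_seconds(event: dict) -> float:
    """Seconds between an event's source timestamp and now.

    Assumes an illustrative ISO 8601 'timestamp' field on the event.
    """
    event_time = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - event_time).total_seconds()
```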

For more information about the ingestion methods, see Sending data to Data Lake.

Data Fabric Pipelines consists of these pages:

  • Stream Pipelines

    From the Stream Pipelines page, you manage and operate the stream pipelines to deliver data to your systems (destinations).

  • Destinations

    On the Destinations page, you provide the connection details to your target systems. You can also view the Infor-provisioned destinations to use in Stream Pipelines.

  • Replay Queue

    On the Replay Queue page, you discover, manage, and replay events that a pipeline failed to deliver.

To access the Stream Pipelines feature, open the Data Fabric application in Infor OS, and in the Data Fabric app navigation menu, expand the Pipelines section.