Data flows

Data flows are the most important items in ION Connect.

A data flow is a sequence of activities that send or receive documents. Data flows are event-driven. When a document is published by an application, the next step in the flow is triggered. Each flow starts and ends with one or more connection points. Connection points can be reused in multiple data flows, and the same connection point can be used multiple times in a data flow.

Usually, a data flow represents a business process. For example:

  • An invoicing process where an invoice is created for a delivery that was shipped.
  • A maintenance process where a service order is planned based on a customer repair request.

In a simple data flow, two connection points are involved. One connection point sends documents of a specific type, and the other connection point receives those documents. Data flows can also be more complex.
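
The event-driven behavior of a simple flow can be pictured as publish-subscribe between the two connection points. This Python sketch is purely illustrative: the ConnectionPoint class and the publish trigger are invented for the example; in ION, flows are modeled graphically rather than in code.

  class ConnectionPoint:
      # Hypothetical stand-in for an ION connection point; ION does not
      # expose a class like this. The sketch only shows the event-driven idea.
      def __init__(self, name):
          self.name = name
          self.subscribers = []  # next steps in the flow

      def publish(self, document):
          # Publishing a document is the event that triggers the next step.
          for step in self.subscribers:
              step(document)

  erp = ConnectionPoint("ERP")   # sending connection point
  received = []                  # stands in for the receiving application
  erp.subscribers.append(received.append)

  erp.publish("<SalesOrder>...</SalesOrder>")
  print(received)  # ['<SalesOrder>...</SalesOrder>']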

In addition to connection points, data flows can contain these items:

  • Mappings are used to translate a document to another format. For example, if you retrieve custom sales order documents from your database, you must translate them to standard Infor BODs to load them into an Infor application.
  • Scripts are used to execute custom Python code on incoming documents. For example, scripts can handle common use cases such as data object format conversion, data mapping, and complex calculations and transformations. See the script sketch after this list.
  • Parallel flows are used if a document must be sent to multiple connection points, or if documents from multiple connection points must be sent to a single connection point.
  • Filters are used to limit the number of messages that are delivered to a connection point. For example, if only documents with a specific status are relevant for an application, you can filter out documents with any other status.
  • Content-based routings are used to send a document to a subset of the available destinations, based on the document contents. For example, if three warehouses exist, each with its own warehouse management system, a SalesOrder message can be routed to one of these warehouses based on the warehouse code that is specified in the message. See the filtering and routing sketch after this list.
  • Data Lake activities are used to retrieve data from, or store data in, Data Lake.

The execution of data flows is fully automatic. After a flow is defined and activated, the relevant documents are routed accordingly.
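
To give an idea of what a script can do, this sketch converts a hypothetical custom sales order in JSON into a flat record, including a small calculation over the order lines. It is plain Python with an invented document layout; it does not show ION's actual scripting interface.

  import json

  def convert_order(document):
      # Hypothetical format conversion: the input layout and the
      # output fields are invented for this example.
      order = json.loads(document)
      return {
          "order_id": order["id"],
          "customer": order["customer"]["name"],
          # A simple calculation over the order lines.
          "total": sum(line["qty"] * line["price"] for line in order["lines"]),
      }

  incoming = ('{"id": "SO-100", "customer": {"name": "Acme"}, '
              '"lines": [{"qty": 2, "price": 9.5}, {"qty": 1, "price": 20.0}]}')
  print(convert_order(incoming))
  # {'order_id': 'SO-100', 'customer': 'Acme', 'total': 39.0}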
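
Filters and content-based routings both inspect the document contents. The sketch below drops SalesOrder documents whose status is not Open and routes the rest by warehouse code. The XML shape, the status value, and the warehouse-to-destination table are assumptions made for the example; in ION you configure such conditions in the flow model instead of writing code.

  import xml.etree.ElementTree as ET

  # Hypothetical destinations: one WMS connection point per warehouse.
  DESTINATIONS = {"WH1": "wms-north", "WH2": "wms-south", "WH3": "wms-east"}

  def filter_and_route(document):
      root = ET.fromstring(document)
      # Filter: only documents with a specific status are relevant.
      if root.findtext("Status") != "Open":
          return None  # filtered out
      # Content-based routing: pick the destination from the contents.
      return DESTINATIONS.get(root.findtext("Warehouse"))

  doc = "<SalesOrder><Status>Open</Status><Warehouse>WH2</Warehouse></SalesOrder>"
  print(filter_and_route(doc))  # wms-south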

Validation rules for data flows

These validation rules apply to data flows:

  • Query Data Lake and Retrieve Data Lake must be at the start of the data flow.
  • Ingest Data Lake must be at the end of the data flow.
  • A Merge cannot contain Query, Retrieve, or Ingest Data Lake.
  • You cannot place an Ingest Data Lake immediately after a Filter, Wait, or Routing.
  • You cannot place an Ingest Data Lake after a Splitter passthrough.

In this context, a data flow is a sequence of activities that either ends with sending data into Data Lake or starts with retrieving data from Data Lake.
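
The validation rules are checks on where activities sit in the modeled sequence. This sketch expresses the positional rules for a flow written as an ordered list of activity names; the list representation and the validate function are invented for illustration and are not part of ION.

  DATA_LAKE_STARTS = {"Query Data Lake", "Retrieve Data Lake"}

  def validate(flow):
      # 'flow' is a hypothetical representation: activity names in order.
      errors = []
      for i, activity in enumerate(flow):
          if activity in DATA_LAKE_STARTS and i != 0:
              errors.append(activity + " must be at the start of the flow")
          if activity == "Ingest Data Lake":
              if i != len(flow) - 1:
                  errors.append("Ingest Data Lake must be at the end of the flow")
              elif i > 0 and flow[i - 1] in {"Filter", "Wait", "Routing"}:
                  errors.append("Ingest Data Lake cannot follow a " + flow[i - 1])
      return errors

  print(validate(["Query Data Lake", "Mapping", "Ingest Data Lake"]))  # []
  print(validate(["Mapping", "Filter", "Ingest Data Lake"]))
  # ['Ingest Data Lake cannot follow a Filter']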

In a "simple" data flow, two connection points are involved. One connection point sends a specific type of documents, and the other connection point receives those documents. Data flows can also be more complex.

Some activities cannot be modeled in data flows.

See Creating and using data flows.

Asynchronous activities cannot be modeled in the middle of a data flow. Specifically, these connection points cannot be used mid-flow:

  • Application (IMS)
  • Application (in-box/out-box)
  • LN
  • CRM Business Extension
  • File
  • API (Send, Read)
  • Database (Read, Send, Request/Reply)
  • Message Queue (JMS)

In addition to connection points, a data flow can contain other items such as these:

  • Mappings that are used to translate a document to another format.
  • Scripts that are used for executing custom Python code with a document input.
  • Parallel flows that are used when sending documents from multiple connection points to Data Lake.