Pipeline monitoring concept
To understand your pipelines' performance, we recommend that you monitor your pipelines regularly and replay queued records from the Replay Queue. Knowing how your pipelines behave helps you respond to, for example, errors and delays in data deliveries. We also recommend that you implement logging, monitoring, and alerting capabilities in your destination systems, because pipelines and the Replay Queue currently do not have alerting mechanisms.
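For example, a destination-side check can raise an alert when deliveries stall. The following is a minimal sketch only: the DELIVERED_ORDERS table, its LOAD_TIMESTAMP column (assumed to be stored as ISO 8601 text), and the SQLite connection are hypothetical illustrations. Adapt the connection, schema, and alerting channel to your own destination system.

```python
import sqlite3

def check_recent_deliveries(connection: sqlite3.Connection, minutes: int = 30) -> None:
    """Alert when no records were delivered to the destination within the expected window."""
    # DELIVERED_ORDERS and LOAD_TIMESTAMP are hypothetical names used for illustration.
    row = connection.execute(
        "SELECT COUNT(*) FROM DELIVERED_ORDERS "
        "WHERE LOAD_TIMESTAMP >= datetime('now', ?)",
        (f"-{minutes} minutes",),
    ).fetchone()
    if row[0] == 0:
        # Replace this print with your own alerting channel (email, webhook, and so on).
        print(f"ALERT: no records delivered in the last {minutes} minutes; "
              "check the pipeline status and the Replay Queue.")

if __name__ == "__main__":
    check_recent_deliveries(sqlite3.connect("destination.db"))
```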
Stream Pipelines monitoring graphs show live data and initial load activities. To view the graphs:
- Select .
- Select the pipeline and click the Overview tab.
- Click the Live Data or Initial Load tab in the widget.
Live Data monitoring relates to events that are published from source systems to Data Fabric. Initial Load monitoring relates to events that are extracted from Data Lake when you run the pipeline's Initial Load.
These metrics and graphs are available in Stream Pipelines:
- Processed events
Events that have been ingested into a pipeline and processed by it. Processed events can be delivered or excluded, or they can result in errors.
- Delivered events
Events that have been inserted or updated successfully in the destination. This count also includes events that the destination ignored because a higher variation of the same record already exists in the table, as illustrated in the sketch after this list.
Note: A delay between the processing and delivery of events can indicate potential bottlenecks in the data loading process at the destination.
- Excluded events
Events that have been excluded from delivery by the Upsert loading method. See Stream Pipelines concepts.
- Errors
Events that have been rejected by the destination and failed to load for various reasons. These events are temporarily stored in the Replay Queue.
- Replayed events
Events that have been replayed from the Replay Queue. These events can be delivered successfully or excluded, or they can result in errors again.
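To make the relationship between these buckets concrete, the sketch below is an illustrative model only, not Data Fabric's implementation. It assumes that each event carries a variation number that increases with every change to a record and, purely for illustration, that the Upsert loading method skips delete events; events rejected by the destination, which would count as errors and land in the Replay Queue, are noted in a comment rather than modeled.

```python
from dataclasses import dataclass

@dataclass
class Event:
    record_id: str
    variation: int
    operation: str  # "insert", "update", or "delete"

def classify(event: Event, destination: dict[str, int]) -> str:
    """Return the metric bucket an event falls into in this simplified model."""
    if event.operation == "delete":
        # Filtered out by the loading method in this model: counted as excluded.
        return "excluded"
    current = destination.get(event.record_id)
    if current is not None and current >= event.variation:
        # A higher variation already exists in the table: ignored, but counted as delivered.
        return "delivered (ignored)"
    destination[event.record_id] = event.variation
    # Events rejected by the destination would instead count as errors
    # and be stored temporarily in the Replay Queue for replay.
    return "delivered"

# The out-of-order update with variation 1 is ignored because variation 2
# has already been loaded into the destination table.
table: dict[str, int] = {}
for e in (Event("A1", 2, "update"), Event("A1", 1, "update"), Event("A1", 3, "delete")):
    print(e.variation, classify(e, table))
```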