Batch Ingestion API
This service enables applications to write data objects directly to storage. It allows applications to send larger payloads that would otherwise exceed the throughput limitations of ION.
When using the API, you must include the name of the data object and the logical ID of the source application.
For statistics or troubleshooting, we recommend that you include this information (see the sketch after this list):
- Decompressed size: The size of the file after decompression.
- Instance count: The number of records in the file.
- Source publication date: The date and time when the data object was published by the source application.
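The sketch below shows what such a call might look like using a generic HTTP client. The gateway URL, header names, and record-counting logic are illustrative assumptions, not the documented contract; verify the exact parameter names against the Swagger documentation for the endpoint.

```python
"""Minimal sketch of a Batch Ingestion call.

All endpoint and header names here are assumptions for illustration;
check the Swagger documentation for the actual contract.
"""
from datetime import datetime, timezone
from pathlib import Path

import requests

GATEWAY_URL = "https://api.example.com/DATAFABRIC"  # placeholder base URL


def ingest_data_object(path: Path, data_object_name: str, logical_id: str,
                       token: str, archived: bool = False) -> requests.Response:
    payload = path.read_bytes()
    headers = {
        "Authorization": f"Bearer {token}",
        # Required identification (header names are assumptions):
        "X-Data-Object-Name": data_object_name,
        "X-Logical-Id": logical_id,
        # Recommended statistics/troubleshooting metadata (assumed names):
        "X-Decompressed-Size": str(len(payload)),
        "X-Instance-Count": str(payload.count(b"\n")),  # e.g. newline-delimited records
        "X-Source-Publication-Date": datetime.now(timezone.utc).isoformat(),
    }
    if archived:
        # All records inside an archived object must be archived as well.
        headers["X-Archived"] = "true"
    return requests.post(f"{GATEWAY_URL}/dataobjects",
                         headers=headers, data=payload)
```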
To send archived data to Data Lake, you must set the archived property to true. All records included in an archived object must also be archived.
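Continuing the sketch above, an archived object could be sent by enabling the hypothetical archived flag; the file name and logical ID format shown here are also placeholders.

```python
# Send an archived data object (hypothetical names and logical ID format).
response = ingest_data_object(
    Path("orders_2021.ndjson"),
    data_object_name="Orders",
    logical_id="lid://example.app.1",
    token="...",  # acquired through your API Gateway credentials
    archived=True,
)
response.raise_for_status()
```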
This table shows the quota policy that is enforced per tenant for the Batch Ingestion API:
Method | Calls
---|---
POST /dataobjects | Up to 200 per minute
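Because the quota is enforced per tenant, a client may want to pace its own calls rather than rely on rejected requests. Below is a minimal client-side throttling sketch that stays under 200 calls per minute; the sliding-window approach is an illustration, not a requirement of the API.

```python
import time
from collections import deque


class RateLimiter:
    """Client-side sliding window: at most max_calls per period seconds."""

    def __init__(self, max_calls: int = 200, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls: deque[float] = deque()

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have left the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call falls out of the window.
            sleep_for = self.period - (now - self.calls[0])
            if sleep_for > 0:
                time.sleep(sleep_for)
            self.calls.popleft()
        self.calls.append(time.monotonic())
```

Calling `limiter.wait()` before each `ingest_data_object` call keeps the client within the per-tenant quota.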
For details on how to use the API, see the Swagger documentation for the endpoint in the API Gateway suite for Data Fabric.
For information about the source publication date, how to use API Gateway, and how to interact with the Swagger documentation for the API methods, see the ION documentation.