Artificial Intelligence

The Artificial Intelligence service (Infor Coleman AI) is a complete set of tools for creating, managing, securing, and deploying machine learning models and use cases for the enterprise.
  • Coleman AI

    Infor Coleman AI Platform is a machine-learning platform in which algorithms analyze input data to produce a model that predicts outcomes for a specific business case. As new data is processed through the algorithm, the model learns from it and optimizes its operation, gradually providing more accurate predictions.

    Coleman AI predictive models learn from observations, identify patterns in data, and explore different options and possibilities. The predictive models provide forecasts and predict outcomes, future business tendencies, and behavior. The insights that models provide can advance further into optimization and automation of business processes.

  • Training time

    The machine-learning predictive model is produced in the training section of the machine-learning quest.

    Building the predictive model involves training, testing, and adjusting the model in an iterative process: data preparation steps clean and transform the raw data into a shape the machine-learning algorithm can consume; the data is processed through the algorithm to score and test the predictions; and the model is fine-tuned to achieve the best results. Post-processing steps can then be applied to transform the results into the desired format.
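    The iterative cycle described above (prepare, train, score, fine-tune) can be sketched generically. The functions and the toy model below are illustrative assumptions, not the Coleman AI API:

```python
# Hypothetical sketch of the iterative build cycle: prepare the raw data,
# train a toy model, score it, and fine-tune a parameter for the best result.
# All names here are illustrative, not part of the Coleman AI platform.

def prepare(raw):
    """Data preparation: drop incomplete records and normalize values."""
    clean = [r for r in raw if r is not None]
    top = max(clean)
    return [r / top for r in clean]

def train(data, learning_rate):
    """Toy 'training': move a fitted value toward each observation."""
    model = 0.0
    for x in data:
        model += learning_rate * (x - model)
    return model

def score(model, data):
    """Testing: mean squared error of the fitted value against the data."""
    return sum((x - model) ** 2 for x in data) / len(data)

raw = [4.0, None, 2.0, 3.0, 1.0]
data = prepare(raw)

# Fine-tuning: try several learning rates and keep the best-scoring model.
best_score, best_lr = min((score(train(data, lr), data), lr)
                          for lr in (0.1, 0.3, 0.5))
```

    A real quest would repeat this loop with far richer preparation and algorithms, but the shape of the iteration is the same.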

    Each training quest run is metered as training time, from the execution start to the execution finish.

    The duration of the training time directly depends on the dataset size and the complexity of the model.
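    Conceptually, metered training time is the elapsed wall-clock duration of the run. A generic timing pattern (not Coleman AI's actual metering mechanism) looks like this:

```python
import time

# Illustrative sketch: metered time is measured from execution start to
# execution finish. The loop stands in for a training quest run.
start = time.monotonic()
for _ in range(100_000):
    pass  # placeholder for the training workload
finish = time.monotonic()

metered_seconds = finish - start
```

    A monotonic clock is used because it cannot jump backward, so the measured duration is always non-negative.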

    When the model is ready, it can be moved forward to the production section of the quest and deployed for consumption. In production, the model can be consumed either through batch production or through an API endpoint for real-time inference.

  • Batch time

    Running a batch production quest is commonly applied for use cases that work with larger datasets and do not require results in real time.

    Each batch quest run is metered as batch time, from the execution start to the execution finish.

    The duration of the quest execution directly depends on the dataset size and the complexity of the production quest flow.
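    The relationship between dataset size and batch execution time follows from the batch pattern itself: the dataset is consumed in chunks, so more records mean more chunks to process. A generic sketch, with names that are illustrative rather than part of the Coleman AI batch quest API:

```python
# Generic batch-style processing: consume the dataset in fixed-size chunks.
# Execution time grows with the number of records and the work per chunk.

def batches(records, size):
    """Yield the dataset in fixed-size chunks."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

dataset = list(range(10))
results = []
for chunk in batches(dataset, size=4):
    # Stand-in for scoring a chunk of records through the production model.
    results.extend(x * 2 for x in chunk)
```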

  • Active endpoint

    The production model can be deployed as an active endpoint of a REST API service for use in a real-time production quest. A real-time production quest specifies the final flow of activities and deploys the model as a REST API for real-time inference.
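    Consuming such an endpoint generally means POSTing a JSON payload to the deployed REST API and reading back the prediction. The URL and field names below are hypothetical placeholders, not Coleman AI's actual request contract:

```python
import json
import urllib.request

# Hedged sketch of a real-time inference call. The endpoint URL and the
# payload fields are assumptions for illustration only.
ENDPOINT = "https://example.invalid/inference"  # placeholder, not a real URL

payload = json.dumps({"inputs": [{"feature_a": 12.5, "feature_b": "retail"}]})
request = urllib.request.Request(
    ENDPOINT,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# A real client would then send it and parse the JSON response:
#   with urllib.request.urlopen(request) as resp:
#       prediction = json.load(resp)
```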