Model interpretability for models trained with a custom algorithm
Machine learning models can make accurate predictions, but it is hard to act on those predictions without understanding the logic behind them. Being able to interpret a model increases trust in its output.
Beyond building trust, extracting insights from a model and its predictions is valuable for identifying feature importance, informing feature engineering, directing future data collection, supporting human decision making, and debugging.
The insights can provide answers to questions like:
- What features in the data did the model identify as the most important?
- For any single prediction from a model, how did each feature in the data affect that particular prediction?
- How does each feature affect the model's predictions in a big-picture sense?
To help answer these questions, a model interpretability option is available for models trained with custom algorithms. The model insights file depends entirely on the custom algorithm code: to generate the file and define its contents, you must include the corresponding code in the custom algorithm that you apply during training.
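As a minimal sketch of what such code might look like, the snippet below trains a model, computes a global feature-importance ranking, and writes it to an insights file. The file name (`insights.json`), the JSON structure, and the use of scikit-learn's permutation importance are illustrative assumptions; the actual file location and format are defined by your custom algorithm and your platform's output contract.

```python
# Sketch of a custom training script that also produces a model insights
# file. The output path and JSON structure are assumptions for illustration;
# your platform's custom-algorithm contract defines both.
import json

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Example data; replace with the dataset the platform passes to the algorithm.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)

# Train the model as usual.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Compute a global feature-importance ranking on held-out data.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42
)

insights = {
    "feature_importances": sorted(
        (
            {"feature": name, "importance": float(score)}
            for name, score in zip(data.feature_names, result.importances_mean)
        ),
        key=lambda item: item["importance"],
        reverse=True,
    )
}

# Write the insights file so the platform can expose it as a training output
# ("insights.json" is a hypothetical name).
with open("insights.json", "w") as f:
    json.dump(insights, f, indent=2)
```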
- Prepare and deploy the custom algorithm so that it includes code that generates the insights file, as sketched above.
- Select a quest.
- Drag and drop the Train Model with Custom Algorithm activity box to the canvas.
- Select the custom algorithm from the list.
- Click .
- Click .
- Click the second output port of the Train Model with Custom Algorithm activity and download the output file with the model insights. The contents of the file depend on the code in the custom algorithm.
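What you find in the downloaded file mirrors whatever your training code wrote. Assuming the hypothetical JSON structure from the sketch above, inspecting the file could look like this:

```python
# Inspect a downloaded insights file (assumes the JSON structure from the
# sketch above; adapt the keys to whatever your training code writes).
import json

with open("insights.json") as f:
    insights = json.load(f)

# Print the features the model ranked as most important.
for entry in insights["feature_importances"][:5]:
    print(f"{entry['feature']}: {entry['importance']:.4f}")
```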