Heard the term but don’t know what it means?
In any data science pipeline there are a number of options that must be selected for data processing (e.g. contrast settings for medical images, data imputation approaches, bandpass filter cut-offs for ECG signals). Typically, these options are chosen manually, based on the data scientist's prior experience or on recommendations from previous, similar studies. In contingent AI, every "settable" parameter for data processing, data integration, or feature selection is permuted, and the corresponding effect on the downstream predictive model is measured. This process resembles hyperparameter tuning in machine learning; however, instead of optimizing only the machine learning model, the entire data science pipeline (including model selection) is subject to optimization.
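To make the idea concrete, here is a minimal sketch using scikit-learn (an illustrative choice, not named in the original): the search grid permutes a data-processing option (the imputation strategy) alongside the model itself and its hyperparameters, so the whole pipeline, not just the model, is optimized.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data with missing values, standing in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan  # simulate missingness

pipe = Pipeline([
    ("impute", SimpleImputer()),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# The grid permutes a data-processing choice (imputation strategy)
# AND swaps out the model itself -- pipeline-wide optimization.
param_grid = [
    {"impute__strategy": ["mean", "median"],
     "model": [LogisticRegression(max_iter=1000)],
     "model__C": [0.1, 1.0]},
    {"impute__strategy": ["mean", "median"],
     "model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [50, 100]},
]

search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The winning configuration reports both the best-scoring model and the best imputation strategy, which is exactly the extra information a model-only tuning loop would never surface.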