Contingent AI: What is it?
What is Contingent AI?
In any data science pipeline, a number of options must be selected for data processing (e.g., contrast settings for medical images, data imputation approaches, band-pass filter cut-offs for ECG signals). Typically, these options are chosen manually, based on the data scientist’s prior experience or on recommendations from previous, similar studies. In contingent AI, every “settable” parameter for data processing, data integration, or feature selection is permuted, and its effect on the downstream predictive model is measured. The process is similar to hyperparameter tuning in machine learning; however, instead of optimizing only the machine learning model, the entire data science pipeline (including model selection) is subject to optimization.
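To make the idea concrete, here is a minimal sketch of pipeline-wide optimization using scikit-learn’s Pipeline and GridSearchCV. The step names, candidate models, and parameter values below are illustrative assumptions, not BioSymetrics’ actual implementation; the point is simply that preprocessing choices (here, the imputation strategy and the scaler) sit in the same search space as the model and its hyperparameters.

```python
# A minimal sketch of pipeline-wide optimization.
# Step names and parameter values are illustrative assumptions,
# not BioSymetrics' actual implementation.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([
    ("imputer", SimpleImputer()),     # data-imputation approach (a "settable" choice)
    ("scaler", StandardScaler()),     # preprocessing choice, also searched over
    ("clf", LogisticRegression()),    # placeholder; the model itself is searched too
])

# The grid permutes processing parameters AND the choice of model.
param_grid = [
    {
        "imputer__strategy": ["mean", "median", "most_frequent"],
        "scaler": [StandardScaler(), MinMaxScaler()],
        "clf": [LogisticRegression(max_iter=1000)],
        "clf__C": [0.1, 1.0, 10.0],
    },
    {
        "imputer__strategy": ["mean", "median"],
        "scaler": [StandardScaler(), MinMaxScaler()],
        "clf": [RandomForestClassifier()],
        "clf__n_estimators": [100, 300],
    },
]

search = GridSearchCV(pipe, param_grid, cv=5, scoring="roc_auc")
# search.fit(X, y)  # X, y: your feature matrix and labels
```

Because pipeline steps are themselves searchable parameters here, the same cross-validated search that tunes the model also selects the processing configuration, so every preprocessing choice is validated against the same held-out data as the model itself.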
Essentially, BioSymetrics’ AI can iterate on itself, both recognizing and validating machine learning models “contingent” on the data inputs.
Why is an iterative AI important?
BioSymetrics’ work has shown that processing parameters can have a greater impact on predictive models than the choice of machine learning algorithm. Just as machine learning models are “tuned” to suit specific datasets or target characteristics, we suggest that processing pipelines should be similarly optimized for specific applications. This not only produces more effective models, but also automates what is often the most time-consuming part of any data science workflow.