Any data science pipeline involves a number of options that must be selected for data processing (e.g., contrast settings for medical images, data imputation approaches, band-pass filter cut-offs for ECG signals). Typically, these options are chosen manually, based on the data scientist's previous experience or on recommendations from earlier, similar studies. In Contingent AI, every "settable" parameter for data processing, data integration, or feature selection is permuted, and the corresponding effect on the downstream predictive model is measured. This process is similar to hyperparameter tuning in machine learning; however, instead of optimizing only the machine learning model, the entire data science pipeline (including model selection) is subject to optimization.
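As a minimal sketch of this idea, the scikit-learn snippet below treats a preprocessing choice (the imputation strategy) and the model itself as searchable parameters of one pipeline, rather than fixed decisions. The specific steps, parameter ranges, and candidate models here are illustrative assumptions, not BioSymetrics' actual implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real dataset; a few values are blanked out
# so the imputation step has missing data to fill.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan

# The pipeline exposes preprocessing choices and the model itself as
# steps whose settings can be permuted, not as fixed decisions.
pipe = Pipeline([
    ("imputer", SimpleImputer()),
    ("scaler", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# Each grid permutes a data-processing option (imputation strategy)
# together with a candidate model and its hyperparameters, so the
# search covers the whole pipeline, including model selection.
param_grid = [
    {
        "imputer__strategy": ["mean", "median"],
        "model": [LogisticRegression(max_iter=1000)],
        "model__C": [0.1, 1.0, 10.0],
    },
    {
        "imputer__strategy": ["mean", "median"],
        "model": [RandomForestClassifier(random_state=0)],
        "model__n_estimators": [100, 300],
    },
]

search = GridSearchCV(pipe, param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_)  # best preprocessing + model combination
print(search.best_score_)
```

In practice the same pattern extends to any "settable" step: swapping in different filters, integration methods, or feature selectors as pipeline stages and measuring their downstream effect on model performance.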