Thursday October 28 2:00 PM – Thursday October 28 2:30 PM in Talks I

AIQC: deep learning experiment tracking with multi-dimensional pre/post-processing.

Layne Sadler

Prior knowledge:
No previous knowledge expected

Summary

AIQC began as a framework for deep learning experiment tracking to accelerate open science, but it turns out that tracking is the easy part. In this talk, we'll explore how MLOps is really about data pre/post-processing. For example, how do you use a validation split with heterogeneous, multi-dimensional data on a sliding window that has been 10-fold cross-validated with 4 encoders, and then decode predictions 3 months later? AIQC does that.

Description

Audience

AIQC was initially designed as a high-level API to make deep learning accessible to scientists, but over time it was expanded to meet the needs of expert university practitioners, so everyone should be able to get value from this presentation. The problems we'll discuss are also boiled down to their simplest form.

Problem Space Theory + Solution Demo

We'll explore how to solve the following chronic problems, which are hardcoded into machine learning toolsets, with a live demo of AIQC for preprocessing, experiment tracking, and post-processing:

  • Data Leakage: when aggregate information about the test/holdout data is used to process the training samples. Most encoders are not fit on each split/fold individually, so information about the test data “leaks” into the transformation of the training data itself (illustrated in the first sketch after this list).

  • Evaluation Bias: when a user changes their topology/parameters based on how the model performs against the test/holdout data. Most programs do not use a third validation split, so users are effectively training on their entire dataset when they make those adjustments.

  • Partial Reproducibility: most experiment trackers treat the sample splits/folds as generic, untracked inputs (e.g. X_train, y_train) to the training process. Additionally, although preprocessing can be just as important as hyperparameters (e.g. PowerTransformer vs. StandardScaler), most experiment trackers are blind to how the samples were processed.

  • Multi-dimensional Data: the standard ML toolset only handles 1D or 2D data, so it cannot be applied directly to images or sliding-window time series data (illustrated in the second sketch after this list).
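
To make the first two problems concrete, here is a minimal, generic scikit-learn sketch (not AIQC's API; the data and variable names are made up for illustration) of split-aware preprocessing: a third validation split, encoders fit on the training split only, and the fitted label encoder kept around so that predictions can be decoded later.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Synthetic stand-in data: 300 samples, 4 features, 3 string labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = rng.choice(["setosa", "versicolor", "virginica"], size=300)

# Three-way split: tune topology/parameters against the validation split,
# and touch the test/holdout split only for the final evaluation.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Fit encoders on the training split ONLY, then apply them to every split.
# Fitting them on the whole dataset would leak test statistics into training.
scaler = StandardScaler().fit(X_train)
X_train_t, X_val_t, X_test_t = (scaler.transform(s) for s in (X_train, X_val, X_test))

labeler = LabelEncoder().fit(y_train)
y_train_t, y_val_t, y_test_t = (labeler.transform(s) for s in (y_train, y_val, y_test))

model = LogisticRegression().fit(X_train_t, y_train_t)
print("validation accuracy:", model.score(X_val_t, y_val_t))
print("test accuracy:", model.score(X_test_t, y_test_t))

# Months later, the persisted encoders are what make raw predictions decodable.
print("decoded predictions:", labeler.inverse_transform(model.predict(X_test_t[:5])))
```

Multiplied across folds and encoders, this bookkeeping is exactly what the talk argues a framework should track, rather than leaving it to be rewritten by hand in every notebook.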

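For the multi-dimensional case, here is a short plain-NumPy sketch (again, not AIQC's API; shapes and names are illustrative) of why sliding-window time series data breaks 2D-only tooling: the windowed samples are inherently 3D.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# A multivariate series: 100 timesteps x 3 features -> shape (100, 3).
series = np.random.default_rng(0).normal(size=(100, 3))

# 10-step sliding windows along the time axis.
windows = sliding_window_view(series, window_shape=10, axis=0)  # (91, 3, 10)
windows = windows.transpose(0, 2, 1)                            # (91, 10, 3)

# Sequence models expect this (samples, timesteps, features) 3D shape, but
# 2D-only encoders such as StandardScaler cannot ingest it directly; it has
# to be flattened, transformed per split, and reshaped back.
print(windows.shape)
```
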
These problems are symptoms of the fact that machine learning workflows are emergent oral histories scattered across tutorial cookbooks rather than well-defined protocols. On the one hand, these obscure processes have become second nature to experienced data scientists. On the other hand, when contrasted with rigorous scientific quality control (QC) pipelines, it is easy to see how scientists have come to view machine learning as a black box.