Operationalizing data is the hardest part of deploying machine learning to production. The most common practice today is for teams to build bespoke pipelines that process model-specific features and serve them in production. In this session, Snowflake and Tecton will present their tightly integrated solution, which allows teams to quickly build production-ready features to serve machine learning models.
Production ML pipelines are different from traditional analytics pipelines. They need to process both historical data for training and fresh data for online serving. They must ensure training/serving parity and provide point-in-time correctness. Features must be served online at high scale and low latency to support production workloads. These challenges are difficult to tackle with traditional data orchestration tools and often add weeks or months to the delivery time of new ML projects.
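To make point-in-time correctness concrete, here is a minimal, hypothetical sketch using pandas (not part of the Tecton or Snowflake products): for each labeled event we must join the most recent feature value *at or before* the event's timestamp, never a future value, so training data matches what would have been served online at that moment. The table names and columns below are invented for illustration.

```python
import pandas as pd

# Hypothetical label events (e.g. transactions to score) per user.
labels = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_ts": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-01-10"]),
})

# Hypothetical feature table: rolling 30-day spend, recomputed periodically.
features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "feature_ts": pd.to_datetime(["2024-01-01", "2024-01-15", "2024-01-01"]),
    "spend_30d": [100.0, 250.0, 40.0],
})

# merge_asof picks, for each label row, the latest feature value whose
# timestamp is <= the event timestamp -- a point-in-time correct join.
training = pd.merge_asof(
    labels.sort_values("event_ts"),
    features.sort_values("feature_ts"),
    left_on="event_ts",
    right_on="feature_ts",
    by="user_id",
)
```

The user-1 event on 2024-01-20 picks up the 2024-01-15 feature value (250.0), not any later recomputation, which is exactly the leakage-prevention guarantee a feature store automates at scale.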
Together, Tecton and Snowflake have developed a tightly integrated solution to solve these challenges. The Snowflake Data Cloud provides a central repository of refined analytical data and highly scalable processing resources. The Tecton feature store integrates tightly with Snowflake to provide an operational bridge from Snowflake to ML models. It allows teams to define new features as code, automate the processing of feature values, store historical data for training directly in Snowflake, and serve features online with production-grade service levels. The combination offers the simplest and fastest path to building and serving production-grade features.
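To illustrate the "features as code" idea, here is a minimal sketch. The `FeatureView` class, `register` helper, and registry below are invented for this example and do not reflect Tecton's actual API; the point is only that a feature becomes a declarative, version-controlled definition (join key, SQL transformation, refresh schedule) that tooling can validate and deploy.

```python
from dataclasses import dataclass

# Illustrative only: these names are made up to show the pattern of
# declaring features as code, not Tecton's real interface.
@dataclass(frozen=True)
class FeatureView:
    name: str
    entity_key: str      # join key, e.g. the user ID column
    sql: str             # transformation to run against the warehouse
    batch_schedule: str  # how often feature values are recomputed

registry: dict[str, FeatureView] = {}

def register(fv: FeatureView) -> FeatureView:
    """Adding a feature is just adding code; CI can review and deploy it."""
    registry[fv.name] = fv
    return fv

# A hypothetical rolling-spend feature defined against a Snowflake table.
user_spend_30d = register(FeatureView(
    name="user_spend_30d",
    entity_key="user_id",
    sql="""
        SELECT user_id, SUM(amount) AS spend_30d
        FROM transactions
        WHERE ts >= DATEADD(day, -30, CURRENT_TIMESTAMP())
        GROUP BY user_id
    """,
    batch_schedule="1d",
))
```

Because definitions like this live in a repository rather than in ad-hoc pipelines, the same declaration can drive both backfills for training and scheduled materialization for online serving.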