Productionizing your ML code seamlessly

Nowadays, it's easy to build a model and play with data in a notebook, but hard to bring that code to production. This talk will aim to answer two questions: 1. What does running an ML model in production involve? 2. How can you improve your development workflow to make the path to production easier?

Tags: Artificial Intelligence, Data Science, Machine Learning

Scheduled on Wednesday at 16:35 in room Lounge

Speaker

Lauris Jullien

I have always loved playing with data and ML problems: computer vision at university, robotics later, and now at Yelp!

Description

Data science and machine learning are hot topics right now for software engineers and beyond. There are a lot of Python tools that allow you to hack together a notebook to quickly get insight into your data, or to train a model to predict or classify. Or you might have inherited some data wrangling and modeling Jupyter/Zeppelin notebook code from someone else, like the resident data scientist.

The code works on test data, as long as you run the cells in the right order (skipping cell 22), and you believe that the insight gained from this work would be a valuable game changer. But how do you take this experimental code into production, and keep it up to date with a regular retraining schedule? And what do you need to do after that to ensure it remains reliable and keeps delivering value in the long term?
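As a taste of the kind of workflow improvement the talk discusses: one common first step away from order-dependent notebook cells is to collapse the scattered preprocessing and training steps into a single pipeline object that can be fitted, evaluated, and serialized as one unit. The sketch below is purely illustrative (the dataset and model choice are invented for the example, not taken from the talk):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the data you would load in a notebook cell.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# All transformation and modeling steps live in one object, so there is
# no hidden dependency on running cells in a particular order.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
```

Because the pipeline is a single object, retraining on a schedule becomes "call `fit` on fresh data and persist the result" rather than re-running a notebook by hand.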

This talk will answer these questions, focusing on two main themes: What does running an ML model in production involve? How can you improve your development workflow to make the path to production easier?

The talk will draw on examples from real projects at Yelp, such as migrating a pandas/scikit-learn classification project into production with PySpark, while aiming to give advice that does not depend on specific frameworks or tools and is useful for listeners from all backgrounds.