Machine Learning as a Service: How to deploy ML Models as APIs without going nuts
Deploying a machine learning model as an API is not a trivial task. It requires understanding a web framework and worrying about data validation, authentication, deployment, and so on. Even after the API is ready, figuring out how to use it from the client is an added pain. This talk takes the participants through the journey of building a machine learning model and deploying it as an API.
Tags: Machine Learning, Python, Web
Scheduled on Thursday 10:30 in room Lounge
Speaker
Anand has been crafting beautiful software for a decade and a half. He is now building a data science platform, rorodata, which he recently co-founded. He regularly conducts advanced programming courses through Pipal Academy. He is a co-author of web.py, a micro web framework in Python. He has worked at Strand Life Sciences and the Internet Archive.
Description
Often, the most convenient way to deploy a machine learning model is as an API. This allows the model to be accessed from various programming environments, and it also decouples the development and deployment of the model from its use.
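To make the idea concrete, here is a minimal sketch (standard library only, no web framework) of putting a model's predict() behind an HTTP endpoint. The model here is a stand-in linear function; the endpoint shape and JSON field names are illustrative, not from the talk.

```python
# Minimal sketch: serve a "model" over HTTP with only the standard library.
# predict() is a stand-in; a real service would load a trained model instead.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Stand-in for model.predict(); returns a dummy linear score.
    return sum(features) * 0.5


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the model on it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the demo quiet.
        pass


def serve(port=8000):
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

Even this toy version shows how much plumbing (parsing, headers, error handling) a framework or library has to take off your hands before the API feels trivial.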
However, building a good API is hard. It involves many nitty-gritty details, and many of them need to be repeated every time an API is built. It is also very important to have a client library so that the API can be accessed easily. If you ever plan to use the API from JavaScript directly, you also need to worry about cross-origin resource sharing (CORS) and so on. All of this adds up, and building APIs for machine learning models becomes very tedious.
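The value of a client library can be sketched in a few lines: a thin wrapper that hides the HTTP details so callers just invoke a method. The endpoint path and JSON shape below are hypothetical, chosen only for illustration.

```python
# Hypothetical thin client wrapper: callers use client.predict(...) and never
# see URLs, headers, or JSON encoding. The /predict endpoint and the
# {"features": ...} / {"prediction": ...} payload shapes are illustrative.
import json
import urllib.request


class ModelClient:
    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def predict(self, features):
        data = json.dumps({"features": features}).encode()
        req = urllib.request.Request(
            self.base_url + "/predict",
            data=data,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["prediction"]
```

With a wrapper like this, `ModelClient("http://host:8000").predict([1, 2])` reads like an ordinary function call, which is exactly what a good client library buys you.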
This talk demonstrates how deploying machine learning models as APIs can be made fun by using the right programming abstractions. It presents a couple of open source libraries, firefly and rorolite, which were built for this very purpose.
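The core idea behind firefly is that a plain Python function becomes the API. The sketch below follows firefly's documented usage; the exact commands and client calls in the comments may differ across versions, so treat them as an approximation rather than the talk's own example.

```python
# sqr.py -- a plain Python function that firefly can expose as an HTTP API.
# The commands in the comments follow firefly's documented usage and are a
# sketch; exact invocation may vary by version.


def square(n):
    """Return the square of n."""
    return n * n


# Serve it from the shell:
#   $ firefly sqr.square
# Then call it from Python with firefly's client:
#   import firefly
#   client = firefly.Client("http://127.0.0.1:8000")
#   client.square(n=4)
```

The function itself contains no web code at all; that separation, moving every HTTP concern out of the model code, is the abstraction the talk is about.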