SPIP to improve online serving of Spark MLlib Models
I have filed a JIRA ticket with an SPIP on improving the model load latency
and serving interfaces for MLlib online serving, as discussed with Joseph
Bradley and with Felix Cheung as the SPIP shepherd.
The associated SPIP doc is linked from the ticket.
We have been using the proposed code improvements to serve Spark MLlib
models online in production at Uber, and have found the combination of the
standard Spark model representation with efficient loading and online
serving to work well for