deploy
- datarobotx.deploy(model, *args, target=None, classes=None, name=None, description=None, hooks=None, extra_requirements=None, **kwargs)
Deploy a model to MLOps
If the model object is not a native DataRobot model, deploy() will build and configure an appropriate supporting custom environment and custom inference model within DataRobot as part of the deployment.
- Parameters:
model (Any) – The model object to deploy. See the drx documentation for additional information on supported model types.
*args (Any, optional) – Additional model objects; required for certain model types, e.g. Huggingface tokenizer + pre-trained model
target (str, optional) – Name of the target variable; required for supervised model types
classes (list of str, optional) – Names of the target variable classes; required for supervised classification problems; for binary classification, the first item should be the positive class
name (str, optional) – Name of the MLOps deployment
description (str, optional) – Short description for the MLOps deployment
hooks (dict of callable, optional) – For custom model deployments: additional hooks to include with the deployment; see the DataRobot User Models documentation for details on supported hooks
extra_requirements (list of str, optional) – For custom model deployments: additional Python package names from PyPI to include in the custom environment. By default, the standard dependencies for the model type are included
**kwargs – Additional model-specific keyword arguments
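As a hedged sketch of the hooks mapping, a hook callable can be exercised locally before deploying. The hook name and the score(data, model, **kwargs) signature follow the DataRobot User Models convention; the model logic below is purely illustrative:

```python
import pandas as pd

# Illustrative custom scoring hook in the DataRobot User Models style:
# the runtime calls score(data, model, **kwargs) and expects a DataFrame back.
def score(data, model, **kwargs):
    # Hypothetical stand-in model: predict the row mean of the numeric columns
    return pd.DataFrame({"Predictions": data.mean(axis=1)})

hooks = {"score": score}

# Exercise the hook locally on a small frame before deployment
df = pd.DataFrame({"a": [1.0, 3.0], "b": [3.0, 5.0]})
preds = hooks["score"](df, model=None)
```

Passing hooks={'score': score} to deploy() would then bundle the callable with the custom inference model.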
- Returns:
deployment – Resulting MLOps deployment; returned immediately and updated automatically and asynchronously as the deployment process proceeds
- Return type:
Deployment
Examples
scikit-learn pipeline
>>> import sklearn
>>>
>>> pipe : sklearn.pipeline  # assumes pipe has been defined & fit elsewhere
>>> deployment_1 = deploy(pipe,
...     target='my_target',
...     classes=['my_pos_class', 'my_neg_class'])
scikit-learn pipeline with custom preprocessing hook
>>> import io
>>> import pandas as pd
>>>
>>> df : pd.DataFrame  # assumes training data was previously read elsewhere
>>> my_types = df.dtypes
>>> def force_schema(input_binary_data, *args, **kwargs):
...     buffer = io.BytesIO(input_binary_data)
...     return pd.read_csv(buffer, dtype=dict(my_types))
>>>
>>> deployment_2 = deploy(pipe,
...     target='my_target',
...     classes=['my_pos_class', 'my_neg_class'],
...     hooks={'read_input_data': force_schema})
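The force_schema hook above can be checked locally before deploying. A minimal sketch, assuming illustrative column names and dtypes (the scoring payload arrives as raw CSV bytes, so without the dtype override a column like user_id would be parsed as int64 and lose its leading zeros):

```python
import io
import pandas as pd

# Training-time dtypes we want enforced at scoring time (illustrative)
my_types = pd.Series({"user_id": "object", "amount": "float64"})

def force_schema(input_binary_data, *args, **kwargs):
    buffer = io.BytesIO(input_binary_data)
    return pd.read_csv(buffer, dtype=dict(my_types))

# Simulated scoring payload: raw CSV bytes, as the hook would receive them
payload = b"user_id,amount\n007,12.5\n042,3.0\n"
scored = force_schema(payload)
```

With the hook applied, user_id stays a string column ("007", "042") rather than being coerced to integers.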