API Reference

datarobot_predict.deployment

PredictionResult

Bases: NamedTuple

Prediction result type.

dataframe instance-attribute

dataframe: pd.DataFrame

Result dataframe.

response_headers instance-attribute

response_headers: CaseInsensitiveDict

HTTP response headers.

UnstructuredPredictionResult

Bases: NamedTuple

Unstructured prediction result type.

data instance-attribute

data: Union[bytes, pd.DataFrame]

Raw response or DataFrame.

response_headers instance-attribute

response_headers: CaseInsensitiveDict

HTTP response headers.

predict

predict(deployment, data_frame, max_explanations=0, max_ngram_explanations=None, threshold_high=None, threshold_low=None, time_series_type=TimeSeriesType.FORECAST, forecast_point=None, predictions_start_date=None, predictions_end_date=None, passthrough_columns=None, explanation_algorithm=None, prediction_endpoint=None, timeout=600)

Get predictions using the DataRobot Prediction API.

Parameters:

Name Type Description Default
deployment Union[dr.Deployment, str, None]

DataRobot deployment to use when computing predictions. The deployment can also be specified by deployment ID, or omitted entirely when prediction_endpoint is set, e.g. when using the Portable Prediction Server.

If dr.Deployment, the prediction server and deployment ID are taken from the deployment. If str, the argument is treated as the deployment ID. If None, no deployment ID is used; this is suitable for Portable Prediction Server single-model mode.

required
data_frame pd.DataFrame

Input data.

required
max_explanations Union[int, str]

Number of prediction explanations to compute. If 0 and explanation_algorithm is 'xemp' (the default), prediction explanations are disabled. If 0 and explanation_algorithm is 'shap', all explanations are computed. If "all", all explanations are computed; this is only available for SHAP.

0
max_ngram_explanations Optional[Union[int, str]]

The maximum number of text prediction explanations to supply per row of the dataset. The recommended value is "all"; the default is None.

None
threshold_high Optional[float]

Only compute prediction explanations for predictions above this threshold. If None, the default value will be used.

None
threshold_low Optional[float]

Only compute prediction explanations for predictions below this threshold. If None, the default value will be used.

None
time_series_type TimeSeriesType

Type of time series predictions to compute. If TimeSeriesType.FORECAST, predictions will be computed for a single forecast point specified by forecast_point. If TimeSeriesType.HISTORICAL, predictions will be computed for the range of timestamps specified by predictions_start_date and predictions_end_date.

TimeSeriesType.FORECAST
forecast_point Optional[datetime.datetime]

Forecast point to use for time series forecast point predictions. If None, the forecast point is detected automatically. If not None and time_series_type is not TimeSeriesType.FORECAST, a ValueError is raised.

None
predictions_start_date Optional[datetime.datetime]

Start date in range for historical predictions. Inclusive. If None, predictions will start from the earliest date in the input that has enough history. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None
predictions_end_date Optional[datetime.datetime]

End date in range for historical predictions. Exclusive. If None, predictions will end on the last date in the input. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None
passthrough_columns Union[str, Set[str], None]

Columns from the input dataframe to include in the output. If 'all', all input columns will be included. If None, no columns will be included.

None
explanation_algorithm Optional[str]

Which algorithm will be used to calculate prediction explanations. If None, the default value will be used. Note: if max_explanations is set to 0 or omitted, the response will contain all explanation columns when explanation_algorithm is 'shap', and no explanation columns when explanation_algorithm is 'xemp'.

None
prediction_endpoint Optional[str]

Specific prediction endpoint to use. This overrides any prediction server found in the deployment. If None, the prediction endpoint found in the deployment will be used.

None
timeout int

Request timeout in seconds.

600

Returns:

Type Description
PredictionResult

Prediction result consisting of a dataframe and response headers.
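
Example:

A minimal usage sketch. It assumes a configured DataRobot client (endpoint and API token); the deployment ID and input file are placeholders to replace with your own.

    import datarobot as dr
    import pandas as pd
    from datarobot_predict.deployment import predict

    # Placeholder deployment ID and input file.
    deployment = dr.Deployment.get("5fabd21a5a9d5cf09aaf0183")
    input_df = pd.read_csv("input.csv")

    result = predict(deployment, input_df)
    print(result.dataframe.head())
    print(result.response_headers)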

predict_unstructured

predict_unstructured(deployment, data, content_type=None, accept=None, timeout=600)

Get predictions for an unstructured model deployment.

Parameters:

Name Type Description Default
deployment dr.Deployment

Deployment used to compute predictions.

required
data Any

Data to send to the endpoint. This can be text, bytes or a file-like object; anything that the Python requests library accepts as data can be used. If a pandas.DataFrame is passed, it is converted to CSV, and the response is converted back to a DataFrame if the response content type is text/csv.

required
content_type Optional[str]

The content type for the data. If None, the content type will be inferred from the data.

None
accept Optional[str]

The MIME types supported for the return value. If None, any MIME type is supported.

None
timeout int

Request timeout in seconds.

600

Returns:

Type Description
UnstructuredPredictionResult

Prediction result consisting of raw response content and response headers.
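
Example:

A minimal sketch for an unstructured model deployment, assuming a configured DataRobot client; the deployment ID and JSON payload are placeholders.

    import datarobot as dr
    from datarobot_predict.deployment import predict_unstructured

    # Placeholder deployment ID for an unstructured model deployment.
    deployment = dr.Deployment.get("5fabd21a5a9d5cf09aaf0183")

    result = predict_unstructured(
        deployment,
        data='{"message": "Hello"}',
        content_type="application/json; charset=UTF-8",
    )
    print(result.data)              # raw bytes, or a DataFrame for text/csv responses
    print(result.response_headers)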

datarobot_predict._base_scoring_code

BaseScoringCodeModel

Bases: ABC

class_labels property

class_labels: Optional[Sequence[str]]

Get the class labels for the model.

Returns:

Type Description
Optional[Sequence[str]]

List of class labels if model is a classification model, else None.

date_column property

date_column: Optional[str]

Get the date column for a Time Series model.

Returns:

Type Description
Optional[str]

Name of date column if model has one, else None.

date_format property

date_format: Optional[str]

Get the date format for a Time Series model.

Returns:

Type Description
Optional[str]

Date format with the syntax expected by datetime.strftime(), or None if the model is not a time series model.

feature_derivation_window property

feature_derivation_window: Optional[Tuple[int, int]]

Get the feature derivation window for a Time Series model.

Returns:

Type Description
Optional[Tuple[int, int]]

Feature derivation window as (begin, end) if model has this, else None.

features property

features: Dict[str, type]

Get feature names and types for the model.

Returns:

Type Description
OrderedDict[str, type]

Dictionary mapping feature name to feature type, where feature type is either str or float. The ordering of features is the same as it was during model training.

forecast_window property

forecast_window: Optional[Tuple[int, int]]

Get the forecast window for a Time Series model.

Returns:

Type Description
Optional[Tuple[int, int]]

Forecast window as (begin, end) if model has this, else None.

model_id property

model_id: str

Get the model id.

Returns:

Type Description
str

The model id.

model_info property

model_info: Optional[Dict[str, str]]

Get model metadata.

Returns:

Type Description
Optional[Dict[str, str]]

Dictionary with metadata if the model has any, else None.

model_type property

model_type: ModelType

Get the model type.

Returns:

Type Description
ModelType

One of: ModelType.CLASSIFICATION, ModelType.REGRESSION, ModelType.TIME_SERIES

series_id_column property

series_id_column: Optional[str]

Get the name of the series id column for a Time Series model.

Returns:

Type Description
Optional[str]

Name of the series id column if model has one, else None.

time_step property

time_step: Optional[Tuple[int, str]]

Get the time step for a Time Series model.

Returns:

Type Description
Optional[Tuple[int, str]]

Time step as (quantity, time unit) if model has this, else None. Example: (3, "DAYS")
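
Example:

The properties above are available on the concrete subclasses documented below. A minimal sketch of inspecting model metadata with ScoringCodeModel, where "model.jar" is a placeholder path to a downloaded Scoring Code jar:

    from datarobot_predict.scoring_code import ScoringCodeModel

    model = ScoringCodeModel("model.jar")

    print(model.model_id)
    print(model.model_type)    # e.g. ModelType.REGRESSION
    print(model.features)      # feature name -> str or float, in training order

    # Classification-only and time-series-only metadata are None for other model types.
    print(model.class_labels)
    print(model.date_column, model.date_format)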

ModelType

Bases: enum.Enum

CLASSIFICATION instance-attribute class-attribute

CLASSIFICATION = 'IClassificationPredictor'

Classification predictor

GENERIC instance-attribute class-attribute

GENERIC = 'GenericPredictorImpl'

Generic predictor. Used for testing purposes.

REGRESSION instance-attribute class-attribute

REGRESSION = 'IRegressionPredictor'

Regression predictor

TIME_SERIES instance-attribute class-attribute

TIME_SERIES = 'ITimeSeriesRegressionPredictor'

Time Series predictor

datarobot_predict.scoring_code

ScoringCodeModel

Bases: BaseScoringCodeModel

__init__

__init__(jar_path)

Constructor for ScoringCodeModel.

Parameters:

Name Type Description Default
jar_path str

Path to a Scoring Code jar file.

required

predict

predict(data_frame, max_explanations=0, threshold_high=None, threshold_low=None, time_series_type=TimeSeriesType.FORECAST, forecast_point=None, predictions_start_date=None, predictions_end_date=None, prediction_intervals_length=None, passthrough_columns=None)

Get predictions from Scoring Code model.

Parameters:

Name Type Description Default
data_frame pd.DataFrame

Input data.

required
max_explanations int

Number of prediction explanations to compute. If 0, prediction explanations are disabled.

0
threshold_high Optional[float]

Only compute prediction explanations for predictions above this threshold. If None, the default value will be used.

None
threshold_low Optional[float]

Only compute prediction explanations for predictions below this threshold. If None, the default value will be used.

None
time_series_type TimeSeriesType

Type of time series predictions to compute. If TimeSeriesType.FORECAST, predictions will be computed for a single forecast point specified by forecast_point. If TimeSeriesType.HISTORICAL, predictions will be computed for the range of timestamps specified by predictions_start_date and predictions_end_date.

TimeSeriesType.FORECAST
forecast_point Optional[datetime.datetime]

Forecast point to use for time series forecast point predictions. If None, the forecast point is detected automatically. If not None and time_series_type is not TimeSeriesType.FORECAST, a ValueError is raised.

None
predictions_start_date Optional[datetime.datetime]

Start date in range for historical predictions. Inclusive. If None, predictions will start from the earliest date in the input that has enough history. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None
predictions_end_date Optional[datetime.datetime]

End date in range for historical predictions. Exclusive. If None, predictions will end on the last date in the input. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None
prediction_intervals_length Optional[int]

The percentile to use for the size of prediction intervals. Must be an integer between 0 and 100 (inclusive). If None, prediction intervals will not be computed.

None
passthrough_columns Union[str, Set[str], None]

Columns from the input dataframe to include in the output. If 'all', all input columns will be included. If None, no columns will be included.

None

Returns:

Type Description
pd.DataFrame

Prediction output.
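
Example:

A minimal sketch, where "model.jar" and "input.csv" are placeholder paths for a downloaded Scoring Code jar and the input data:

    import pandas as pd
    from datarobot_predict.scoring_code import ScoringCodeModel

    model = ScoringCodeModel("model.jar")
    input_df = pd.read_csv("input.csv")

    # Plain predictions.
    predictions = model.predict(input_df)

    # Predictions with up to three prediction explanations per row.
    explained = model.predict(input_df, max_explanations=3)

    print(predictions.head())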

cli

cli(model, input_csv, output_csv, forecast_point, predictions_start_date, predictions_end_date, with_explanations, prediction_intervals_length)

Command Line Interface main function.

Parameters:

Name Type Description Default
model str
required
input_csv TextIOWrapper
required
output_csv TextIOWrapper
required
forecast_point Optional[str]
required
predictions_start_date Optional[str]
required
predictions_end_date Optional[str]
required
with_explanations bool
required
prediction_intervals_length int
required

datarobot_predict.spark_scoring_code

SparkScoringCodeModel

Bases: BaseScoringCodeModel

__init__

__init__(jar_path=None, allow_models_in_classpath=False)

Create a new instance of SparkScoringCodeModel.

Parameters:

Name Type Description Default
jar_path Optional[str]

The path to a Scoring Code jar file to load. If None, the Scoring Code jar will be loaded from the classpath.

None

allow_models_in_classpath bool

Having models in the classpath while loading a model from the filesystem using the jar_path argument can lead to unexpected behavior, so this is not allowed by default but can be forced with allow_models_in_classpath. If True, models already present in the classpath will be ignored. If False, a ValueError will be raised if models are detected in the classpath.

False

predict

predict(data_frame, time_series_type=TimeSeriesType.FORECAST, forecast_point=None, predictions_start_date=None, predictions_end_date=None)

Get predictions from the Scoring Code Spark model.

Parameters:

Name Type Description Default
data_frame Union[DataFrame, pd.DataFrame]

Input data.

required
time_series_type TimeSeriesType

Type of time series predictions to compute. If TimeSeriesType.FORECAST, predictions will be computed for a single forecast point specified by forecast_point. If TimeSeriesType.HISTORICAL, predictions will be computed for the range of timestamps specified by predictions_start_date and predictions_end_date.

TimeSeriesType.FORECAST
forecast_point Optional[datetime.datetime]

Forecast point to use for time series forecast point predictions. If None, the forecast point is detected automatically. If not None and time_series_type is not TimeSeriesType.FORECAST, a ValueError is raised.

None
predictions_start_date Optional[datetime.datetime]

Start date in range for historical predictions. Inclusive. If None, predictions will start from the earliest date in the input that has enough history. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None
predictions_end_date Optional[datetime.datetime]

End date in range for historical predictions. Exclusive. If None, predictions will end on the last date in the input. If not None and time_series_type is not TimeSeriesType.HISTORICAL, a ValueError is raised.

None

Returns:

Type Description
pyspark.sql.DataFrame

Prediction output.
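
Example:

A minimal sketch, assuming a working PySpark installation; "model.jar" and "input.csv" are placeholder paths for the Scoring Code jar and the input data.

    import pandas as pd
    from datarobot_predict.spark_scoring_code import SparkScoringCodeModel

    model = SparkScoringCodeModel("model.jar")

    # Input can be a pandas DataFrame or a pyspark.sql.DataFrame.
    input_df = pd.read_csv("input.csv")

    result = model.predict(input_df)  # returns a pyspark.sql.DataFrame
    result.show()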