Changelog
[1.7.1] - 2024-03-07
Fixed
- Precision of floating point numbers could be lost when transferring data from Python
to Java in some situations.
[1.7.0] - 2024-02-27
Added
- Unstructured deployment predictions can now automatically convert to/from pandas
DataFrame if a DataFrame is passed as request data.
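A minimal sketch of the DataFrame round-trip described above, assuming a predict_unstructured helper in datarobot_predict.deployment that accepts a deployment object from the datarobot client; the helper name, its signature, and the result fields are assumptions rather than confirmed API:

```python
import datarobot as dr
import pandas as pd
from datarobot_predict.deployment import predict_unstructured  # assumed helper

deployment = dr.Deployment.get("DEPLOYMENT_ID")  # hypothetical deployment id
df = pd.DataFrame({"feature_a": [1.0, 2.0], "feature_b": ["x", "y"]})

# Passing a DataFrame as request data: since 1.7.0 it is serialized automatically,
# and a tabular response is converted back to a DataFrame.
result = predict_unstructured(deployment, df)
print(result.data)  # assumed field holding the converted response
```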
[1.6.3] - 2024-02-06
Fixed
- Deployment predictions could return the index column from the input DataFrame as a column named Unnamed: 0.
[1.6.2] - 2024-02-05
Fixed
- Versions 1.6.0 and 1.6.1 failed during import on Python 3.10 and later.
[1.6.1] - 2024-02-02
Changed
- Changed some internals used during testing.
- Removed parameters used during testing from ScoringCodeModel.
[1.6.0] - 2023-12-19
Added
- The library version is now available as datarobot_predict.__version__.
- The Py4J gateway in ScoringCodeModel can now be shut down, either by using ScoringCodeModel as a context manager or by calling shutdown() manually.
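A minimal sketch of both additions in this release, assuming the module path datarobot_predict.scoring_code and treating the jar path and input data as placeholders; the predict() call is illustrative rather than a confirmed signature:

```python
import pandas as pd
import datarobot_predict
from datarobot_predict.scoring_code import ScoringCodeModel  # module path assumed

# The package version, exposed since 1.6.0.
print(datarobot_predict.__version__)

df = pd.DataFrame({"feature_a": [1.0, 2.0], "feature_b": ["x", "y"]})

# Using the model as a context manager shuts the Py4J gateway down on exit.
with ScoringCodeModel("model.jar") as model:  # placeholder jar path
    predictions = model.predict(df)

# Alternatively, shut the gateway down explicitly.
model = ScoringCodeModel("model.jar")
try:
    predictions = model.predict(df)
finally:
    model.shutdown()
```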
Fixed
- Py4J exceptions raised from Java Scoring Code could be queried for information after
the Py4J gateway had been shut down.
[1.5.2] - 2023-11-24
Changed
- Internal changes that make it easier to override ScoringCodeModel to customize behavior.
[1.5.1] - 2023-10-20
Fixed
- Scoring Code on Spark would fail on Spark 3.5.
[1.5.0] - 2023-10-05
Changed
- Deployment predict functions now return PredictionResult instead of a plain pd.DataFrame.
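A minimal sketch of the new return type, assuming a predict helper in datarobot_predict.deployment that takes a deployment object from the datarobot client, and assuming PredictionResult exposes the prediction frame and response metadata under the field names shown; all of these names are assumptions rather than confirmed API:

```python
import datarobot as dr
import pandas as pd
from datarobot_predict.deployment import predict  # assumed helper

deployment = dr.Deployment.get("DEPLOYMENT_ID")  # hypothetical deployment id
df = pd.DataFrame({"feature_a": [1.0], "feature_b": ["x"]})

result = predict(deployment, df)
predictions = result.dataframe        # assumed: the DataFrame previously returned directly
headers = result.response_headers     # assumed: metadata from the prediction request
```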
[1.4.0] - 2023-09-25
Added
- Deployment predictions have been added.
Changed
- Moved the TimeSeriesType enum to the datarobot_predict root module.
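The root-module import comes straight from this entry; the enum member and the time_series_type keyword argument in the sketch below are assumptions used for illustration:

```python
import pandas as pd
from datarobot_predict import TimeSeriesType  # root-module import as of this release
from datarobot_predict.scoring_code import ScoringCodeModel

df = pd.DataFrame(
    {
        "date": pd.date_range("2023-01-01", periods=30, freq="D"),
        "sales": range(30),
    }
)

model = ScoringCodeModel("time_series_model.jar")  # placeholder jar path
predictions = model.predict(
    df,
    time_series_type=TimeSeriesType.FORECAST,  # assumed member name
)
```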
[1.3.3] - 2023-09-06
Fixed
- Scoring could fail on Windows because of text encoding issues.
[1.3.2] - 2023-09-06
Fixed
- Time series scoring of a small valid series could be skipped if the number of rows is less than the feature derivation window.
[1.3.1] - 2023-06-21
Fixed
- Line endings when streaming a pandas DataFrame to Java on Windows were incorrect.
[1.3.0] - 2023-06-19
Added
- The Scoring Code PySpark API is now feature complete.
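A minimal sketch of scoring with the PySpark API, assuming the module path shown and a constructor that takes a Scoring Code jar path; treat both as assumptions rather than confirmed API:

```python
from pyspark.sql import SparkSession
from datarobot_predict.spark_scoring_code import SparkScoringCodeModel  # module path assumed

spark = SparkSession.builder.getOrCreate()
input_df = spark.read.csv("input.csv", header=True, inferSchema=True)  # placeholder data

# Score a Spark DataFrame with a Scoring Code jar; a Spark DataFrame comes back.
model = SparkScoringCodeModel("model.jar")  # placeholder jar path
result_df = model.predict(input_df)
result_df.show()
```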
Changed
- The library is now tested on Python 3.7 in CI jobs to make sure that it is supported.
- Py4J dependency relaxed to >=0.10.7, <1.0.
Fixed
- Spark scoring would fail on some versions of Spark with AttributeError: 'SparkSession' object has no attribute '_conf'.
- Instantiation of SparkScoringCodeModel and Spark scoring would fail on Spark 2.x.
- Py4J detection failed on Databricks in some situations.
[1.2.0] - 2023-05-31
Added
- Partial implementation of Scoring Code PySpark API.
Fixed
- The library would fail to run on Python 3.7 because cached_property, which requires Python 3.8 or later, was being used.
[1.1.0] - 2023-04-28
Changed
- Changed Scoring Code backend to improve performance. Scoring is now performed in batches,
utilizing multiple threads.
- The underlying Java to Python bridge was changed from JPype to Py4J. This should provide better stability, as the JVM now runs in an external process.
Fixed
- Scoring of Cross-Series Time Series models would fail.
[1.0.3] - 2023-04-12
Changed
- License changed to Apache 2.
[1.0.2] - 2023-04-12
Changed
- The version required for the click dependency was relaxed to >=7, <9.
[1.0.1] - 2023-02-03
Fixed
- An internal CI job didn't work properly.
[1.0.0] - 2023-02-01
Added