Overview
Request 1124107 accepted
- update to 1.3.2:
* All dataset fetchers now accept `data_home` as any object that
  implements the :class:`os.PathLike` interface, for instance,
  :class:`pathlib.Path` (sketch below).
* Fixes a bug in :class:`decomposition.KernelPCA` by forcing the
  output of the internal :class:`preprocessing.KernelCenterer` to
  be a default array. When the arpack solver is used, it expects
  an array with a `dtype` attribute (sketch below).
* Fixes a bug for metrics using `zero_division=np.nan`
  (e.g. :func:`~metrics.precision_score`) within a parallel loop
  (e.g. :func:`~model_selection.cross_val_score`) where the
  singleton for `np.nan` would differ between sub-processes (sketch below).
* Do not leak data via non-initialized memory in decision tree
pickle files and make the generation of those files
deterministic.
* Ridge models with `solver='sparse_cg'` may have slightly
  different results with scipy>=1.12, because of an underlying
  change in the scipy solver (sketch below).
* The `set_output` API now works correctly with list input (sketch below).
* :class:`calibration.CalibratedClassifierCV` can now handle
models that produce large prediction scores.
- Skip another recalcitrant test on 32-bit architectures.
* We are in the process of introducing a new way to route metadata
  such as sample_weight throughout the codebase, which would
  affect how meta-estimators such as pipeline.Pipeline and
  model_selection.GridSearchCV use metadata.
* Originally hosted in the scikit-learn-contrib repository,
  cluster.HDBSCAN has been adopted into scikit-learn.
* A new category encoding strategy preprocessing.TargetEncoder
  encodes the categories based on a shrunk estimate of the average
  target values for observations belonging to that category (sketch below).
* The classes tree.DecisionTreeClassifier and tree.DecisionTreeRegressor
  now support missing values (sketch below).
- Created by dirkmueller
- In state accepted
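Usage sketches

A minimal sketch of the `data_home`-as-PathLike item above; fetch_california_housing is just one example fetcher and the cache directory name is an assumption, not part of the request:

    from pathlib import Path
    from sklearn.datasets import fetch_california_housing

    # Any os.PathLike now works where a plain str path was previously expected.
    cache_dir = Path.home() / "scikit_learn_data"
    housing = fetch_california_housing(data_home=cache_dir)
    print(housing.data.shape)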
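A sketch of the configuration affected by the KernelPCA fix above: with the arpack eigensolver, the centered kernel produced by the internal KernelCenterer must be a plain ndarray carrying a `dtype`. The toy data and parameters are illustrative assumptions:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    X = np.random.RandomState(0).randn(30, 4)
    # eigen_solver="arpack" is the code path that expects a dtype-carrying array.
    kpca = KernelPCA(n_components=2, kernel="rbf", eigen_solver="arpack",
                     random_state=0)
    X_2d = kpca.fit_transform(X)
    print(X_2d.shape)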
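A sketch of the zero_division=np.nan scenario above, where the scorer runs inside joblib worker processes; the estimator and dataset are arbitrary placeholders:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import make_scorer, precision_score
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(random_state=0)
    # zero_division=np.nan marks undefined precision as NaN instead of 0 or 1.
    scorer = make_scorer(precision_score, zero_division=np.nan)
    # n_jobs=2 forces sub-processes, which is the case the 1.3.2 fix addresses.
    scores = cross_val_score(LogisticRegression(), X, y, scoring=scorer, n_jobs=2)
    print(scores)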
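A sketch of the Ridge configuration the scipy note above refers to; the random data is an assumption:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.RandomState(0)
    X, y = rng.randn(20, 5), rng.randn(20)
    # With scipy >= 1.12 the underlying conjugate-gradient routine changed,
    # so coefficients may differ slightly from runs against older scipy.
    model = Ridge(alpha=1.0, solver="sparse_cg").fit(X, y)
    print(model.coef_)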
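A sketch of the set_output fix above, assuming pandas is installed; StandardScaler is just one transformer supporting the set_output API:

    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler().set_output(transform="pandas")
    # Plain Python lists are accepted as input alongside arrays and DataFrames.
    df = scaler.fit_transform([[0.0, 1.0], [2.0, 3.0]])
    print(type(df))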
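A sketch of the new preprocessing.TargetEncoder mentioned above; the toy categories and target values are assumptions:

    import numpy as np
    from sklearn.preprocessing import TargetEncoder

    X = np.array([["cat"], ["dog"], ["cat"], ["fish"],
                  ["dog"], ["cat"], ["fish"], ["dog"]], dtype=object)
    y = np.array([10.0, 2.0, 12.0, 5.0, 3.0, 11.0, 6.0, 2.5])
    # Each category is replaced by a shrunk estimate of its mean target;
    # fit_transform uses internal cross fitting to limit target leakage.
    enc = TargetEncoder(smooth="auto")
    X_enc = enc.fit_transform(X, y)
    print(X_enc.ravel())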
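A sketch of the missing-value support in the tree classes mentioned above; the toy data is an assumption:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    X = np.array([[0.0], [1.0], [np.nan], [2.0], [3.0], [np.nan]])
    y = np.array([0, 0, 0, 1, 1, 1])
    # During training, samples with np.nan are sent to whichever split child
    # yields the better criterion; prediction follows the learned direction.
    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(clf.predict([[np.nan], [0.5]]))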
Request History
dirkmueller created request
anag+factory set openSUSE:Factory:Staging:E as a staging project
Being evaluated by staging project "openSUSE:Factory:Staging:E"
anag+factory accepted review
Picked "openSUSE:Factory:Staging:E"
factory-auto added opensuse-review-team as a reviewer
Please review sources
factory-auto accepted review
Check script succeeded
licensedigger accepted review
The legal review is preliminarily accepted. The package may require actions later on.
dimstar accepted review
anag+factory accepted review
Staging Project openSUSE:Factory:Staging:E got accepted.
anag+factory approved review
Staging Project openSUSE:Factory:Staging:E got accepted.
anag+factory accepted request
Staging Project openSUSE:Factory:Staging:E got accepted.