mlrl.common.rule_learners module
Author: Michael Rapp (michael.rapp.ml@gmail.com)
Provides base classes for implementing single- or multi-label rule learning algorithms.
- class mlrl.common.rule_learners.RuleLearner(random_state: int | None, feature_format: str | None, label_format: str | None, prediction_format: str | None)
Bases: Learner, NominalAttributeLearner, OrdinalAttributeLearner, IncrementalLearner, ABC
A scikit-learn implementation of a rule learning algorithm for multi-label classification or ranking.
- class IncrementalPredictor(feature_matrix: RowWiseFeatureMatrix, model: RuleModel, max_rules: int, predictor)
Bases: IncrementalPredictor
Allows predictions to be obtained from a RuleLearner incrementally.
- apply_next(step_size: int)
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.apply_next()
- get_num_next() → int
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.get_num_next()
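The incremental-prediction pattern behind this class can be sketched with a small toy stand-in. The class below is hypothetical and not the mlrl implementation (in particular, the assumption that apply_next returns the aggregated prediction over all rules applied so far is not confirmed by these docs); it only illustrates how apply_next(step_size) and get_num_next() interact.

```python
# Hypothetical sketch of incremental prediction; NOT the mlrl implementation.

class ToyIncrementalPredictor:
    """Applies a rule model in chunks, mimicking the IncrementalPredictor API."""

    def __init__(self, rule_scores):
        self.rule_scores = rule_scores  # one score contribution per rule
        self.position = 0               # number of rules applied so far
        self.aggregate = 0.0            # running prediction

    def get_num_next(self):
        # Number of rules that have not been applied yet
        return len(self.rule_scores) - self.position

    def apply_next(self, step_size):
        # Apply the next `step_size` rules and return the updated prediction
        end = min(self.position + step_size, len(self.rule_scores))
        self.aggregate += sum(self.rule_scores[self.position:end])
        self.position = end
        return self.aggregate


predictor = ToyIncrementalPredictor([0.5, -0.25, 1.0, 0.75])

# Consume the model two rules at a time until no rules remain
while predictor.get_num_next() > 0:
    prediction = predictor.apply_next(step_size=2)
```

This stepwise consumption is what allows inspecting how predictions evolve as more rules of a model are taken into account, without re-evaluating the rules already applied.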
- class IncrementalProbabilityPredictor(feature_matrix: RowWiseFeatureMatrix, model: RuleModel, max_rules: int, predictor)
Bases: IncrementalPredictor
Allows probability estimates to be obtained from a RuleLearner incrementally.
- apply_next(step_size: int)
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.apply_next()
- class NativeIncrementalPredictor(feature_matrix: RowWiseFeatureMatrix, incremental_predictor)
Bases: IncrementalPredictor
Allows predictions to be obtained from a RuleLearner incrementally, using its native support for this functionality.
- apply_next(step_size: int)
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.apply_next()
- get_num_next() → int
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.get_num_next()
- class NativeIncrementalProbabilityPredictor(feature_matrix: RowWiseFeatureMatrix, incremental_predictor)
Bases: NativeIncrementalPredictor
Allows probability estimates to be obtained from a RuleLearner incrementally, using its native support for this functionality.
- apply_next(step_size: int)
See mlrl.common.learners.IncrementalLearner.IncrementalPredictor.apply_next()
- set_fit_request(*, x: bool | None | str = '$UNCHANGED$') → RuleLearner
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Parameters:
- x : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the x parameter in fit.
Returns:
- self : object
The updated object.
- set_predict_proba_request(*, x: bool | None | str = '$UNCHANGED$') → RuleLearner
Request metadata passed to the predict_proba method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict_proba.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Parameters:
- x : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the x parameter in predict_proba.
Returns:
- self : object
The updated object.
- set_predict_request(*, x: bool | None | str = '$UNCHANGED$') → RuleLearner
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Parameters:
- x : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the x parameter in predict.
Returns:
- self : object
The updated object.
- class mlrl.common.rule_learners.SparseFormat(value, names=None, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: Enum
Specifies all valid textual representations of sparse matrix formats.
- CSC = 'csc'
- CSR = 'csr'
- class mlrl.common.rule_learners.SparsePolicy(value, names=None, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: Enum
Specifies all valid textual representations of policies to be used for converting matrices into sparse or dense formats.
- AUTO = 'auto'
- FORCE_DENSE = 'dense'
- FORCE_SPARSE = 'sparse'
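Because both classes are plain Enum subclasses with the string values documented above, a textual representation can be mapped to a member via value lookup. The mirror definitions below are for illustration only; in real code, import the enums from mlrl.common.rule_learners.

```python
from enum import Enum

# Local mirrors of the documented enums (illustration only; the real
# definitions live in mlrl.common.rule_learners).

class SparseFormat(Enum):
    CSC = 'csc'
    CSR = 'csr'

class SparsePolicy(Enum):
    AUTO = 'auto'
    FORCE_DENSE = 'dense'
    FORCE_SPARSE = 'sparse'

# Enum lookup by value turns a textual representation into a member
policy = SparsePolicy('sparse')
fmt = SparseFormat('csr')
```

An unknown string, e.g. SparsePolicy('bogus'), raises a ValueError, which is the standard Enum behavior that string-parameter parsing can build on.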
- mlrl.common.rule_learners.convert_into_sklearn_compatible_probabilities(probabilities: ndarray) → ndarray
Converts given probability estimates into a format that is compatible with scikit-learn.
- Parameters:
probabilities – A np.ndarray that stores probability estimates
- Returns:
A np.ndarray that is compatible with scikit-learn
- mlrl.common.rule_learners.create_binary_predictor(learner: RuleLearner, model: RuleModel, label_space_info: LabelSpaceInfo, marginal_probability_calibration_model: MarginalProbabilityCalibrationModel, joint_probability_calibration_model: JointProbabilityCalibrationModel, num_labels: int, feature_matrix: RowWiseFeatureMatrix, sparse: bool)
Creates and returns a predictor for predicting binary labels.
- Parameters:
learner – The learner for which the predictor should be created
model – The model to be used for prediction
label_space_info – Information about the label space that may be used for prediction
marginal_probability_calibration_model – A model for the calibration of marginal probabilities
joint_probability_calibration_model – A model for the calibration of joint probabilities
num_labels – The total number of labels to predict for
feature_matrix – A feature matrix that provides row-wise access to the features of the query examples
sparse – True, if a sparse matrix should be used for storing predictions, False otherwise
- Returns:
The predictor that has been created
- mlrl.common.rule_learners.create_probability_predictor(learner: RuleLearner, model: RuleModel, label_space_info: LabelSpaceInfo, marginal_probability_calibration_model: MarginalProbabilityCalibrationModel, joint_probability_calibration_model: JointProbabilityCalibrationModel, num_labels: int, feature_matrix: RowWiseFeatureMatrix)
Creates and returns a predictor for predicting probability estimates.
- Parameters:
learner – The learner for which the predictor should be created
model – The model to be used for prediction
label_space_info – Information about the label space that may be used for prediction
marginal_probability_calibration_model – A model for the calibration of marginal probabilities
joint_probability_calibration_model – A model for the calibration of joint probabilities
num_labels – The total number of labels to predict for
feature_matrix – A feature matrix that provides row-wise access to the features of the query examples
- Returns:
The predictor that has been created
- mlrl.common.rule_learners.create_score_predictor(learner: RuleLearner, model: RuleModel, label_space_info: LabelSpaceInfo, num_labels: int, feature_matrix: RowWiseFeatureMatrix)
Creates and returns a predictor for predicting regression scores.
- Parameters:
learner – The learner for which the predictor should be created
model – The model to be used for prediction
label_space_info – Information about the label space that may be used for prediction
num_labels – The total number of labels to predict for
feature_matrix – A feature matrix that provides row-wise access to the features of the query examples
- Returns:
The predictor that has been created
- mlrl.common.rule_learners.is_sparse(matrix, sparse_format: SparseFormat, dtype, sparse_values: bool = True) → bool
Returns whether a given matrix is considered sparse or not. A matrix is considered sparse if it is given in a sparse format and is expected to occupy less memory than a dense matrix.
- Parameters:
matrix – A np.ndarray or scipy.sparse.matrix to be checked
sparse_format – The SparseFormat to be used
dtype – The type of the values that should be stored in the matrix
sparse_values – True, if the values must explicitly be stored when using a sparse format, False otherwise
- Returns:
True, if the given matrix is considered sparse, False otherwise
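The "expected to occupy less memory" criterion can be made concrete with a back-of-the-envelope estimate. The functions below are a hypothetical sketch, not the formula mlrl actually uses: they assume a CSR layout (optional value array plus one column index per non-zero entry plus a row-pointer array) with 4-byte values and indices.

```python
# Hypothetical memory estimate for the sparse-vs-dense comparison; the exact
# formula used by mlrl's is_sparse() is not documented here.

def estimated_csr_bytes(num_non_zero, num_rows, value_size=4, index_size=4,
                        store_values=True):
    """Rough size of a CSR matrix in bytes: an optional value array, one
    column index per non-zero entry, and a row-pointer array of length
    num_rows + 1. With store_values=False (cf. sparse_values), only the
    structure is counted."""
    value_bytes = num_non_zero * value_size if store_values else 0
    index_bytes = num_non_zero * index_size
    pointer_bytes = (num_rows + 1) * index_size
    return value_bytes + index_bytes + pointer_bytes


def estimated_dense_bytes(num_rows, num_cols, value_size=4):
    """Size of a dense matrix that stores every entry explicitly."""
    return num_rows * num_cols * value_size


# A 100x50 matrix with only 10 non-zero 4-byte values: CSR is far smaller
csr_bytes = estimated_csr_bytes(10, 100)
dense_bytes = estimated_dense_bytes(100, 50)
```

Note how sparse_values matters: when values need not be stored explicitly (e.g. a binary matrix whose non-zero entries are implicitly 1), the sparse representation becomes cheaper still, so more matrices qualify as sparse.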
- mlrl.common.rule_learners.parse_sparse_policy(parameter_name: str, value: str | None) → SparsePolicy
Parses and returns a parameter value that specifies a SparsePolicy to be used for converting matrices into sparse or dense formats. If the given value is invalid, a ValueError is raised.
- Parameters:
parameter_name – The name of the parameter
value – The value to be parsed or None, if the default value should be used
- Returns:
A SparsePolicy
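The documented behavior (None falls back to a default, invalid strings raise a ValueError) might be sketched as follows. This is a hypothetical re-implementation with a local mirror of the enum, not mlrl's code; in particular, the choice of SparsePolicy.AUTO as the default is an assumption.

```python
from enum import Enum

class SparsePolicy(Enum):  # local mirror of the documented enum
    AUTO = 'auto'
    FORCE_DENSE = 'dense'
    FORCE_SPARSE = 'sparse'


def parse_sparse_policy_sketch(parameter_name, value):
    """Hypothetical sketch of parse_sparse_policy: None falls back to a
    default policy (AUTO is assumed here, not confirmed by the docs), and
    an invalid string raises a ValueError naming the offending parameter."""
    if value is None:
        return SparsePolicy.AUTO
    try:
        return SparsePolicy(value)
    except ValueError:
        valid = [policy.value for policy in SparsePolicy]
        raise ValueError(f'Invalid value given for parameter '
                         f'"{parameter_name}": "{value}". '
                         f'Must be one of {valid}') from None


policy = parse_sparse_policy_sketch('feature_format', 'dense')
```

Including the parameter name in the error message is what makes the parameter_name argument useful: the same parsing routine can serve several string parameters while still producing a pinpointed error.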
- mlrl.common.rule_learners.should_enforce_sparse(matrix, sparse_format: SparseFormat, policy: SparsePolicy, dtype, sparse_values: bool = True) → bool
Returns whether it is preferable to convert a given matrix into a scipy.sparse.csr_matrix or scipy.sparse.csc_matrix, depending on the format of the given matrix and a given SparsePolicy:
- If the given policy is SparsePolicy.AUTO, the matrix will be converted into the given sparse format, if possible and if the sparse matrix is expected to occupy less memory than a dense matrix. To be able to convert the matrix into a sparse format, it must be a scipy.sparse.lil_matrix, scipy.sparse.dok_matrix, scipy.sparse.coo_matrix, scipy.sparse.csr_matrix or scipy.sparse.csc_matrix.
- If the given policy is SparsePolicy.FORCE_SPARSE, the matrix will always be converted into the specified sparse format, if possible.
- If the given policy is SparsePolicy.FORCE_DENSE, the matrix will always be converted into a dense matrix.
- Parameters:
matrix – A np.ndarray or scipy.sparse.matrix to be checked
sparse_format – The SparseFormat to be used
policy – The SparsePolicy to be used
dtype – The type of the values that should be stored in the matrix
sparse_values – True, if the values must explicitly be stored when using a sparse format, False otherwise
- Returns:
True, if it is preferable to convert the matrix into a sparse matrix of the given format, False otherwise
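The three-way policy decision described above can be condensed into a small sketch. The function below is hypothetical; it reduces "the matrix is in a convertible scipy.sparse format" and "the sparse form is expected to occupy less memory" to two boolean inputs rather than inspecting a real matrix, and takes the policy as one of the documented string values.

```python
# Hypothetical sketch of the should_enforce_sparse() decision flow; the real
# function inspects an actual matrix, dtype and SparseFormat instead of
# taking pre-computed booleans.

def should_enforce_sparse_sketch(policy, convertible, saves_memory):
    """policy is one of the documented SparsePolicy values
    ('auto', 'sparse' or 'dense'); convertible says whether the matrix is in
    a format that can be converted to the target sparse format; saves_memory
    says whether the sparse form is expected to occupy less memory."""
    if policy == 'dense':
        return False              # FORCE_DENSE: always use a dense matrix
    if policy == 'sparse':
        return convertible        # FORCE_SPARSE: sparse whenever possible
    # AUTO: sparse only if convertible AND expected to save memory
    return convertible and saves_memory


# A convertible matrix that would not save memory stays dense under AUTO
decision = should_enforce_sparse_sketch('auto', convertible=True,
                                        saves_memory=False)
```

The asymmetry is worth noting: FORCE_SPARSE still yields a dense result when conversion is impossible, whereas FORCE_DENSE is unconditional.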