ibm_aigov_facts_client.utils.support_scope_meta module
- class Prop(name=None, framework_name=None, autolog=None, manual_log=None, version=None, trigger_methods=None, training_metrics=None, post_training_metrics=None, parameters=None, tags=None, artifacts=None, search_estimator=None)
Bases: object
- class FrameworkSupportOptions
Bases: SupportBase
To view supported frameworks, use:
>>> client.FrameworkSupportNames.show()
To see only the supported framework names, use:
>>> client.FrameworkSupportNames.get()
Set of supported frameworks for autologging. Manual logging is not version dependent.
Available Options:

Name        Framework Name   Autolog   Manual Log   Version
scikit      sklearn          Y         Y            0.22.1 <= scikit-learn <= 1.1.2
Tensorflow  tensorflow       Y         Y            2.3.0 <= tensorflow <= 2.9.1
Keras       keras            Y         Y            2.3.0 <= keras <= 2.9.0
PySpark     pyspark          Y         Y            3.0.0 <= pyspark <= 3.4.0
Xgboost     xgboost          Y         Y            1.1.1 <= xgboost <= 1.6.1
LightGBM    lightgbm         Y         Y            2.3.1 <= lightgbm <= 3.3.2
PyTorch     pytorch          Y         Y            1.0.5 <= pytorch-lightning <= 1.7.1
Pycaret     pycaret          Y         N
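The version ranges in the table can be checked programmatically before enabling autologging. The sketch below is illustrative only: the table data is copied from this page, but the helper functions themselves are hypothetical and not part of the client API.

```python
# Hypothetical helper: checks whether an installed framework version falls
# inside the autolog support ranges listed in the table above.

SUPPORTED = {
    # framework name -> (min inclusive, max inclusive), from the table above
    "sklearn":    ("0.22.1", "1.1.2"),
    "tensorflow": ("2.3.0", "2.9.1"),
    "keras":      ("2.3.0", "2.9.0"),
    "pyspark":    ("3.0.0", "3.4.0"),
    "xgboost":    ("1.1.1", "1.6.1"),
    "lightgbm":   ("2.3.1", "3.3.2"),
    "pytorch":    ("1.0.5", "1.7.1"),  # range applies to pytorch-lightning
}

def _ver(v):
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

def autolog_supported(framework, version):
    """Return True if `version` falls inside the supported autolog range."""
    if framework not in SUPPORTED:
        return False  # e.g. pycaret lists no version range for autolog
    lo, hi = SUPPORTED[framework]
    return _ver(lo) <= _ver(version) <= _ver(hi)

print(autolog_supported("sklearn", "1.0.2"))  # inside 0.22.1..1.1.2
print(autolog_supported("xgboost", "1.7.0"))  # above 1.6.1
```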
- class FrameworkSupportSklearn
Bases: SupportBase
Current autolog support scope for scikit-learn.
Available Options:

Framework Name: sklearn

Trigger Methods:
  estimator.fit()
  estimator.fit_predict()
  estimator.fit_transform()

Training Metrics:
  Classifier: precision score, recall score, f1 score, accuracy score.
  If the classifier has a predict_proba method: log loss, roc auc score.
  Regression: mean squared error, root mean squared error, mean absolute error, r2 score.

Parameters:
  estimator.get_params(deep=True)

Tags:
  estimator class name (e.g. "LinearRegression")
  fully qualified estimator class name (e.g. "sklearn.linear_model._base.LinearRegression")

Post Training Metrics:
  Scikit-learn metric APIs: model.score and the metric APIs defined in the
  sklearn.metrics module.
  Note: the metric key format is {metric_name}[-{call_index}]_{dataset_name}.
  For sklearn.metrics APIs, metric_name is the metric function name; for
  model.score, metric_name is {model_class_name}_score. If multiple calls are
  made to the same scikit-learn metric API, each subsequent call appends a
  call_index (starting from 2) to the metric key.

Search Estimators:
  Meta estimators: Pipeline, GridSearchCV, RandomizedSearchCV.
  Logs child runs with metrics for each set of explored parameters, as well as
  parameters for the best model and the best parameters (if available).
- limit = ''
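The post-training metric key format can be illustrated with a small pure-Python sketch. The counter logic below is an assumption for illustration (in particular, counting per metric/dataset pair), not the client's actual implementation.

```python
from collections import defaultdict

# Hypothetical illustration of the metric key format described above:
# {metric_name}[-{call_index}]_{dataset_name}, where repeated calls to the
# same metric API append a call_index starting from 2.

_call_counts = defaultdict(int)

def metric_key(metric_name, dataset_name):
    """Build a metric key; repeated calls get '-2', '-3', ... suffixes."""
    _call_counts[(metric_name, dataset_name)] += 1
    n = _call_counts[(metric_name, dataset_name)]
    if n == 1:
        return f"{metric_name}_{dataset_name}"
    return f"{metric_name}-{n}_{dataset_name}"

print(metric_key("accuracy_score", "eval_df"))  # first call: no index
print(metric_key("accuracy_score", "eval_df"))  # second call: index 2
```

For model.score the same scheme applies, with metric_name replaced by {model_class_name}_score (e.g. LinearRegression_score).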
- class FrameworkSupportSpark
Bases: SupportBase
Current autolog support scope for Spark.
Available Options:

Framework Name: pyspark

Trigger Methods:
  estimator.fit(), except for estimators (featurizers) under pyspark.ml.feature

Training Metrics:
  Not supported

Parameters:
  estimator.params
  If a param value is itself an Estimator, the params of the wrapped estimator
  are also logged; the nested param key is {estimator_uid}.{param_name}.

Tags:
  estimator class name (e.g. "LinearRegression")
  fully qualified estimator class name (e.g. "pyspark.ml.regression.LinearRegression")

Post Training Metrics:
  pyspark ML evaluators used under Evaluator.evaluate.
  The metric key format is {metric_name}[-{call_index}]_{dataset_name}, where
  the metric name comes from Evaluator.getMetricName(). If multiple calls are
  made to the same pyspark ML evaluator metric API, each subsequent call
  appends a call_index (starting from 2) to the metric key.

Search Estimators:
  Meta estimators: Pipeline, CrossValidator, TrainValidationSplit, OneVsRest.
  Logs child runs with metrics for each set of explored parameters, as well as
  parameters for the best model and the best parameters (if available).
- limit = ''
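The nested param key convention ({estimator_uid}.{param_name}) can be sketched in plain Python. The data structures below are hypothetical stand-ins for pyspark estimators, used only to show how nesting flattens into dotted keys.

```python
# Hypothetical sketch of the nested param key convention described above:
# when a param value is itself an estimator, its params are logged under
# "{estimator_uid}.{param_name}". Plain dicts stand in for pyspark objects.

def flatten_params(uid, params, logged=None):
    """Flatten (possibly nested) estimator params into dotted keys."""
    logged = {} if logged is None else logged
    for name, value in params.items():
        if isinstance(value, dict) and "uid" in value:
            # param value is itself an "estimator": recurse under its own uid
            flatten_params(value["uid"], value["params"], logged)
        else:
            logged[f"{uid}.{name}"] = value
    return logged

# A OneVsRest-style wrapper whose "classifier" param is itself an estimator.
ovr = {
    "uid": "OneVsRest_4c9a",
    "params": {
        "classifier": {
            "uid": "LogisticRegression_87b1",
            "params": {"maxIter": 100, "regParam": 0.01},
        },
        "labelCol": "label",
    },
}

print(flatten_params(ovr["uid"], ovr["params"]))
```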
- class FrameworkSupportKeras
Bases: SupportBase
Current autolog support scope for Keras.
Available Options:

Framework Name: keras

Trigger Methods:
  estimator.fit()

Training Metrics:
  Training loss, validation loss, user-specified metrics.
  Metrics related to EarlyStopping callbacks: stopped_epoch, restored_epoch,
  restore_best_weight, last_epoch, etc.

Parameters:
  fit() or fit_generator() params, optimizer name, learning rate, epsilon.
  Params related to EarlyStopping: min_delta, patience, baseline,
  restore_best_weights, etc.
- limit = ''
- class FrameworkSupportTensorflow
Bases: SupportBase
Current autolog support scope for TensorFlow.
Available Options:

Framework Name: tensorflow

Trigger Methods:
  estimator.fit()

Training Metrics:
  Training loss, validation loss, user-specified metrics.
  Metrics related to EarlyStopping callbacks: stopped_epoch, restored_epoch,
  restore_best_weight, last_epoch, etc.
  TensorBoard metrics: average_loss, loss.
  TensorFlow Core: tf.summary.scalar calls.

Parameters:
  fit() or fit_generator() params, optimizer name, learning rate, epsilon.
  Params related to EarlyStopping: min_delta, patience, baseline,
  restore_best_weights, etc.
  TensorBoard params: steps, max_steps.
- limit = ''
- class FrameworkSupportXGB
Bases: SupportBase
Current autolog support scope for XGBoost.
Available Options:

Framework Name: xgboost

Trigger Methods:
  xgboost.train(); the scikit-learn API's fit()

Training Metrics:
  Metrics at each iteration (if evals is specified).
  Metrics at the best iteration (if early_stopping_rounds is specified).

Parameters:
  Params specified in xgboost.train() or fit().
- limit = ''
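The "metrics at the best iteration" behaviour can be illustrated with a stdlib sketch. The selection logic below is an assumption for illustration (the real client records whatever xgboost reports when early stopping fires); the eval log values are made up.

```python
# Hypothetical sketch: given per-iteration eval metrics, the autologger records
# metrics at every iteration and, when early_stopping_rounds is specified,
# additionally the metrics at the best (lowest-error) iteration.

eval_log = [
    {"validation-rmse": 0.91},
    {"validation-rmse": 0.64},
    {"validation-rmse": 0.58},
    {"validation-rmse": 0.61},  # degrades, so an earlier iteration is best
]

# Pick the iteration with the lowest validation error.
best_iteration = min(range(len(eval_log)),
                     key=lambda i: eval_log[i]["validation-rmse"])

print(best_iteration)            # index of the best iteration
print(eval_log[best_iteration])  # metrics recorded for that iteration
```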
- class FrameworkSupportLGBM
Bases: SupportBase
Current autolog support scope for LightGBM.
Available Options:

Framework Name: lightgbm

Trigger Methods:
  lightgbm.train()

Training Metrics:
  Metrics at each iteration (if evals is specified).
  Metrics at the best iteration (if early_stopping_rounds is specified).

Parameters:
  Params specified in lightgbm.train().
- limit = ''
- class FrameworkSupportPyTorch
Bases: SupportBase
Current autolog support scope for PyTorch.
Available Options:

Framework Name: pytorch

Trigger Methods:
  pytorch_lightning.Trainer(), i.e. models that subclass
  pytorch_lightning.LightningModule

Training Metrics:
  Training loss, validation loss, average_test_accuracy, user-defined metrics.
  Metrics related to EarlyStopping callbacks: stopped_epoch, restored_epoch,
  restore_best_weight, last_epoch, etc.

Parameters:
  fit() parameters, optimizer name, learning rate, epsilon.
  Params related to EarlyStopping: min_delta, patience, baseline,
  restore_best_weights, etc.
- limit = ''