External Model

Note

Sample formats can be downloaded from here (txt).

class ExternalModelFactsElements(facts_client: FactsClientAdapter)

Bases: object

save_external_model_asset(model_identifier: str, name: str, description: str = None, model_details: ModelDetails = None, schemas: ExternalModelSchemas = None, training_data_reference: TrainingDataReference = None, deployment_details: DeploymentDetails = None, model_entry_props: ModelEntryProps = None, catalog_id: str = None) → ModelAssetUtilities

Warning

The model_entry_props parameter is deprecated and will be removed in a future release. Use save_external_model_asset().add_tracking_model_usecase() to create or link to a model usecase instead.

Save external model assets in a catalog and (optionally) link them to a model usecase. By default, the external model is saved in the Platform Asset Catalog (PAC); to save it to a different catalog, pass the catalog_id parameter.

Parameters:
  • model_identifier (str) – Identifier specific to ML providers (e.g., Azure ML service: service_id, AWS Sagemaker: model_name)

  • name (str) – Name of the model

  • description (str) – (Optional) description of the model

  • model_details (ModelDetails) – (Optional) Model details. Supported only on CP4D 4.7.0 and later

  • schemas (ExternalModelSchemas) – (Optional) Input and Output schema of the model

  • training_data_reference (TrainingDataReference) – (Optional) Training data schema

  • deployment_details (DeploymentDetails) – (Optional) Model deployment details

  • model_entry_props (ModelEntryProps) – (Optional, deprecated) Properties of the model usecase and the model usecase catalog.

  • catalog_id (str) – (Optional) ID of the catalog in which to save the external model; defaults to the Platform Asset Catalog (PAC)

Return type:

ModelAssetUtilities

If using external models with manual log option, initiate client as:

from ibm_aigov_facts_client import AIGovFactsClient
client = AIGovFactsClient(api_key=API_KEY, experiment_name="external", enable_autolog=False, external_model=True)

If using external models with Autolog, initiate client as:

from ibm_aigov_facts_client import AIGovFactsClient
client = AIGovFactsClient(api_key=API_KEY, experiment_name="external", external_model=True)

If using external models with no tracing, initiate client as:

from ibm_aigov_facts_client import AIGovFactsClient
client = AIGovFactsClient(api_key=API_KEY, external_model=True, disable_tracing=True)

If using Cloud pak for Data:

creds=CloudPakforDataConfig(service_url="<HOST URL>",
                            username="<username>",
                            password="<password>")

client = AIGovFactsClient(experiment_name=<experiment_name>, external_model=True, cloud_pak_for_data_configs=creds)

Payload example by supported external providers:

Azure ML Service:

from ibm_aigov_facts_client.supporting_classes.factsheet_utils import DeploymentDetails,TrainingDataReference,ExternalModelSchemas

external_schemas=ExternalModelSchemas(input=input_schema,output=output_schema)
tdataref=TrainingDataReference(schema=training_ref)
deployment=DeploymentDetails(identifier=<service_url in Azure>,name="deploymentname",deployment_type="online",scoring_endpoint="test/score")

client.external_model_facts.save_external_model_asset(model_identifier=<service_id in Azure>
                                                            ,name=<model_name>
                                                            ,model_details=<model_stub_details>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=tdataref)

client.external_model_facts.save_external_model_asset(model_identifier=<service_id in Azure>
                                                            ,name=<model_name>
                                                            ,model_details=<model_stub_details>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=tdataref
                                                            ,catalog_id=<catalog_id>)  # catalog_id other than the Platform Asset Catalog (PAC)

AWS Sagemaker:

external_schemas=ExternalModelSchemas(input=input_schema,output=output_schema)
tdataref=TrainingDataReference(schema=training_ref)
deployment=DeploymentDetails(identifier=<endpoint_name in Sagemaker>,name="deploymentname",deployment_type="online",scoring_endpoint="test/score")

client.external_model_facts.save_external_model_asset(model_identifier=<model_name in Sagemaker>
                                                            ,name=<model_name>
                                                            ,model_details=<model_stub_details>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=tdataref)


client.external_model_facts.save_external_model_asset(model_identifier=<model_name in Sagemaker>
                                                            ,name=<model_name>
                                                            ,model_details=<model_stub_details>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=tdataref
                                                            ,catalog_id=<catalog_id>)  # catalog_id other than the Platform Asset Catalog (PAC)

NOTE:

If you are using Watson OpenScale to monitor this external model, the evaluation results will automatically become available in the external model.

  • To enable this automatic sync of evaluation results for a Sagemaker model, make sure to use the Sagemaker endpoint name when creating the external model in the notebook.

  • To enable it for an Azure ML model, make sure to use the scoring URL.

Example format: https://southcentralus.modelmanagement.azureml.net/api/subscriptions/{az_subscription_id}/resourceGroups/{az_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{az_workspace_name}/services/{az_service_name}?api-version=2018-03-01-preview
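The scoring URL above follows a fixed pattern, so it can be assembled from its parts. A minimal sketch (all placeholder values below are hypothetical, and the regional host may differ for your workspace):

```python
# Assemble an Azure ML scoring URL from its components.
# All values here are hypothetical placeholders for illustration.
az_subscription_id = "00000000-0000-0000-0000-000000000000"
az_resource_group = "my-resource-group"
az_workspace_name = "my-workspace"
az_service_name = "my-service"

scoring_url = (
    "https://southcentralus.modelmanagement.azureml.net"
    f"/api/subscriptions/{az_subscription_id}"
    f"/resourceGroups/{az_resource_group}"
    "/providers/Microsoft.MachineLearningServices"
    f"/workspaces/{az_workspace_name}"
    f"/services/{az_service_name}"
    "?api-version=2018-03-01-preview"
)
print(scoring_url)
```

Passing this URL as the model_identifier when saving the Azure external model is what allows OpenScale evaluation results to sync automatically.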

Model usecase props example, IBM Cloud and CPD:

>>> from ibm_aigov_facts_client.supporting_classes.factsheet_utils import ModelEntryProps,DeploymentDetails,TrainingDataReference,ExternalModelSchemas

Older way:

For new model usecase:

>>> props=ModelEntryProps(
            model_entry_catalog_id=<catalog_id>,
            model_entry_name=<name>,
            model_entry_desc=<description>
            )

For linking to existing model usecase:

>>> props=ModelEntryProps(
            model_entry_catalog_id=<catalog_id>,
            model_entry_id=<model_entry_id to link>
            )
>>> client.external_model_facts.save_external_model_asset(model_identifier=<model_name in Sagemaker>
                                                                ,name=<model_name>
                                                                ,model_details=<model_stub_details>
                                                                ,deployment_details=deployment
                                                                ,schemas=external_schemas
                                                                ,training_data_reference=tdataref
                                                                ,model_entry_props=props)

Current and recommended way:

external_model=client.external_model_facts.save_external_model_asset(model_identifier=<service_id in Azure>
                                                    ,name=<model_name>
                                                    ,model_details=<model_stub_details>
                                                    ,deployment_details=deployment
                                                    ,schemas=external_schemas
                                                    ,training_data_reference=tdataref)

Create and link to new model usecase:

>>> external_model.add_tracking_model_usecase(model_usecase_name=<entry name>, model_usecase_catalog_id=<catalog id>)

Link to existing model usecase:

>>> external_model.add_tracking_model_usecase(model_usecase_id=<model_usecase_id>, model_usecase_catalog_id=<catalog id>)

To remove model usecase:

>>> external_model.remove_tracking_model_usecase()

unregister_model_entry(asset_id, catalog_id)

Warning

ibm_aigov_facts_client.factsheet.external_modelfacts_utility.ExternalModelFactsElements.unregister_model_entry is deprecated and will be removed in a future release. Use save_external_model_asset().remove_tracking_model_usecase() instead.

Unregister a WKC model usecase.

Parameters:
  • asset_id (str) – WKC model usecase id

  • catalog_id (str) – Catalog ID where asset is stored

Example for IBM Cloud or CPD:

>>> client.external_model_facts.unregister_model_entry(asset_id=<model asset id>,catalog_id=<catalog_id>)

list_model_entries(catalog_id=None) → list

Warning

ibm_aigov_facts_client.factsheet.external_modelfacts_utility.ExternalModelFactsElements.list_model_entries is deprecated and will be removed in a future release. Use client.assets.list_model_usecases() instead.

Returns all WKC Model usecase assets for a catalog

Parameters:

catalog_id (str) – (Optional) Catalog ID to list model usecases from; if None, lists from all catalogs

Returns:

All WKC Model usecase assets for a catalog

Return type:

list

Example:

>>> client.external_model_facts.list_model_entries()
>>> client.external_model_facts.list_model_entries(catalog_id=<catalog_id>)