Guidelines
Begin by initializing the facts client at the top of the notebook, before importing any other modules, so that automatic logging can instrument the frameworks you train with.
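For example, a notebook for a use case tracked in a Watson Studio project might start as in the sketch below; the API key, project ID, and experiment name are placeholders, and the constructor arguments shown are the commonly documented ones rather than an exhaustive list.

```python
# Initialize the facts client first, before importing any ML frameworks,
# so that automatic logging can instrument the training session.
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="<IBM_CLOUD_API_KEY>",            # placeholder credential
    experiment_name="credit-risk-usecase",    # one experiment per use case/notebook
    container_type="project",                 # or "space", depending on where you work
    container_id="<PROJECT_ID>",              # placeholder project GUID
    set_as_current_experiment=True,           # reuse the experiment if it already exists
)

# Only after initialization, import the frameworks you plan to train with.
from sklearn.linear_model import LogisticRegression
```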
When working with external models, import them within your custom training scripts, as illustrated in the sketch below.
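The training script here is hypothetical and uses scikit-learn purely for illustration; the point is that the framework import lives inside the script rather than at the top of the governed notebook.

```python
# train.py -- hypothetical custom training script for an externally managed model.
# The model framework is imported here, inside the script, not in the notebook.
def train():
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)
    return model

if __name__ == "__main__":
    train()
```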
Label each specific use case as an Experiment, and refer to any training session, whether it uses one machine learning framework or several, as a Run. Keep in mind that each experiment can encompass multiple runs.
Use a single experiment for a given use case and notebook. This avoids creating multiple experiments without reason, which can distort lineage tracking and comparison.
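For instance, keeping one experiment per notebook still lets you capture several runs under it. The sketch below assumes autolog is active from the initialization above and uses scikit-learn purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# With autolog active, each fit below is captured as a separate run
# under the same experiment, so the training sessions can be compared
# within a single lineage.
for model in (LogisticRegression(max_iter=200), RandomForestClassifier(n_estimators=50)):
    model.fit(X, y)
```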
Use the provided utility to set custom meta attributes before storing the model. This ensures that facts are displayed alongside stored Watson Machine Learning models and results in proper lineage tracking.
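A sketch of that flow follows; the `export_payload`, `prepare_model_meta`, and `get_current_run_id` calls are the utility names assumed here, and the WML metadata values (model type, software specification) are placeholders to adjust for your environment.

```python
# Assumes `facts_client`, a trained model object named `model`, and an
# authenticated ibm_watson_machine_learning APIClient named `wml_client`
# already exist in the notebook.
run_id = facts_client.runs.get_current_run_id()   # assumed helper for the latest run
facts_client.export_facts.export_payload(run_id)  # push the collected facts for the run

meta_props = {
    wml_client.repository.ModelMetaNames.NAME: "credit-risk-model",
    wml_client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",   # placeholder type
    wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:
        wml_client.software_specifications.get_uid_by_name("runtime-22.2-py3.10"),
}

# Attach the custom meta attributes / facts metadata before storing the model,
# so the stored WML asset carries the facts and the lineage links.
meta_props = facts_client.export_facts.prepare_model_meta(
    wml_client=wml_client, meta_props=meta_props
)
stored_model = wml_client.repository.store_model(model=model, meta_props=meta_props)
```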
For native learners from external providers (e.g., AWS SageMaker's Linear Learner), autolog is not supported. Use the manual log option for these cases.
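A minimal manual-logging sketch is shown below; it assumes the client exposes a `manual_log` interface with start_trace/log_params/log_metrics/end_trace calls, and the parameter and metric values are illustrative.

```python
# Manual logging for a run whose framework autolog cannot instrument
# (e.g., a SageMaker built-in Linear Learner training job).
facts_client.manual_log.start_trace()

facts_client.manual_log.log_params({"predictor_type": "binary_classifier", "epochs": 15})
facts_client.manual_log.log_metrics({"validation:binary_f_beta": 0.87,
                                     "train:objective_loss": 0.31})

facts_client.manual_log.end_trace()
```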
When working with external models, use the same experiment name when initializing the client. This helps track notebook experiments alongside your model asset.
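For example, reusing the notebook's experiment name when the client is initialized for an external model ties the notebook runs to the saved external model asset. The `external_model` flag and `save_external_model_asset` call below are the external-model entry points assumed here, and the identifiers are placeholders.

```python
from ibm_aigov_facts_client import AIGovFactsClient

# Same experiment name as the rest of the notebook, with external-model support enabled.
facts_client = AIGovFactsClient(
    api_key="<IBM_CLOUD_API_KEY>",
    experiment_name="credit-risk-usecase",    # reuse the notebook's experiment name
    external_model=True,                      # assumed flag for external-provider models
    set_as_current_experiment=True,
)

# Register the externally trained model so it is tracked alongside the notebook runs.
facts_client.external_model_facts.save_external_model_asset(
    model_identifier="sagemaker-linear-learner-2024-01-01",  # placeholder provider model id
    name="credit-risk-linear-learner",
    description="Linear Learner model trained in AWS SageMaker",
)
```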