# MLflow

## Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Descriptions | ✅ | Extracts descriptions for MLflow Registered Models and Model Versions. |
| Detect Deleted Entities | ✅ | Optionally enabled via `stateful_ingestion.remove_stale_metadata`. |
| Extract Tags | ✅ | Extracts tags for MLflow Registered Model Stages. |
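For example, stale-entity removal can be switched on with a recipe along the lines of the sketch below. This is illustrative only: stateful ingestion requires a `pipeline_name`, and the pipeline name and tracking URI shown are hypothetical placeholders.

```yaml
# Sketch: remove entities from DataHub that have disappeared from MLflow
# since the previous run. "mlflow_ingestion" is a hypothetical identifier.
pipeline_name: mlflow_ingestion
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"  # hypothetical MLflow server
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
```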
## Concept Mapping

This ingestion source maps the following MLflow concepts to DataHub concepts:
| Source Concept | DataHub Concept | Notes |
|---|---|---|
| Registered Model | MlModelGroup | The name of a Model Group is the same as the Registered Model's name (e.g. `my_mlflow_model`). Registered Models serve as containers for multiple versions of the same model in MLflow. |
| Model Version | MlModel | The name of a Model is `{registered_model_name}{model_name_separator}{model_version}` (e.g. `my_mlflow_model_1` for the Registered Model `my_mlflow_model` and Version 1, `my_mlflow_model_2` for Version 2, etc.). Each Model Version represents a specific iteration of a model with its own artifacts and metadata. |
| Experiment | Container | Each Experiment in MLflow is mapped to a Container in DataHub. Experiments organize related runs and serve as logical groupings for model development iterations, allowing tracking of parameters, metrics, and artifacts. |
| Run | DataProcessInstance | Captures the run's execution details, parameters, metrics, and lineage to a model. |
| Model Stage | Tag | Model Stages indicate the deployment status of each version and map to Tags as follows: Production → `mlflow_production`, Staging → `mlflow_staging`, Archived → `mlflow_archived`, None → `mlflow_none`. |
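To make the naming rules concrete, here is an illustrative summary using a hypothetical registered model named `my_mlflow_model` and the default `model_name_separator`:

```yaml
# Illustrative name mapping only; "my_mlflow_model" is a hypothetical model.
# MLflow Registered Model "my_mlflow_model"  -> MlModelGroup "my_mlflow_model"
# MLflow Model Version 1                     -> MlModel "my_mlflow_model_1"
# MLflow Model Version 2                     -> MlModel "my_mlflow_model_2"
# MLflow Model Stage "Production"            -> Tag "mlflow_production"
```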
## CLI based Ingestion

### Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
```yaml
source:
  type: mlflow
  config:
    # Coordinates
    tracking_uri: tracking_uri

sink:
  # sink configs
```
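For illustration, a fuller recipe that sends metadata to a DataHub instance over REST might look like the sketch below; the MLflow and DataHub endpoints are hypothetical placeholders.

```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"  # hypothetical MLflow tracking server

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"  # hypothetical DataHub endpoint
```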
### Config Details
Note that a `.` is used to denote nested fields in the YAML recipe.
| Field | Description |
|---|---|
| `base_external_url` (string) | Base URL to use when constructing external URLs to MLflow. If not set, `tracking_uri` is used if it's an HTTP URL. If neither is set, external URLs are not generated. |
| `materialize_dataset_inputs` (boolean) | Whether to materialize dataset inputs for each run. Default: `False`. |
| `model_name_separator` (string) | A string which separates a model name from its version (e.g. `model_1` or `model-1`). Default: `_`. |
| `password` (string) | Password for MLflow authentication. |
| `registry_uri` (string) | Registry server URI. If not set, the MLflow default `registry_uri` is used (value of `tracking_uri` or the `MLFLOW_REGISTRY_URI` environment variable). |
| `source_mapping_to_platform` (object) | Mapping of source type to DataHub platform. |
| `tracking_uri` (string) | Tracking server URI. If not set, the MLflow default `tracking_uri` is used (local `mlruns/` directory or the `MLFLOW_TRACKING_URI` environment variable). |
| `username` (string) | Username for MLflow authentication. |
| `env` (string) | The environment that all assets produced by this connector belong to. Default: `PROD`. |
| `stateful_ingestion` (StatefulIngestionConfig) | Stateful Ingestion Config. |
| `stateful_ingestion.enabled` (boolean) | Whether or not to enable stateful ingestion. Default: `True` if a `pipeline_name` is set and either a `datahub-rest` sink or `datahub_api` is specified, otherwise `False`. |
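As a sketch of how several of these fields combine in practice (all values below are hypothetical):

```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"            # hypothetical tracking server
    registry_uri: "http://127.0.0.1:5000"            # hypothetical registry server
    model_name_separator: "-"                        # yields names like my_mlflow_model-1
    base_external_url: "https://mlflow.example.com"  # hypothetical MLflow UI URL
    env: PROD
```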
The JSONSchema for this configuration is inlined below.
```json
{
"title": "MLflowConfig",
"description": "Base configuration class for stateful ingestion for source configs to inherit from.",
"type": "object",
"properties": {
"env": {
"title": "Env",
"description": "The environment that all assets produced by this connector belong to",
"default": "PROD",
"type": "string"
},
"stateful_ingestion": {
"title": "Stateful Ingestion",
"description": "Stateful Ingestion Config",
"allOf": [
{
"$ref": "#/definitions/StatefulIngestionConfig"
}
]
},
"tracking_uri": {
"title": "Tracking Uri",
"description": "Tracking server URI. If not set, an MLflow default tracking_uri is used (local `mlruns/` directory or `MLFLOW_TRACKING_URI` environment variable)",
"type": "string"
},
"registry_uri": {
"title": "Registry Uri",
"description": "Registry server URI. If not set, an MLflow default registry_uri is used (value of tracking_uri or `MLFLOW_REGISTRY_URI` environment variable)",
"type": "string"
},
"model_name_separator": {
"title": "Model Name Separator",
"description": "A string which separates model name from its version (e.g. model_1 or model-1)",
"default": "_",
"type": "string"
},
"base_external_url": {
"title": "Base External Url",
"description": "Base URL to use when constructing external URLs to MLflow. If not set, tracking_uri is used if it's an HTTP URL. If neither is set, external URLs are not generated.",
"type": "string"
},
"materialize_dataset_inputs": {
"title": "Materialize Dataset Inputs",
"description": "Whether to materialize dataset inputs for each run",
"default": false,
"type": "boolean"
},
"source_mapping_to_platform": {
"title": "Source Mapping To Platform",
"description": "Mapping of source type to datahub platform",
"type": "object"
},
"username": {
"title": "Username",
"description": "Username for MLflow authentication",
"type": "string"
},
"password": {
"title": "Password",
"description": "Password for MLflow authentication",
"type": "string"
}
},
"additionalProperties": false,
"definitions": {
"DynamicTypedStateProviderConfig": {
"title": "DynamicTypedStateProviderConfig",
"type": "object",
"properties": {
"type": {
"title": "Type",
"description": "The type of the state provider to use. For DataHub use `datahub`",
"type": "string"
},
"config": {
"title": "Config",
"description": "The configuration required for initializing the state provider. Default: The datahub_api config if set at pipeline level. Otherwise, the default DatahubClientConfig. See the defaults (https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/src/datahub/ingestion/graph/client.py#L19).",
"default": {},
"type": "object"
}
},
"required": [
"type"
],
"additionalProperties": false
},
"StatefulIngestionConfig": {
"title": "StatefulIngestionConfig",
"description": "Basic Stateful Ingestion Specific Configuration for any source.",
"type": "object",
"properties": {
"enabled": {
"title": "Enabled",
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"default": false,
"type": "boolean"
}
},
"additionalProperties": false
}
}
}
```
## Auth Configuration
You can configure the MLflow source to authenticate with the MLflow server using the `username` and `password` configuration options.
```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"
    username: <username>
    password: <password>
```
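To keep credentials out of the recipe file, you can rely on the environment-variable substitution that DataHub recipes support; a sketch with hypothetical variable names:

```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"
    username: "${MLFLOW_USERNAME}"  # hypothetical env var holding the username
    password: "${MLFLOW_PASSWORD}"  # hypothetical env var holding the password
```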
## Dataset Lineage

You can map MLflow run datasets to specific DataHub platforms using the `source_mapping_to_platform` configuration option. This allows you to specify which DataHub platform should be associated with datasets coming from different MLflow dataset sources.
Example:

```yaml
source_mapping_to_platform:
  huggingface: snowflake  # Maps Hugging Face datasets to the Snowflake platform
  http: s3                # Maps HTTP data sources to the S3 platform
```
By default, DataHub will attempt to connect lineage with existing datasets based on the platform and name, but will not create new datasets if they don't exist.
To enable automatic dataset creation and lineage mapping, use the `materialize_dataset_inputs` option:

```yaml
materialize_dataset_inputs: true  # Creates new datasets if they don't exist
```
You can configure these options independently:

```yaml
# Only map to existing datasets
materialize_dataset_inputs: false
source_mapping_to_platform:
  huggingface: snowflake  # Maps Hugging Face datasets to the Snowflake platform
  pytorch: snowflake      # Maps PyTorch datasets to the Snowflake platform
```

```yaml
# Create new datasets and map platforms
materialize_dataset_inputs: true
source_mapping_to_platform:
  huggingface: snowflake
  pytorch: snowflake
```
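Putting these pieces together inside a full source config, a lineage-focused recipe might look like the following sketch (the endpoint and platform mappings are illustrative):

```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"  # hypothetical tracking server
    materialize_dataset_inputs: true       # create datasets that don't yet exist
    source_mapping_to_platform:
      huggingface: snowflake
      pytorch: snowflake
```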
## Code Coordinates

- Class Name: `datahub.ingestion.source.mlflow.MLflowSource`
- Browse on GitHub
## Questions
If you've got any questions on configuring ingestion for MLflow, feel free to ping us on our Slack.