
MLflow

Overview

MLflow is an open-source platform for managing the machine learning lifecycle. Learn more in the official MLflow documentation.

The DataHub integration for MLflow ingests ML entities such as registered models, model versions, experiments, and runs, along with the lineage between them. Depending on the capabilities listed below, it can also capture descriptions, stage tags, and stateful deletion detection.

Concept Mapping

Source Concept → DataHub Concept

Registered Model → MlModelGroup
  The name of a Model Group is the same as the Registered Model's name (e.g. my_mlflow_model). Registered Models serve as containers for multiple versions of the same model in MLflow.

Model Version → MlModel
  The name of a Model is {registered_model_name}{model_name_separator}{model_version} (e.g. my_mlflow_model_1 for the Registered Model my_mlflow_model at Version 1, my_mlflow_model_2 at Version 2, etc.). Each Model Version represents a specific iteration of a model with its own artifacts and metadata.

Experiment → Container
  Each Experiment in MLflow is mapped to a Container in DataHub. Experiments organize related runs and serve as logical groupings for model development iterations, allowing tracking of parameters, metrics, and artifacts.

Run → DataProcessInstance
  Captures the run's execution details, parameters, metrics, and lineage to a model.

Model Stage → Tag
  Model Stages indicate the deployment status of each version and map to generated Tags as follows:
  - Production: mlflow_production
  - Staging: mlflow_staging
  - Archived: mlflow_archived
  - None: mlflow_none
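The naming and tagging scheme above can be sketched in plain Python. These helper names are illustrative only, not part of the connector's API; the connector applies this mapping internally.

```python
# Illustrative sketch of the MLflow -> DataHub naming scheme described above.

STAGE_TO_TAG = {
    "Production": "mlflow_production",
    "Staging": "mlflow_staging",
    "Archived": "mlflow_archived",
    "None": "mlflow_none",
}

def model_group_name(registered_model_name: str) -> str:
    """A Model Group keeps the Registered Model's name unchanged."""
    return registered_model_name

def model_name(registered_model_name: str, version: int, separator: str = "_") -> str:
    """An MlModel name is {registered_model_name}{separator}{model_version}."""
    return f"{registered_model_name}{separator}{version}"

print(model_name("my_mlflow_model", 1))  # my_mlflow_model_1
print(STAGE_TO_TAG["Production"])        # mlflow_production
```

Note that the separator defaults to `_`, matching the `model_name_separator` config option documented below.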

Module mlflow

Incubating

Important Capabilities

Capability → Notes

- Asset Containers: Extract ML experiments (supported type: ML Experiment).
- Descriptions: Extract descriptions for MLflow Registered Models and Model Versions.
- Detect Deleted Entities: Enabled by default via stateful ingestion.
- Extract Tags: Extract tags for MLflow Registered Model Stages.

Overview

The mlflow module ingests metadata from MLflow into DataHub. It is intended for production ingestion workflows; module-specific capabilities are documented below.

Prerequisites

Before running ingestion, ensure network connectivity to the source, valid authentication credentials, and read permissions for metadata APIs required by this module.

Install the Plugin

pip install 'acryl-datahub[mlflow]'

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: mlflow
  config:
    # Coordinates
    tracking_uri: tracking_uri

sink:
  # sink configs

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

Field | Description
base_external_url
One of string, null
Base URL to use when constructing external URLs to MLflow. If not set, tracking_uri is used if it's an HTTP URL. If neither is set, external URLs are not generated.
Default: None
materialize_dataset_inputs
One of boolean, null
Whether to materialize dataset inputs for each run
Default: False
model_name_separator
string
A string which separates model name from its version (e.g. model_1 or model-1)
Default: _
password
One of string(password), null
Password for MLflow authentication
Default: None
registry_uri
One of string, null
Registry server URI. If not set, an MLflow default registry_uri is used (value of tracking_uri or MLFLOW_REGISTRY_URI environment variable)
Default: None
source_mapping_to_platform
One of object, null
Mapping of source type to datahub platform
Default: None
tracking_uri
One of string, null
Tracking server URI. If not set, an MLflow default tracking_uri is used (local mlruns/ directory or MLFLOW_TRACKING_URI environment variable)
Default: None
username
One of string, null
Username for MLflow authentication
Default: None
env
string
The environment that all assets produced by this connector belong to
Default: PROD
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes, and the state from being committed, after an accidental change to the source configuration: if the relative change (in percent) in entities compared to the previous state is above the fail_safe_threshold, the commit is aborted.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
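To make the fail_safe_threshold behavior concrete, here is a minimal sketch of the check it describes. The function name and exact formula are assumptions for illustration; the real logic lives in DataHub's stateful ingestion code.

```python
def exceeds_fail_safe(previous_count: int, current_count: int,
                      threshold_pct: float = 75.0) -> bool:
    """Return True when the relative change in entity count exceeds the
    threshold, in which case stateful ingestion should refuse to commit
    soft deletes (hypothetical helper, for illustration only)."""
    if previous_count == 0:
        return False  # nothing to compare against on a first run
    change_pct = abs(previous_count - current_count) / previous_count * 100
    return change_pct > threshold_pct

# A run that suddenly sees 10 entities instead of 100 is a 90% change,
# which trips the default 75% threshold:
print(exceeds_fail_safe(100, 10))  # True
print(exceeds_fail_safe(100, 90))  # False
```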

Capabilities

Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.

Version Compatibility

This connector requires MLflow server version 1.28.0 or later.
If you're using an earlier version, ingestion of Experiments and Runs will be skipped.

Auth Configuration

You can configure the MLflow source to authenticate with the MLflow server using the username and password configuration options.

source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"
    username: <username>
    password: <password>
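To avoid hard-coding credentials in the recipe file, DataHub recipes support environment-variable expansion with the `${VAR}` syntax, so the same recipe can reference variables instead (the variable names here are examples):

```yaml
source:
  type: mlflow
  config:
    tracking_uri: "http://127.0.0.1:5000"
    username: ${MLFLOW_USERNAME}
    password: ${MLFLOW_PASSWORD}
```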

Dataset Lineage

You can map MLflow run datasets to specific DataHub platforms using the source_mapping_to_platform configuration option. This allows you to specify which DataHub platform should be associated with datasets from different MLflow engines.

Example:

source_mapping_to_platform:
  huggingface: snowflake # Maps Hugging Face datasets to the Snowflake platform
  http: s3 # Maps HTTP data sources to the S3 platform

Default behavior: Links to existing datasets by platform and name; does not create new datasets.

To create datasets automatically, enable materialize_dataset_inputs:

materialize_dataset_inputs: true # Creates new datasets if they don't exist

You can configure these options independently:

# Only map to existing datasets
materialize_dataset_inputs: false
source_mapping_to_platform:
  huggingface: snowflake # Maps Hugging Face datasets to the Snowflake platform
  pytorch: snowflake # Maps PyTorch datasets to the Snowflake platform

# Create new datasets and map platforms
materialize_dataset_inputs: true
source_mapping_to_platform:
  huggingface: snowflake
  pytorch: snowflake
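Conceptually, the mapping is a dictionary lookup from an MLflow dataset's source type to a DataHub platform name. The sketch below illustrates that resolution; the function name is hypothetical, and the exact behavior for unmapped source types is not specified here (this sketch simply returns None, matching the default of not creating or linking a dataset):

```python
from typing import Optional

def resolve_platform(source_type: str,
                     source_mapping_to_platform: dict) -> Optional[str]:
    """Hypothetical helper: return the DataHub platform configured for an
    MLflow dataset source type, or None when no mapping is configured."""
    return source_mapping_to_platform.get(source_type)

mapping = {"huggingface": "snowflake", "pytorch": "snowflake"}
print(resolve_platform("huggingface", mapping))  # snowflake
print(resolve_platform("delta", mapping))        # None
```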

Limitations

Module behavior is constrained by source APIs, permissions, and metadata exposed by the platform. Refer to capability notes for unsupported or conditional features.

Troubleshooting

If ingestion fails, validate credentials, permissions, connectivity, and scope filters first. Then review ingestion logs for source-specific errors and adjust configuration accordingly.

Code Coordinates

  • Class Name: datahub.ingestion.source.mlflow.MLflowSource
  • Browse on GitHub
Questions?

If you've got any questions on configuring ingestion for MLflow, feel free to ping us on our Slack.

💡 Contributing to this documentation

This page is auto-generated from the underlying source code. To make changes, please edit the relevant source files in the metadata-ingestion directory.

Tip: For quick typo fixes or documentation updates, you can click the ✏️ Edit icon directly in the GitHub UI to open a Pull Request. For larger changes and PR naming conventions, please refer to our Contributing Guide.