SnapLogic
Overview
SnapLogic is an integration platform as a service (iPaaS) for connecting applications and data sources, both streaming and batch. Learn more in the official SnapLogic documentation.
The DataHub integration for SnapLogic ingests pipelines and snaps together with the datasets they read and write, and extracts table-level and column-level lineage from the SnapLogic Lineage API. Module capabilities, including stateful ingestion options, are documented below.
Concept Mapping
| Source Concept | DataHub Concept | Notes |
|---|---|---|
| Snap-pack | Data Platform | Snap-packs are mapped to Data Platforms, either directly (e.g., Snowflake) or dynamically based on connection details (e.g., JDBC URL). |
| Table/Dataset | Dataset | Depends on the snap type: a table for SQL databases, a topic for Kafka, and so on. |
| Snap | Data Job | |
| Pipeline | Data Flow | |
Module snaplogic
Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Column-level Lineage | ✅ | Enabled by default. |
| Detect Deleted Entities | ❌ | Not supported yet. |
| Platform Instance | ❌ | SnapLogic does not support platform instances. |
| Table-Level Lineage | ✅ | Enabled by default. |
Overview
The snaplogic module ingests metadata from SnapLogic into DataHub and is intended for production ingestion workflows. It extracts data lineage from the SnapLogic Lineage API to track data transformations and dependencies across SnapLogic pipelines; module-specific capabilities are documented below.
Prerequisites
Before running ingestion, ensure network connectivity to your SnapLogic instance and valid SnapLogic credentials with read access to the SnapLogic Lineage API.
Install the Plugin
pip install 'acryl-datahub[snaplogic]'
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
pipeline_name: "snaplogic_incremental_ingestion"
source:
  type: snaplogic
  config:
    username: example@snaplogic.com
    password: password
    base_url: https://elastic.snaplogic.com
    org_name: "ExampleOrg"
    namespace_mapping:
      snowflake://snaplogic: snaplogic
    case_insensitive_namespaces:
      - snowflake://snaplogic
    stateful_ingestion:
      enabled: True
      remove_stale_metadata: False
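The starter recipe above configures only the source. To write the ingested metadata into DataHub, a sink section is typically added to the same recipe; the snippet below is an illustrative example, and the server URL and token are placeholders you must replace with your own values.

```yaml
# Illustrative sink configuration (server and token are placeholders).
sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
    token: "<your-datahub-access-token>"
```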
Config Details
- Options
- Schema
Note that a . is used to denote nested fields in the YAML recipe.
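For example, a dotted field such as stateful_ingestion.enabled in the table below corresponds to the following nesting in a recipe:

```yaml
# `stateful_ingestion.enabled` in the table refers to this nested structure:
source:
  type: snaplogic
  config:
    stateful_ingestion:
      enabled: true
```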
| Field | Description |
|---|---|
| org_name ✅ string | Organization name from SnapLogic instance |
| password ✅ string(password) | Password |
| username ✅ string | Username |
| base_url string | URL of your SnapLogic instance, e.g. https://elastic.snaplogic.com. Used for making API calls to SnapLogic. Default: https://elastic.snaplogic.com |
| bucket_duration Enum | Size of the time window to aggregate usage stats. One of: "DAY", "HOUR" Default: DAY |
| create_non_snaplogic_datasets boolean | Whether to create datasets for non-SnapLogic datasets (e.g., databases, S3, etc.) Default: False |
| enable_stateful_lineage_ingestion boolean | Enable stateful lineage ingestion. Stores the lineage window timestamps after successful lineage ingestion and skips lineage ingestion for the same timestamps in subsequent runs. NOTE: This only works with use_queries_v2=False (legacy extraction path). For queries v2, use enable_stateful_time_window instead. Default: True |
| enable_stateful_usage_ingestion boolean | Enable stateful usage ingestion. Stores the usage window timestamps after successful usage ingestion and skips usage ingestion for the same timestamps in subsequent runs. NOTE: This only works with use_queries_v2=False (legacy extraction path). For queries v2, use enable_stateful_time_window instead. Default: True |
| end_time string(date-time) | Latest date of lineage/usage to consider. Default: Current time in UTC |
| namespace_mapping object | Mapping of namespaces to platform instances Default: {} |
| platform string | Default: SnapLogic |
| start_time string(date-time) | Earliest date of lineage/usage to consider. You can also specify a relative time with respect to end_time, such as '-7 days' or '-7d'. Default: Last full day in UTC (or hour, depending on bucket_duration) |
| case_insensitive_namespaces array | List of namespaces that should be treated as case insensitive Default: [] |
| case_insensitive_namespaces.object object | |
| stateful_ingestion One of StatefulStaleMetadataRemovalConfig, null | Default: None |
| stateful_ingestion.enabled boolean | Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False Default: False |
| stateful_ingestion.fail_safe_threshold number | Prevents a large number of soft deletes and blocks the state from committing when accidental changes to the source configuration make the relative change percentage in entities, compared to the previous state, exceed the fail_safe_threshold. Default: 75.0 |
| stateful_ingestion.remove_stale_metadata boolean | Soft-deletes entities that were present in the last successful run but are missing in the current run, when stateful_ingestion is enabled. Default: True |
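The interaction between namespace_mapping and case_insensitive_namespaces can be illustrated with a small sketch. This is not the actual SnapLogic source implementation, only a plausible model of the documented semantics: namespaces listed in case_insensitive_namespaces are matched ignoring case, while all others must match exactly.

```python
def resolve_platform_instance(namespace, namespace_mapping, case_insensitive_namespaces):
    """Map a dataset namespace to a platform instance (illustrative sketch only)."""
    # An exact match in the mapping takes precedence.
    if namespace in namespace_mapping:
        return namespace_mapping[namespace]
    # Fall back to case-insensitive matching only for namespaces listed as such.
    insensitive = {ns.lower() for ns in case_insensitive_namespaces}
    if namespace.lower() in insensitive:
        for key, instance in namespace_mapping.items():
            if key.lower() == namespace.lower():
                return instance
    return None  # No mapping found; the dataset keeps its default namespace.

mapping = {"snowflake://snaplogic": "snaplogic"}
print(resolve_platform_instance("SNOWFLAKE://SnapLogic", mapping, ["snowflake://snaplogic"]))
# prints: snaplogic
```

With the recipe values above, a namespace that differs only in letter case still resolves to the snaplogic platform instance, because it appears in case_insensitive_namespaces.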
The JSONSchema for this configuration is inlined below.
{
"$defs": {
"BucketDuration": {
"enum": [
"DAY",
"HOUR"
],
"title": "BucketDuration",
"type": "string"
},
"StatefulStaleMetadataRemovalConfig": {
"additionalProperties": false,
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"properties": {
"enabled": {
"default": false,
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"title": "Enabled",
"type": "boolean"
},
"remove_stale_metadata": {
"default": true,
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"title": "Remove Stale Metadata",
"type": "boolean"
},
"fail_safe_threshold": {
"default": 75.0,
"description": "Prevents large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'.",
"maximum": 100.0,
"minimum": 0.0,
"title": "Fail Safe Threshold",
"type": "number"
}
},
"title": "StatefulStaleMetadataRemovalConfig",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"bucket_duration": {
"$ref": "#/$defs/BucketDuration",
"default": "DAY",
"description": "Size of the time window to aggregate usage stats."
},
"end_time": {
"description": "Latest date of lineage/usage to consider. Default: Current time in UTC",
"format": "date-time",
"title": "End Time",
"type": "string"
},
"start_time": {
"default": null,
"description": "Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.",
"format": "date-time",
"title": "Start Time",
"type": "string"
},
"enable_stateful_usage_ingestion": {
"default": true,
"description": "Enable stateful lineage ingestion. This will store usage window timestamps after successful usage ingestion. and will not run usage ingestion for same timestamps in subsequent run. NOTE: This only works with use_queries_v2=False (legacy extraction path). For queries v2, use enable_stateful_time_window instead.",
"title": "Enable Stateful Usage Ingestion",
"type": "boolean"
},
"enable_stateful_lineage_ingestion": {
"default": true,
"description": "Enable stateful lineage ingestion. This will store lineage window timestamps after successful lineage ingestion. and will not run lineage ingestion for same timestamps in subsequent run. NOTE: This only works with use_queries_v2=False (legacy extraction path). For queries v2, use enable_stateful_time_window instead.",
"title": "Enable Stateful Lineage Ingestion",
"type": "boolean"
},
"stateful_ingestion": {
"anyOf": [
{
"$ref": "#/$defs/StatefulStaleMetadataRemovalConfig"
},
{
"type": "null"
}
],
"default": null
},
"platform": {
"default": "SnapLogic",
"title": "Platform",
"type": "string"
},
"username": {
"description": "Username",
"title": "Username",
"type": "string"
},
"password": {
"description": "Password",
"format": "password",
"title": "Password",
"type": "string",
"writeOnly": true
},
"base_url": {
"default": "https://elastic.snaplogic.com",
"description": "Url to your SnapLogic instance: `https://elastic.snaplogic.com`, or similar. Used for making API calls to SnapLogic.",
"title": "Base Url",
"type": "string"
},
"org_name": {
"description": "Organization name from SnapLogic instance",
"title": "Org Name",
"type": "string"
},
"namespace_mapping": {
"additionalProperties": true,
"default": {},
"description": "Mapping of namespaces to platform instances",
"title": "Namespace Mapping",
"type": "object"
},
"case_insensitive_namespaces": {
"default": [],
"description": "List of namespaces that should be treated as case insensitive",
"items": {},
"title": "Case Insensitive Namespaces",
"type": "array"
},
"create_non_snaplogic_datasets": {
"default": false,
"description": "Whether to create datasets for non-SnapLogic datasets (e.g., databases, S3, etc.)",
"title": "Create Non Snaplogic Datasets",
"type": "boolean"
}
},
"required": [
"username",
"password",
"org_name"
],
"title": "SnaplogicConfig",
"type": "object"
}
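The stateful_ingestion.fail_safe_threshold behavior described above can be sketched as follows. This is an illustrative model, not DataHub's actual implementation: if the percentage of entities missing relative to the previous successful run exceeds the threshold, stale-metadata soft deletion is blocked.

```python
def should_commit_soft_deletes(previous_entities, current_entities, fail_safe_threshold=75.0):
    """Return True if stale-entity soft deletes may proceed (illustrative sketch)."""
    if not previous_entities:
        return True  # Nothing to compare against on the first run.
    missing = previous_entities - current_entities
    change_percent = 100.0 * len(missing) / len(previous_entities)
    # Block deletion if the relative change exceeds the configured threshold.
    return change_percent <= fail_safe_threshold

prev = {"table_a", "table_b", "table_c", "table_d"}
# One of four entities disappeared: 25% change, below the default 75% threshold.
print(should_commit_soft_deletes(prev, {"table_a", "table_b", "table_c"}))  # prints: True
# All entities disappeared: 100% change, deletion is blocked as a fail-safe.
print(should_commit_soft_deletes(prev, set()))  # prints: False
```

The fail-safe exists because a misconfigured filter or credential change can make most entities vanish from a run; without the threshold, stateful ingestion would soft-delete them all.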
Capabilities
Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.
Limitations
Module behavior is constrained by source APIs, permissions, and metadata exposed by the platform. Refer to capability notes for unsupported or conditional features.
Troubleshooting
If ingestion fails, validate credentials, permissions, connectivity, and scope filters first. Then review ingestion logs for source-specific errors and adjust configuration accordingly.
Code Coordinates
- Class Name:
datahub.ingestion.source.snaplogic.snaplogic.SnaplogicSource - Browse on GitHub
If you've got any questions on configuring ingestion for SnapLogic, feel free to ping us on our Slack.
This page is auto-generated from the underlying source code. To make changes, please edit the relevant source files in the metadata-ingestion directory.
Tip: For quick typo fixes or documentation updates, you can click the ✏️ Edit icon directly in the GitHub UI to open a Pull Request. For larger changes and PR naming conventions, please refer to our Contributing Guide.