
SQL Queries

Incubating

Important Capabilities

  • Column-level Lineage: Parsed from SQL queries.
  • Table-Level Lineage: Parsed from SQL queries.

This source reads a newline-delimited JSON file containing SQL queries and parses them to generate lineage.

Query File Format

This file should contain one JSON object per line, with the following fields (an example line using the optional fields appears after the list):

  • query: string - The SQL query to parse.
  • timestamp (optional): number - The timestamp of the query, in seconds since the epoch.
  • user (optional): string - The user who ran the query. This user value will be directly converted into a DataHub user urn.
  • operation_type (optional): string - Platform-specific operation type, used if the operation type can't be parsed.
  • session_id (optional): string - Session identifier for temporary table resolution across queries.
  • downstream_tables (optional): string[] - Fallback list of tables that the query writes to, used if the query can't be parsed.
  • upstream_tables (optional): string[] - Fallback list of tables the query reads from, used if the query can't be parsed.
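
For illustration, here is a line that exercises the optional fields; the table names, session ID, and operation type are made up, and valid operation_type values are platform-specific:

{"query": "CREATE TABLE tmp_orders AS SELECT * FROM my_database.my_schema.orders", "timestamp": 1689232739.0, "user": "user_c", "session_id": "session-1234", "operation_type": "CREATE", "downstream_tables": ["my_database.my_schema.tmp_orders"], "upstream_tables": ["my_database.my_schema.orders"]}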

Lazy Schema Loading:

  • Fetches schemas on demand during query parsing instead of bulk-loading all schemas upfront
  • Caches fetched schemas to avoid repeated network requests
  • Significantly reduces startup time and memory usage, so large platforms can be ingested without memory issues (lazy loading can be disabled, as shown below)
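
Lazy loading is controlled by the enable_lazy_schema_loading flag (default: true). A minimal sketch of disabling it to restore upfront bulk loading:

source:
  type: sql-queries
  config:
    platform: "snowflake"
    query_file: "./queries.json"
    enable_lazy_schema_loading: false  # bulk-load all schemas upfront instead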

Query Processing:

  • Loads the entire query file into memory at once
  • Processes all queries sequentially before generating metadata work units
  • Preserves temp table mappings and lineage relationships to ensure consistent lineage tracking
  • Query deduplication is handled automatically by the SQL parsing aggregator

Incremental Lineage

When incremental_lineage is enabled, this source will emit lineage as patches rather than full overwrites. This allows you to add lineage edges without removing existing ones, which is useful for:

  • Gradually building up lineage from multiple sources
  • Preserving manually curated lineage
  • Avoiding conflicts when multiple ingestion processes target the same datasets

Note: Incremental lineage only applies to UpstreamLineage aspects. Other aspects like queries and usage statistics will still be emitted normally.
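
A minimal sketch of a recipe with incremental lineage turned on:

source:
  type: sql-queries
  config:
    platform: "snowflake"
    query_file: "./queries.json"
    incremental_lineage: true  # emit UpstreamLineage as patches rather than overwrites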

Temporary Table Support

For platforms like Athena that don't have native temporary tables, you can use the temp_table_patterns configuration to specify regex patterns that identify fake temporary tables. The source then treats matching tables the way sources with native temp-table support do, enabling proper lineage tracking across temporary table operations, as shown below.
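
For example, if fake temp tables on Athena follow a tmp_ naming convention, the recipe might look like this (the pattern is illustrative; adapt it to your convention):

source:
  type: sql-queries
  config:
    platform: "athena"
    query_file: "./queries.json"
    temp_table_patterns:
      - '.*\.tmp_.*'  # matches fully qualified names containing a ".tmp_" segment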

Example Queries File

{"query": "SELECT x FROM my_table", "timestamp": 1689232738.051, "user": "user_a", "downstream_tables": [], "upstream_tables": ["my_database.my_schema.my_table"]}
{"query": "INSERT INTO my_table VALUES (1, 'a')", "timestamp": 1689232737.669, "user": "user_b", "downstream_tables": ["my_database.my_schema.my_table"], "upstream_tables": []}

Note that this file does not represent a single JSON object, but instead newline-delimited JSON, in which each line is a separate JSON object.

CLI based Ingestion
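
Install the plugin before running ingestion. The extra name below follows DataHub's usual source-name convention and is an assumption; check the docs for your CLI version if it differs:

pip install 'acryl-datahub[sql-queries]'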

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

datahub_api:  # Only necessary if using a non-DataHub sink, e.g. the file sink
  server: http://localhost:8080
  timeout_sec: 60
source:
  type: sql-queries
  config:
    platform: "snowflake"
    default_db: "SNOWFLAKE"
    query_file: "./queries.json"
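
Save the recipe (e.g. as recipe.yaml) and run it with the DataHub CLI:

datahub ingest -c recipe.yaml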

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

  • platform (string, required): The platform for which to generate data, e.g. snowflake.
  • query_file (string, required): Path to the query file to ingest.
  • default_db (string or null, default: None): The default database to use for unqualified table names.
  • default_schema (string or null, default: None): The default schema to use for unqualified table names.
  • enable_lazy_schema_loading (boolean, default: True): Enable lazy schema loading for better performance. When enabled, schemas are fetched on demand instead of bulk-loaded upfront, reducing startup time and memory usage.
  • incremental_lineage (boolean, default: False): When enabled, emits lineage as incremental to the lineage already in DataHub. When disabled, re-states lineage on each run.
  • override_dialect (string or null, default: None): The SQL dialect to use when parsing queries. Overrides automatic dialect detection.
  • platform_instance (string or null, default: None): The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
  • env (string, default: PROD): The environment that all assets produced by this connector belong to.
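
Putting several of these together, a more fully specified recipe might look like the sketch below; the platform_instance and dialect values are illustrative:

source:
  type: sql-queries
  config:
    platform: "snowflake"
    platform_instance: "prod_warehouse"  # illustrative instance name
    env: "PROD"
    query_file: "./queries.json"
    default_db: "SNOWFLAKE"
    default_schema: "PUBLIC"
    override_dialect: "snowflake"  # skip automatic dialect detection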
  • aws_config (AwsConnectionConfig or null, default: None): AWS configuration for S3 access. Required when query_file is an S3 URI (s3://).
  • aws_config.aws_access_key_id (string or null, default: None): AWS access key ID. Can be auto-detected; see the AWS boto3 docs for details.
  • aws_config.aws_advanced_config (object): Advanced AWS configuration options. These are passed directly to botocore.config.Config.
  • aws_config.aws_endpoint_url (string or null, default: None): The AWS service endpoint. Normally constructed automatically, but can be overridden here.
  • aws_config.aws_profile (string or null, default: None): The named profile to use from AWS credentials. Falls back to the default profile if not specified and no access keys are provided. Profiles are configured in ~/.aws/credentials or ~/.aws/config.
  • aws_config.aws_proxy (string or null, default: None): A set of proxy configs to use with AWS. See the botocore.config docs for details.
  • aws_config.aws_region (string or null, default: None): AWS region code.
  • aws_config.aws_retry_mode (enum, one of "legacy", "standard", "adaptive"; default: standard)
  • aws_config.aws_retry_num (integer, default: 5): Number of times to retry failed AWS requests. See the botocore.retry docs for details.
  • aws_config.aws_secret_access_key (string or null, default: None): AWS secret access key. Can be auto-detected; see the AWS boto3 docs for details.
  • aws_config.aws_session_token (string or null, default: None): AWS session token. Can be auto-detected; see the AWS boto3 docs for details.
  • aws_config.read_timeout (number, default: 60): The timeout for reading from the connection, in seconds.
  • aws_config.aws_role (string, array, or null, default: None): AWS roles to assume. With the string format, the role ARN is specified directly. With the object format, the role is specified in the RoleArn field, and the additional available arguments are the same as boto3's STS.Client.assume_role.
  • aws_config.aws_role.union (one of string, AwsAssumeRoleConfig)
  • aws_config.aws_role.union.RoleArn (string, required): ARN of the role to assume.
  • aws_config.aws_role.union.ExternalId (string or null, default: None): External ID to use when assuming the role.
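
If the query file lives in S3, access is configured through aws_config. A sketch, with placeholder bucket, region, and role ARN:

source:
  type: sql-queries
  config:
    platform: "athena"
    query_file: "s3://my-bucket/queries.json"  # placeholder bucket/path
    aws_config:
      aws_region: "us-east-1"
      aws_role: "arn:aws:iam::123456789012:role/ingestion-role"  # placeholder ARN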
  • temp_table_patterns (array of string, default: []): Regex patterns for temporary tables to filter in lineage ingestion. Each regex must match the entire table name. Useful for platforms like Athena that don't have native temp tables but use naming patterns for fake temp tables.
  • usage (BaseUsageConfig): Usage statistics configuration.
  • usage.bucket_duration (enum, one of "DAY", "HOUR")
  • usage.end_time (string, date-time): Latest date of lineage/usage to consider. Default: current time in UTC.
  • usage.format_sql_queries (boolean, default: False): Whether to format SQL queries.
  • usage.include_operational_stats (boolean, default: True): Whether to display operational stats.
  • usage.include_read_operational_stats (boolean, default: False): Whether to report read operational stats. Experimental.
  • usage.include_top_n_queries (boolean, default: True): Whether to ingest the top_n_queries.
  • usage.start_time (string, date-time, default: None): Earliest date of lineage/usage to consider. Defaults to the last full day in UTC (or hour, depending on bucket_duration). You can also specify a relative time with respect to end_time, such as '-7 days' or '-7d'.
  • usage.top_n_queries (integer, default: 10): Number of top queries to save to each table.
  • usage.user_email_pattern (AllowDenyPattern): A class to store allow/deny regexes.
  • usage.user_email_pattern.ignoreCase (boolean or null, default: True): Whether to ignore case sensitivity during pattern matching.
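
A sketch of the usage block with a few of these options set; the deny pattern is illustrative and relies on AllowDenyPattern's standard allow/deny lists:

source:
  type: sql-queries
  config:
    platform: "snowflake"
    query_file: "./queries.json"
    usage:
      bucket_duration: "DAY"
      top_n_queries: 20
      format_sql_queries: true
      user_email_pattern:
        deny:
          - '.*@bots\.example\.com'  # illustrative: exclude service accounts
        ignoreCase: true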

Code Coordinates

  • Class Name: datahub.ingestion.source.sql_queries.SqlQueriesSource

Questions

If you've got any questions on configuring ingestion for SQL Queries, feel free to ping us on our Slack.