
Postgres

Certified

Important Capabilities

Capability | Notes
Asset Containers | Enabled by default. Supported for types - Database, Schema.
Classification | Optionally enabled via classification.enabled.
Column-level Lineage | Enabled by default to get lineage for views via include_view_column_lineage. Supported for types - View.
Data Profiling | Optionally enabled via configuration.
Descriptions | Enabled by default.
Detect Deleted Entities | Enabled by default via stateful ingestion.
Domains | Enabled by default.
Platform Instance | Enabled by default.
Schema Metadata | Enabled by default.
Table-Level Lineage | Enabled by default to get lineage for views via include_view_lineage. Supported for types - View.
Test Connection | Enabled by default.

This plugin extracts the following:

  • Metadata for databases, schemas, views, tables, and stored procedures
  • Column types associated with each table
  • Also supports PostGIS extensions
  • Table, row, and column statistics via optional SQL profiling

Query-Based Lineage for PostgreSQL

DataHub can extract table-level lineage from your PostgreSQL query history using the pg_stat_statements extension. This feature analyzes executed SQL queries to automatically discover upstream and downstream dataset dependencies.
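
Conceptually, the connector reads recent query activity from the pg_stat_statements view, ranks it, and parses the SQL text to infer lineage. The exact internal query is an implementation detail of the connector; the sketch below is only illustrative of the kind of data involved (column names assume PostgreSQL 13+):

-- Illustrative sketch only (not the connector's actual internal query):
-- the kind of data read from pg_stat_statements for lineage analysis.
SELECT
    s.query,                   -- normalized SQL text, parsed for source/target tables
    s.calls,                   -- execution count
    s.total_exec_time,         -- total execution time in ms (PostgreSQL 13+ column name)
    u.usename AS executed_by   -- user who ran the query (used for usage statistics)
FROM pg_stat_statements AS s
JOIN pg_user AS u ON u.usesysid = s.userid
ORDER BY s.total_exec_time DESC, s.calls DESC
LIMIT 1000;                    -- compare max_queries_to_extract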

Overview

The query-based lineage feature:

  • Extracts lineage from INSERT...SELECT, CREATE TABLE AS SELECT, CREATE VIEW, and other DML/DDL statements
  • Prioritizes important queries by execution frequency and time
  • Respects your filters using configurable exclude patterns
  • Generates usage statistics showing which tables are queried and by whom

Prerequisites

0. PostgreSQL Version Requirement

PostgreSQL 13 or later is required for query-based lineage extraction.

Why: PostgreSQL 13 changed column names in the pg_stat_statements view:

  • PostgreSQL 12 and earlier: total_time, min_time, max_time, mean_time
  • PostgreSQL 13+: total_exec_time, min_exec_time, max_exec_time, mean_exec_time

The DataHub connector uses the PostgreSQL 13+ column names. If you attempt to use this feature with PostgreSQL 12 or earlier, you'll receive a clear error message:

PostgreSQL version 12.0 detected. Query-based lineage requires PostgreSQL 13+
due to column name changes in pg_stat_statements (total_time -> total_exec_time).
Please upgrade to PostgreSQL 13 or later.

Solution: Upgrade to PostgreSQL 13 or later to use query-based lineage extraction.
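
If you want to confirm the requirement up front, the following checks are one way to do it (illustrative; the second statement assumes the pg_stat_statements extension from the next step is already installed):

-- Returns true on PostgreSQL 13 or later
SELECT current_setting('server_version_num')::int >= 130000 AS is_pg13_or_later;

-- Fails with "column ... does not exist" on PostgreSQL 12 and earlier
SELECT total_exec_time, mean_exec_time
FROM pg_stat_statements
LIMIT 1;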

1. Enable pg_stat_statements Extension

The pg_stat_statements extension must be installed and loaded. This extension tracks query execution statistics.

Step 1: Load the extension in postgresql.conf

Add or update the following line in your postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'

If you already have other extensions loaded (e.g., 'pg_cron,pg_stat_statements'), append pg_stat_statements to the comma-separated list.

Step 2: Restart PostgreSQL

The extension requires a database restart to load:

# On Linux (systemd)
sudo systemctl restart postgresql

# On macOS (Homebrew)
brew services restart postgresql

# Or using pg_ctl
pg_ctl restart -D /path/to/data/directory

Step 3: Create the extension in your database

Connect to each database you want to monitor and create the extension:

-- Connect to your database
\c your_database

-- Create the extension (requires superuser or CREATE privileges)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Verify installation
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';

2. Grant Required Permissions

The DataHub user needs permission to read from pg_stat_statements.

Option 1: Grant pg_read_all_stats role (PostgreSQL 10+, recommended)

-- Grant the pg_read_all_stats role to your DataHub user
GRANT pg_read_all_stats TO datahub_user;

Option 2: Use a superuser account

If you cannot grant pg_read_all_stats, you can use a superuser account instead. However, this is not recommended for production due to the security implications.

Verify permissions

-- Check if user has the required role
-- Check if user has the required role
SELECT
    pg_has_role(current_user, 'pg_read_all_stats', 'MEMBER') AS has_stats_role,
    usesuper AS is_superuser
FROM pg_user
WHERE usename = current_user;

The query should return true for at least one column.

3. Configure Query Retention (Optional)

By default, pg_stat_statements stores the last 5000 queries. You can adjust this in postgresql.conf:

# Maximum number of queries tracked
pg_stat_statements.max = 10000

# Track nested statements (functions, procedures)
pg_stat_statements.track = all

After changing these settings, restart PostgreSQL.
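
To confirm the new values are active after the restart, you can inspect them from a SQL session (illustrative check; requires the pg_stat_statements library to be loaded):

-- Current pg_stat_statements settings
SHOW pg_stat_statements.max;
SHOW pg_stat_statements.track;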

Configuration

Enable query-based lineage in your DataHub recipe:

source:
  type: postgres
  config:
    host_port: "localhost:5432"
    database: "your_database"
    username: "datahub_user"
    password: "your_password"

    # Enable query-based lineage extraction
    include_query_lineage: true

    # Optional: Configure lineage extraction
    max_queries_to_extract: 1000  # Default: 1000
    min_query_calls: 10           # Only extract queries executed ≥10 times

    # Optional: Exclude specific query patterns
    query_exclude_patterns:
      - "%pg_catalog%"  # Exclude system catalog queries
      - "%temp_%"       # Exclude temporary table queries
      - "%staging%"     # Exclude staging queries

    # Optional: Enable usage statistics
    include_usage_statistics: true

Configuration Options

Option | Type | Default | Description
include_query_lineage | boolean | false | Enable query-based lineage extraction from pg_stat_statements.
max_queries_to_extract | integer | 1000 | Maximum number of queries to extract. Queries are prioritized by execution time and frequency.
min_query_calls | integer | 1 | Minimum number of times a query must be executed to be included in lineage analysis. Higher values focus on frequently-used queries.
query_exclude_patterns | list[string] | [] | SQL LIKE patterns to exclude queries. Patterns are case-insensitive. Example: "%pg_catalog%" excludes all queries containing pg_catalog.
include_usage_statistics | boolean | false | Generate dataset usage metrics from query history. Requires include_query_lineage: true. Shows unique user counts, query frequencies, and column access patterns in the DataHub UI.
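
As a point of reference for query_exclude_patterns, the patterns behave like case-insensitive SQL LIKE matches. The statement below only illustrates that semantics via ILIKE; it is not how the connector applies the filter internally:

-- Returns true, so a query like this would be excluded by the "%pg_catalog%" pattern
SELECT 'SELECT * FROM PG_CATALOG.pg_class' ILIKE '%pg_catalog%' AS would_be_excluded;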

Supported Lineage Patterns

The lineage extractor recognizes common SQL patterns:

INSERT...SELECT

INSERT INTO target_table (col1, col2)
SELECT col1, col2 FROM source_table;

Lineage: source_table → target_table

CREATE TABLE AS SELECT (CTAS)

CREATE TABLE new_table AS
SELECT a.col1, b.col2
FROM table_a a
JOIN table_b b ON a.id = b.id;

Lineage: table_a, table_b → new_table

CREATE VIEW

CREATE VIEW customer_summary AS
SELECT c.customer_id, COUNT(o.order_id) as order_count
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id;

Lineage: customers, orders → customer_summary

Complex JOINs and CTEs

WITH monthly_revenue AS (
SELECT customer_id, SUM(amount) as revenue
FROM transactions
WHERE date >= '2024-01-01'
GROUP BY customer_id
)
INSERT INTO customer_metrics (customer_id, total_revenue)
SELECT mr.customer_id, mr.revenue
FROM monthly_revenue mr
JOIN customers c ON mr.customer_id = c.id
WHERE c.active = true;

Lineage: transactions, customers → customer_metrics

Verification

After running ingestion, verify that lineage was extracted:

1. Check ingestion logs

Look for messages like:

INFO - Prerequisites check: Prerequisites met
INFO - Extracted 850 queries from pg_stat_statements in 1.23 seconds
INFO - Processed 850 queries for lineage extraction (12 failed) in 4.56 seconds

2. Query pg_stat_statements directly

-- Check if queries are being tracked
SELECT COUNT(*) FROM pg_stat_statements;

-- View most frequently executed queries
-- View most frequently executed queries
SELECT
    calls,
    total_exec_time / 1000 AS total_seconds,
    query
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;

3. Verify in DataHub UI

Navigate to a dataset in DataHub and check the "Lineage" tab. You should see upstream and downstream dependencies derived from query history.
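
If the Lineage tab looks sparse, it can also help to confirm that pg_stat_statements actually contains lineage-producing statements. The patterns below are a rough heuristic for the statement types listed under Supported Lineage Patterns, not the connector's own matching logic:

-- Rough count of statements that can produce table-level lineage
SELECT COUNT(*) AS lineage_candidates
FROM pg_stat_statements
WHERE query ILIKE 'insert into%select%'
   OR query ILIKE 'create table%as%'
   OR query ILIKE 'create%view%as%';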

Troubleshooting

PostgreSQL Version Too Old

Error message:

ERROR - PostgreSQL version 12.0 detected. Query-based lineage requires PostgreSQL 13+
due to column name changes in pg_stat_statements (total_time -> total_exec_time).
Please upgrade to PostgreSQL 13 or later.

Solution:

Upgrade your PostgreSQL installation to version 13 or later. PostgreSQL 13 was released in September 2020 and introduced breaking changes to the pg_stat_statements view column names.

Check your current version:

SELECT version();
-- Or
SHOW server_version;

Upgrade path:

  • PostgreSQL Upgrade Documentation
  • Use pg_upgrade for in-place upgrades
  • Consider managed services (AWS RDS, Google Cloud SQL, Azure Database) which support easy version upgrades

Extension Not Installed

Error message:

ERROR - pg_stat_statements extension not installed. Install with: CREATE EXTENSION pg_stat_statements;

Solution:

  1. Verify shared_preload_libraries includes pg_stat_statements in postgresql.conf
  2. Restart PostgreSQL
  3. Connect to your database and run CREATE EXTENSION pg_stat_statements; (the checks below confirm the result)
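
To confirm both the preload setting and the extension object from a SQL session (illustrative):

-- Should list pg_stat_statements once the server has restarted with the new setting
SHOW shared_preload_libraries;

-- Should return one row if CREATE EXTENSION succeeded in this database
SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';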

Permission Denied

Error message:

ERROR - Insufficient permissions. Grant pg_read_all_stats role: GRANT pg_read_all_stats TO <user>;

Solution:

-- Grant the required role
GRANT pg_read_all_stats TO datahub_user;

-- Or verify current permissions
-- Or verify current permissions
SELECT
    pg_has_role(current_user, 'pg_read_all_stats', 'MEMBER') AS has_stats_role,
    usesuper AS is_superuser
FROM pg_user
WHERE usename = current_user;

No Queries Extracted

Possible causes:

  1. No queries in pg_stat_statements

    • The extension tracks queries since the last reset or restart
    • Run some queries against your database, then re-run ingestion
  2. Queries excluded by filters

    • Check min_query_calls - lower this value to include less-frequent queries
    • Review query_exclude_patterns - ensure you're not excluding too broadly
  3. Empty pg_stat_statements

    -- Check if queries are being tracked
    SELECT COUNT(*) FROM pg_stat_statements;

    -- If 0, reset and run some test queries
    SELECT pg_stat_statements_reset();

    -- Run sample queries
    SELECT * FROM your_table LIMIT 10;

    -- Verify queries were tracked
    SELECT COUNT(*) FROM pg_stat_statements;

Query Text Truncated

By default, PostgreSQL truncates query text to 1024 characters. Increase this limit in postgresql.conf:

# Increase max query length tracked (requires restart)
track_activity_query_size = 4096
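
You can check the current limit before deciding whether to raise it (illustrative):

-- Current maximum query text length tracked (default is 1024 bytes)
SHOW track_activity_query_size;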

Performance Considerations

Memory usage:

  • Each tracked query consumes ~1KB in shared memory
  • Default 5000 queries = ~5MB memory
  • Adjust pg_stat_statements.max based on your available memory (a quick check follows this list)
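
One rough way to see how many statements are currently tracked relative to the configured ceiling (illustrative; requires the extension to be loaded):

-- Compare tracked statements against pg_stat_statements.max
SELECT
    (SELECT COUNT(*) FROM pg_stat_statements)        AS tracked_queries,
    current_setting('pg_stat_statements.max')::int   AS max_tracked;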

Query overhead:

  • The extension adds minimal overhead (<1%) to query execution
  • Statistics are updated asynchronously

Ingestion performance:

  • Extracting 1000 queries takes 1-5 seconds depending on database load
  • Use max_queries_to_extract to limit extraction time
  • Schedule ingestion during off-peak hours for large query volumes

Limitations

  1. Historical data only

    • Lineage is extracted from executed queries, not from schema definitions
    • Queries must have been executed since the last pg_stat_statements_reset()
  2. Dynamic SQL

    • Parameterized queries show parameter placeholders, not actual values
    • Example: SELECT * FROM users WHERE id = $1 (value not captured)
  3. Complex transformations

    • The extractor may not parse extremely complex queries with nested CTEs or exotic syntax
    • Failed queries are logged but don't block ingestion
  4. No column-level lineage

    • Currently supports table-level lineage only
    • Column-level lineage may be added in future releases

Best Practices

  1. Reset pg_stat_statements periodically

    -- Reset statistics (caution: clears all tracked queries)
    SELECT pg_stat_statements_reset();

    This clears out stale entries and keeps lineage extraction focused on recent query patterns; memory itself is already capped by pg_stat_statements.max.

  2. Use meaningful filters

    • Exclude test, temporary, and system queries with query_exclude_patterns
    • Set min_query_calls to focus on production workloads
  3. Monitor memory usage

    -- Check current query count
    SELECT COUNT(*) as query_count FROM pg_stat_statements;

    -- Check the on-disk size of the database (pg_stat_statements memory itself
    -- is bounded by pg_stat_statements.max, so this is only a coarse proxy)
    SELECT pg_size_pretty(pg_database_size('your_database'));
  4. Schedule regular ingestion

    • Run ingestion daily or weekly to capture lineage from new queries
    • More frequent ingestion provides more up-to-date lineage graphs

CLI based Ingestion

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: postgres
  config:
    # Coordinates
    host_port: localhost:5432
    database: DemoDatabase

    # Credentials
    username: user
    password: pass

    # Optional: SSL configuration.
    # options:
    #   connect_args:
    #     sslcert: "<<path to sslcert>>"
    #     sslkey: "<<path to sslkey>>"
    #     sslrootcert: "<<path to verification ca chain>>"
    #     sslmode: "verify-full"

    # AWS RDS IAM Authentication (alternative to password)
    # auth_mode: "AWS_IAM"
    # aws_config:
    #   aws_region: us-west-2

    # Advanced AWS configuration with profiles, role assumption, and retry settings:
    # auth_mode: "AWS_IAM"
    # aws_config:
    #   aws_region: us-west-2
    #   aws_profile: production
    #   aws_role: "arn:aws:iam::123456789:role/DataHubRole"
    #   aws_retry_num: 10
    #   aws_retry_mode: adaptive

    # When auth_mode is "AWS_IAM", the password field is ignored.
    # AWS credentials can be configured via the AWS CLI, environment variables, or an IAM role.

    # Query-based Lineage (optional)
    # Extract lineage from executed queries using the pg_stat_statements extension.
    # Prerequisites:
    #   1. PostgreSQL 13 or later (required for the pg_stat_statements column names).
    #      Note: PostgreSQL 12 and earlier use different column names (total_time
    #      instead of total_exec_time). The connector will detect older versions
    #      and fail with a clear error message. Upgrade to PostgreSQL 13+ to use
    #      this feature.
    #   2. CREATE EXTENSION pg_stat_statements;
    #   3. GRANT pg_read_all_stats TO datahub_user;
    # include_query_lineage: true
    # max_queries_to_extract: 1000  # Default: 1000, valid range: 1-10000
    # min_query_calls: 10           # Default: 1, minimum: 0
    # query_exclude_patterns:
    #   - "%pg_catalog%"
    #   - "%temp_%"

    # Usage Statistics (optional, requires include_query_lineage: true)
    # Generates dataset usage metrics from query history:
    #   - Unique user counts per table
    #   - Query frequencies and top N most common queries
    #   - Column-level access patterns
    # Results appear in the DataHub UI: Dataset Profile > Usage tab
    # include_usage_statistics: true
    # top_n_queries: 20          # Number of top queries to save per table
    # bucket_duration: DAY       # Time window for aggregating usage (HOUR or DAY)
    # format_sql_queries: false  # Whether to format SQL queries for readability

sink:
  # sink configs

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

Field | Description
host_port 
string
host URL
auth_mode
Enum
One of: "PASSWORD", "AWS_IAM"
bucket_duration
Enum
One of: "DAY", "HOUR"
convert_urns_to_lowercase
boolean
Whether to convert dataset urns to lowercase.
Default: False
database
One of string, null
database (catalog). If set to Null, all databases will be considered for ingestion.
Default: None
end_time
string(date-time)
Latest date of lineage/usage to consider. Default: Current time in UTC
format_sql_queries
boolean
Whether to format sql queries
Default: False
include_operational_stats
boolean
Whether to display operational stats.
Default: True
include_query_lineage
boolean
Enable query-based lineage extraction from pg_stat_statements. Requires the pg_stat_statements extension to be installed and enabled. See documentation for setup instructions.
Default: False
include_read_operational_stats
boolean
Whether to report read operational stats. Experimental.
Default: False
include_stored_procedures
boolean
Include ingest of stored procedures.
Default: True
include_table_location_lineage
boolean
If the source supports it, include table lineage to the underlying storage location.
Default: True
include_tables
boolean
Whether tables should be ingested.
Default: True
include_top_n_queries
boolean
Whether to ingest the top_n_queries.
Default: True
include_usage_statistics
boolean
Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab.
Default: False
include_view_column_lineage
boolean
Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled.
Default: True
include_view_lineage
boolean
Populates view->view and table->view lineage using DataHub's sql parser.
Default: True
include_views
boolean
Whether views should be ingested.
Default: True
incremental_lineage
boolean
When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.
Default: False
initial_database
string
Initial database used to query for the list of databases, when ingesting multiple databases. Note: this is not used if database or sqlalchemy_uri are provided.
Default: postgres
max_queries_to_extract
integer
Maximum number of queries to extract from pg_stat_statements for lineage analysis. Queries are prioritized by execution time and frequency.
Default: 1000
min_query_calls
integer
Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries.
Default: 1
options
object
Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args.
password
One of string(password), null
password
Default: None
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
scheme
string
database scheme
Default: postgresql+psycopg2
sqlalchemy_uri
One of string, null
URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.
Default: None
start_time
string(date-time)
Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.
Default: None
top_n_queries
integer
Number of top queries to save to each table.
Default: 10
use_file_backed_cache
boolean
Whether to use a file backed cache for the view definitions.
Default: True
username
One of string, null
username
Default: None
env
string
The environment that all assets produced by this connector belong to
Default: PROD
aws_config
AwsConnectionConfig
Common AWS credentials config.

Currently used by:
- Glue source
- SageMaker source
- dbt source
aws_config.aws_access_key_id
One of string, null
AWS access key ID. Can be auto-detected, see the AWS boto3 docs for details.
Default: None
aws_config.aws_advanced_config
object
Advanced AWS configuration options. These are passed directly to botocore.config.Config.
aws_config.aws_endpoint_url
One of string, null
The AWS service endpoint. This is normally constructed automatically, but can be overridden here.
Default: None
aws_config.aws_profile
One of string, null
The named profile to use from AWS credentials. Falls back to default profile if not specified and no access keys provided. Profiles are configured in ~/.aws/credentials or ~/.aws/config.
Default: None
aws_config.aws_proxy
One of string, null
A set of proxy configs to use with AWS. See the botocore.config docs for details.
Default: None
aws_config.aws_region
One of string, null
AWS region code.
Default: None
aws_config.aws_retry_mode
Enum
One of: "legacy", "standard", "adaptive"
Default: standard
aws_config.aws_retry_num
integer
Number of times to retry failed AWS requests. See the botocore.retry docs for details.
Default: 5
aws_config.aws_secret_access_key
One of string, null
AWS secret access key. Can be auto-detected, see the AWS boto3 docs for details.
Default: None
aws_config.aws_session_token
One of string, null
AWS session token. Can be auto-detected, see the AWS boto3 docs for details.
Default: None
aws_config.read_timeout
number
The timeout for reading from the connection (in seconds).
Default: 60
aws_config.aws_role
One of string, array, null
AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field and additional available arguments are the same as boto3's STS.Client.assume_role.
Default: None
aws_config.aws_role.union
One of string, AwsAssumeRoleConfig
aws_config.aws_role.union.RoleArn 
string
ARN of the role to assume.
aws_config.aws_role.union.ExternalId
One of string, null
External ID to use when assuming the role.
Default: None
database_pattern
AllowDenyPattern
A class to store allow deny regexes
database_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain
map(str,AllowDenyPattern)
A class to store allow deny regexes
domain.key.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
domain.key.allow.string
string
domain.key.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain.key.deny
array
List of regex patterns to exclude from ingestion.
Default: []
domain.key.deny.string
string
procedure_pattern
AllowDenyPattern
A class to store allow deny regexes
procedure_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profile_pattern
AllowDenyPattern
A class to store allow deny regexes
profile_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
query_exclude_patterns
One of array, null
SQL LIKE patterns to exclude from query extraction. Example: ['%pg_catalog%', '%temp_%'] to exclude catalog and temp tables.
Default: None
query_exclude_patterns.string
string
schema_pattern
AllowDenyPattern
A class to store allow deny regexes
schema_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
table_pattern
AllowDenyPattern
A class to store allow deny regexes
table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
user_email_pattern
AllowDenyPattern
A class to store allow deny regexes
user_email_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
view_pattern
AllowDenyPattern
A class to store allow deny regexes
view_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification
ClassificationConfig
classification.enabled
boolean
Whether classification should be used to auto-detect glossary terms
Default: False
classification.info_type_to_term
map(str,string)
classification.max_workers
integer
Number of worker processes to use for classification. Set to 1 to disable.
Default: 4
classification.sample_size
integer
Number of sample values used for classification.
Default: 100
classification.classifiers
array
Classifiers to use to auto-detect glossary terms. If more than one classifier is configured, infotype predictions from the classifier defined later in the sequence take precedence.
Default: [{'type': 'datahub', 'config': None}]
classification.classifiers.DynamicTypedClassifierConfig
DynamicTypedClassifierConfig
classification.classifiers.DynamicTypedClassifierConfig.type 
string
The type of the classifier to use. For DataHub, use datahub
classification.classifiers.DynamicTypedClassifierConfig.config
One of object, null
The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.
Default: None
classification.column_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.column_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification.table_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profiling
GEProfilingConfig
profiling.catch_exceptions
boolean
Default: True
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.field_sample_values_limit
integer
Upper limit for number of sample values to collect for all columns.
Default: 20
profiling.include_field_distinct_count
boolean
Whether to profile for the number of distinct values for each column.
Default: True
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: False
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_median_value
boolean
Whether to profile for the median value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: False
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.limit
One of integer, null
Max number of documents to profile. By default, profiles all documents.
Default: None
profiling.max_number_of_fields_to_profile
One of integer, null
A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.
Default: None
profiling.max_workers
integer
Number of worker threads to use for profiling. Set to 1 to disable.
Default: 20
profiling.offset
One of integer, null
Offset in documents to profile. By default, uses no offset.
Default: None
profiling.partition_datetime
One of string(date-time), null
If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only Bigquery supports this.
Default: None
profiling.partition_profiling_enabled
boolean
Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling.
Default: True
profiling.profile_external_tables
boolean
Whether to profile external tables. Only Snowflake and Redshift support this.
Default: False
profiling.profile_if_updated_since_days
One of number, null
Profile a table only if it has been updated within this many days. If set to null, there is no last-modified-time constraint on which tables to profile. Supported only in Snowflake and BigQuery.
Default: None
profiling.profile_nested_fields
boolean
Whether to profile complex types like structs, arrays and maps.
Default: False
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only, or include column-level profiling as well.
Default: False
profiling.profile_table_row_count_estimate_only
boolean
Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL.
Default: False
profiling.profile_table_row_limit
One of integer, null
Profile tables only if their row count is less than specified count. If set to null, no limit on the row count of tables to profile. Supported only in Snowflake, BigQuery. Supported for Oracle based on gathered stats.
Default: 5000000
profiling.profile_table_size_limit
One of integer, null
Profile tables only if their size is less than specified GBs. If set to null, no limit on the size of tables to profile. Supported only in Snowflake, BigQuery and Databricks. Supported for Oracle based on calculated size from gathered stats.
Default: 5
profiling.query_combiner_enabled
boolean
This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.
Default: True
profiling.report_dropped_profiles
boolean
Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes.
Default: False
profiling.sample_size
integer
Number of rows to be sampled from the table for column-level profiling. Applicable only if use_sampling is set to True.
Default: 10000
profiling.turn_off_expensive_profiling_metrics
boolean
Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.
Default: False
profiling.use_sampling
boolean
Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables.
Default: True
profiling.operation_config
OperationConfig
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling; it only adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
One of integer, null
Number between 1 and 31 for the date of the month (both inclusive). If not specified, this field does not take effect.
Default: None
profiling.operation_config.profile_day_of_week
One of integer, null
Number between 0 and 6 for the day of the week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, this field does not take effect.
Default: None
profiling.tags_to_ignore_sampling
One of array, null
Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on use_sampling.
Default: None
profiling.tags_to_ignore_sampling.string
string
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes and blocks the state from committing when accidental changes to the source configuration cause the relative change percent in entities, compared to the previous state, to exceed the 'fail_safe_threshold'.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True

Code Coordinates

  • Class Name: datahub.ingestion.source.sql.postgres.source.PostgresSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Postgres, feel free to ping us on our Slack.