
Iceberg

Incubating

Important Capabilities

Capability | Notes
Data Profiling | Optionally enabled via configuration.
Descriptions | Enabled by default.
Detect Deleted Entities | Enabled by default via stateful ingestion.
Domains | Currently not supported.
Extract Ownership | Automatically ingests ownership information from table properties based on user_ownership_property and group_ownership_property.
Partition Support | Currently not supported.
Platform Instance | Optionally enabled via configuration; an Iceberg instance represents the catalog name where the table is stored.

Integration Details

The DataHub Iceberg source plugin extracts metadata from Iceberg tables stored in a distributed or local file system. Typically, Iceberg tables are stored in a distributed file system like S3 or Azure Data Lake Storage (ADLS) and registered in a catalog. There are various catalog implementations like Filesystem-based, RDBMS-based or even REST-based catalogs. This Iceberg source plugin relies on the pyiceberg library.

CLI based Ingestion

Config Details

Note that a . is used to denote nested fields in the YAML recipe.
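
For example, a field listed below as profiling.enabled corresponds to the following nesting in the recipe (an illustrative fragment, not a complete recipe):

source:
  type: "iceberg"
  config:
    profiling:
      enabled: true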

Field | Description
catalog
map(str,object)
Catalog configuration, passed as-is to the pyiceberg library to set up the connection (see Setting up connection to an Iceberg catalog below).
group_ownership_property
One of string, null
Iceberg table property to look for a CorpGroup owner. Can only hold a single group value. If the property has no value, no owner information will be emitted.
Default: None
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
processing_threads
integer
How many threads will be processing tables
Default: 1
user_ownership_property
One of string, null
Iceberg table property to look for a CorpUser owner. Can only hold a single user value. If the property has no value, no owner information will be emitted.
Default: owner
env
string
The environment that all assets produced by this connector belong to
Default: PROD
namespace_pattern
AllowDenyPattern
Regex patterns (allow/deny) for filtering the namespaces to ingest.
namespace_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
table_pattern
AllowDenyPattern
Regex patterns (allow/deny) for filtering the tables to ingest.
table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profiling
IcebergProfilingConfig
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.operation_config
OperationConfig
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling; it just adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
One of integer, null
Number between 1 and 31 for the day of the month (both inclusive). If not specified, defaults to None and this field does not take effect.
Default: None
profiling.operation_config.profile_day_of_week
One of integer, null
Number between 0 and 6 for the day of the week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to None and this field does not take effect.
Default: None
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Iceberg Stateful Ingestion Config.
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes and stops the state from committing when accidental changes to the source configuration cause the relative change in entities, compared to the previous state, to exceed the fail_safe_threshold.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
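
To illustrate how these options fit together, here is a sketch of a recipe that combines several of the fields documented above (profiling, filtering, ownership extraction, stateful ingestion). The catalog details, property names, and pattern below are placeholders to adapt to your environment, not recommendations:

source:
  type: "iceberg"
  config:
    env: PROD
    catalog:
      my_catalog:                              # connection details go here; see the examples below
        type: "rest"
        uri: "http://iceberg-catalog:8181"
    processing_threads: 4                      # example value
    user_ownership_property: owner
    group_ownership_property: owning_group     # hypothetical table property name
    table_pattern:                             # assumes the standard AllowDenyPattern allow/deny lists
      allow:
        - "marketing\\..*"                     # hypothetical pattern; adjust to the table names in your catalog
    profiling:
      enabled: true
    stateful_ingestion:                        # also requires a pipeline_name at the top level of the recipe
      enabled: true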

Setting up connection to an Iceberg catalog

There are multiple servers compatible with the Iceberg Catalog specification. DataHub's Iceberg connector uses the pyiceberg library to extract metadata from them. The recipe for the source consists of two parts:

  1. The catalog part, which is passed as-is to the pyiceberg library and configures the connection and its details (i.e. authentication). The name of the catalog specified in the recipe has no consequence; it is just a formal requirement of the library. Only one catalog will be considered for the ingestion.
  2. The remaining configuration consists of parameters, such as env or stateful_ingestion, which are standard DataHub ingestion configuration parameters and are described in the Config Details chapter.

This chapter showcases several examples of setting up connections to an Iceberg catalog, varying based on the underlying implementation. Iceberg is designed to keep the catalog and the warehouse separate, which is reflected in how we configure it. This is especially visible when using the Iceberg REST Catalog, which can use many blob storages (AWS S3, Azure Blob Storage, MinIO) as a warehouse.

Note that, for advanced users, it is possible to specify a custom catalog client implementation via the py-catalog-impl configuration option; refer to the pyiceberg documentation for details.
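
As a minimal sketch (the class path is hypothetical, and any additional keys are simply passed through to your implementation), such a catalog entry could look like:

source:
  type: "iceberg"
  config:
    catalog:
      custom_demo:
        py-catalog-impl: "mypackage.catalogs.MyCustomCatalog"   # hypothetical fully qualified class path
        uri: "http://custom-catalog:8181"                       # plus whatever options your implementation expects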

Glue catalog + S3 warehouse

The minimal configuration for connecting to Glue catalog with S3 warehouse:

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      my_catalog:
        type: "glue"
        s3.region: "us-west-2"
        region_name: "us-west-2"

Where us-west-2 is the region from which you want to ingest. The above configuration will work assuming the pod or environment in which you run the DataHub CLI is already authenticated to AWS and has the proper permissions granted (see below). If you need to specify secrets directly, use the following configuration as a template:

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      demo:
        type: "glue"
        s3.region: "us-west-2"
        s3.access-key-id: "${AWS_ACCESS_KEY_ID}"
        s3.secret-access-key: "${AWS_SECRET_ACCESS_KEY}"
        s3.session-token: "${AWS_SESSION_TOKEN}"
        aws_access_key_id: "${AWS_ACCESS_KEY_ID}"
        aws_secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
        aws_session_token: "${AWS_SESSION_TOKEN}"
        region_name: "us-west-2"

This example uses references to fill in credentials (either from secrets defined in Managed Ingestion or from environment variables). It is possible (but not recommended, due to security concerns) to provide those values in plaintext, directly in the recipe.

Glue and S3 permissions required

The IAM policy required by the role used by the ingestor to read metadata from the Glue Iceberg Catalog and the S3 warehouse is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["glue:GetDatabases", "glue:GetTables", "glue:GetTable"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetObjectVersion"],
      "Resource": [
        "arn:aws:s3:::<bucket used by the warehouse>",
        "arn:aws:s3:::<bucket used by the warehouse>/*"
      ]
    }
  ]
}

Iceberg REST Catalog + MinIO

The following configuration assumes MinIO credentials are supplied via the s3.* properties. Note the specification of s3.endpoint, assuming MinIO listens on port 9000 at minio-host. The uri parameter points at the Iceberg REST Catalog (IRC) endpoint (in this case iceberg-catalog:8181).

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog:8181"
        s3.access-key-id: "${AWS_ACCESS_KEY_ID}"
        s3.secret-access-key: "${AWS_SECRET_ACCESS_KEY}"
        s3.region: "eu-east-1"
        s3.endpoint: "http://minio-host:9000"

Iceberg REST Catalog (with authentication) + S3

This example assumes the IRC requires token authentication (via the Authorization header). There are more options available; see https://py.iceberg.apache.org/configuration/#rest-catalog for details. Moreover, the assumption here is that the environment (i.e. pod) is already authenticated to perform actions against AWS S3.

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog-uri"
        token: "token-value"
        s3.region: "us-west-2"

Special REST connection parameters for resiliency

Unlike the other parameters provided in the dictionary under the catalog key, the connection parameter is a custom DataHub feature that injects connection resiliency parameters into the REST connection made by the ingestor. connection accepts two parameters:

  • timeout is specified in seconds and must be a whole number (or null to turn it off)
  • retry is a complex object holding the parameters used to create a urllib3 Retry object. There are many possible parameters; the most important are total (total retries) and backoff_factor. See the urllib3 Retry documentation for details.

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      demo:
        type: "rest"
        uri: "http://iceberg-catalog-uri"
        connection:
          retry:
            backoff_factor: 0.5
            total: 3
          timeout: 120

Google BigLake REST Catalog + GCS warehouse

DataHub supports ingesting metadata from Google BigLake via the Iceberg REST Catalog API. BigLake provides unified governance and security for data across data lakes and data warehouses.

PyIceberg 0.9+ natively supports BigLake authentication using Google Cloud's Application Default Credentials (ADC).

Prerequisites

  1. GCP Project with BigLake API enabled:

    gcloud services enable biglake.googleapis.com --project=YOUR_PROJECT_ID
  2. Service Account with required permissions:

    • biglake.catalogs.get
    • biglake.tables.get
    • biglake.tables.list
    • biglake.databases.get
    • biglake.databases.list
    • Storage Object Viewer (for GCS buckets containing Iceberg data)
  3. BigLake Catalog created in your GCP project:

    gcloud alpha biglake catalogs create CATALOG_NAME \
    --location=REGION \
    --project=PROJECT_ID

Configuration

BigLake authentication uses Application Default Credentials (ADC). DataHub provides automatic OAuth scope fixing for seamless integration.

Setup:

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
export GCP_PROJECT_ID=your-project-id
export GCS_WAREHOUSE_BUCKET=your-bucket-name

Recipe:

source:
  type: iceberg
  config:
    env: dev
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: "" # End-user credentials mode
        connection:
          timeout: 120
          retry:
            total: 5
            backoff_factor: 0.3

Key Configuration Parameters:

  • auth.type: google - Uses Application Default Credentials
  • auth.google.scopes - OAuth scopes required for BigLake access
  • header.x-goog-user-project - Specifies the GCP project for billing
  • header.X-Iceberg-Access-Delegation: "" - Uses end-user credentials mode

How Authentication Works

When using auth.type: google with explicit scopes, the connector:

  1. Discovers credentials using Google Cloud's Application Default Credentials (ADC) chain:

    • Environment Variable: GOOGLE_APPLICATION_CREDENTIALS pointing to service account JSON (most common)
    • gcloud CLI: Credentials from gcloud auth application-default login
    • GCE/GKE Metadata Server: Automatic when running on Google Cloud infrastructure
    • Workload Identity: Automatic when using GKE Workload Identity
  2. Uses explicit OAuth scopes: The auth.google.scopes configuration ensures the correct cloud-platform scope is used for BigLake access

    • PyIceberg passes the scopes directly to google.auth.default()
    • Requires google-auth library to be installed (included in DataHub dependencies)
  3. Handles token refresh: Automatic token refresh with no manual management needed

Using Managed Ingestion with Secrets

For production environments using Managed Ingestion (via the DataHub UI), you can securely store your GCP service account credentials as DataHub secrets instead of using environment variables.

Step 1: Create a Secret in DataHub

  1. Navigate to Settings > Secrets in the DataHub UI
  2. Click Create new secret
  3. Enter a name (e.g., BIGLAKE_SERVICE_ACCOUNT_JSON)
  4. Paste the entire contents of your GCP service account JSON file as the value
  5. Optionally add a description
  6. Click Create

Step 2: Reference the Secret in Your Recipe

Use the ${SECRET_NAME} syntax to reference your secret in the ingestion recipe:

source:
  type: iceberg
  config:
    env: prod
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            credentials_json: ${BIGLAKE_SERVICE_ACCOUNT_JSON}
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: ""

sink:
  type: datahub-rest
  config:
    server: ${DATAHUB_GMS_URL}
    token: ${DATAHUB_GMS_TOKEN}

The secret will be automatically resolved at runtime when the ingestion executes.

Alternative: Using Structured Credentials

You can also use individual secrets for each credential component, which provides better validation:

source:
  type: iceberg
  config:
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            credentials:
              project_id: ${GCP_PROJECT_ID}
              private_key_id: ${GCP_PRIVATE_KEY_ID}
              private_key: ${GCP_PRIVATE_KEY}
              client_email: ${GCP_CLIENT_EMAIL}
              client_id: ${GCP_CLIENT_ID}
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: ${GCP_PROJECT_ID}
        header.X-Iceberg-Access-Delegation: ""

Create these individual secrets in DataHub:

  • GCP_PROJECT_ID
  • GCP_PRIVATE_KEY_ID
  • GCP_PRIVATE_KEY (the private key value from your service account JSON)
  • GCP_CLIENT_EMAIL
  • GCP_CLIENT_ID

Step 3: Deploy via DataHub UI

  1. Navigate to Ingestion > Create new source
  2. Select Iceberg as the source type
  3. Paste your recipe configuration with secret references
  4. Configure a schedule (optional)
  5. Click Save and Run

Using Vended Credentials (Optional)

Important: Vended credentials require your BigLake catalog to be configured with CREDENTIAL_MODE_SERVICE_ACCOUNT. Most BigLake catalogs use CREDENTIAL_MODE_END_USER by default, which does not support vended credentials.

If you get an error stating "X-Iceberg-Access-Delegation header must not contain vended-credentials when credential mode is CREDENTIAL_MODE_END_USER", your catalog doesn't support this feature. Use the standard configuration with header.X-Iceberg-Access-Delegation: "" instead.

For catalogs that support vended credentials, set header.X-Iceberg-Access-Delegation: vended-credentials:

source:
  type: iceberg
  config:
    env: dev
    catalog:
      my_biglake_catalog:
        type: rest
        uri: https://biglake.googleapis.com/iceberg/v1/restcatalog
        warehouse: gs://my-bucket
        auth:
          type: google
          google:
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
        header.x-goog-user-project: my-project
        header.X-Iceberg-Access-Delegation: vended-credentials # Only for CREDENTIAL_MODE_SERVICE_ACCOUNT

When to use vended credentials:

  • Running ingestion from environments without direct GCS access
  • Implementing fine-grained access control through BigLake
  • Avoiding long-lived service account keys

BigLake will generate a short-lived service account token scoped to the specific tables being accessed.

Troubleshooting

Error: "invalid_scope: Invalid OAuth scope or ID token audience provided"

This error occurs when OAuth scopes are not properly configured. To fix:

  • Ensure auth.google.scopes is set to ["https://www.googleapis.com/auth/cloud-platform"] in your configuration
  • Verify google-auth library is installed: pip install google-auth
  • Check that GOOGLE_APPLICATION_CREDENTIALS points to a valid service account JSON file

Error: "X-Iceberg-Access-Delegation header must not contain vended-credentials when credential mode is CREDENTIAL_MODE_END_USER"

  • Your BigLake catalog uses end-user credentials mode, which doesn't support vended credentials
  • Solution: Use header.X-Iceberg-Access-Delegation: "" (empty string) in your configuration
  • Vended credentials only work with CREDENTIAL_MODE_SERVICE_ACCOUNT

Error: "Authentication failed"

  • Verify ADC is configured: gcloud auth application-default print-access-token
  • Check service account has required permissions
  • Ensure BigLake API is enabled: gcloud services list --enabled | grep biglake

Error: "Catalog not found"

  • Verify catalog exists: gcloud alpha biglake catalogs list --location=REGION --project=PROJECT
  • Check URI format: https://biglake.googleapis.com/v1/projects/{PROJECT}/locations/{REGION}/catalogs/{CATALOG}

Error: "Permission denied on GCS warehouse"

  • Grant Storage Object Viewer role to service account:
    gsutil iam ch serviceAccount:SA_EMAIL:roles/storage.objectViewer gs://BUCKET_NAME

Error: "User project header required"

  • Ensure header.x-goog-user-project is set to your GCP project ID

SQL catalog + Azure DLS as the warehouse

This example targets PostgreSQL as the sql-type Iceberg catalog and uses Azure Data Lake Storage (ADLS) as the warehouse.

source:
  type: "iceberg"
  config:
    env: dev
    catalog:
      demo:
        type: sql
        uri: postgresql+psycopg2://user:password@sqldatabase.postgres.database.azure.com:5432/icebergcatalog
        adlfs.tenant-id: <Azure tenant ID>
        adlfs.account-name: <Azure storage account name>
        adlfs.client-id: <Azure Client/Application ID>
        adlfs.client-secret: <Azure Client Secret>

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

Source Concept | DataHub Concept | Notes
iceberg | Data Platform |
Table | Dataset | An Iceberg table is registered inside a catalog using a name, where the catalog is responsible for creating, dropping and renaming tables. Catalogs manage a collection of tables that are usually grouped into namespaces. The name of a table is mapped to a Dataset name. If a Platform Instance is configured, it will be used as a prefix: <platform_instance>.my.namespace.table (see the example after this table).
Table property | User (a.k.a. CorpUser) | The value of a table property can be used as the name of a CorpUser owner. The table property name can be configured with the source option user_ownership_property.
Table property | CorpGroup | The value of a table property can be used as the name of a CorpGroup owner. The table property name can be configured with the source option group_ownership_property.
Table parent folders (excluding warehouse catalog location) | Container | Available in a future release.
Table schema | SchemaField | Maps to the fields defined within the Iceberg table schema definition.
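
For example (names are illustrative), with the Platform Instance below, a table my.namespace.table would be ingested as a Dataset named my_instance.my.namespace.table:

source:
  type: "iceberg"
  config:
    platform_instance: my_instance           # illustrative instance name
    catalog:
      my_catalog:                            # placeholder connection details
        type: "rest"
        uri: "http://iceberg-catalog:8181"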

Troubleshooting

Exceptions while increasing processing_threads

Each processing thread will open several files/sockets to download manifest files from blob storage. If you experience exceptions when increasing the processing_threads configuration parameter, try increasing the limit of open files (e.g. using ulimit on Linux).

DataHub Iceberg REST Catalog

DataHub also implements the Iceberg REST Catalog. See the Iceberg Catalog documentation for more details.

Code Coordinates

  • Class Name: datahub.ingestion.source.iceberg.iceberg.IcebergSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Iceberg, feel free to ping us on our Slack.