@MauricioS Great question!
Databricks Delta Live Tables (DLT) pipelines are very flexible, but by default the target schema specified in the pipeline configuration (the target or schema setting) is fixed. That said, you can enable dynamic schema selection by reading a pipeline parameter in your code and using it to build the table name. Here's an example:
import dlt
import pyspark.sql.functions as F

# Retrieve the target schema from pipeline parameters,
# falling back to a default schema if the parameter is not provided
target_schema = spark.conf.get("pipeline.country_schema", "default_schema")

# Define the table dynamically in the chosen target schema
@dlt.table(
    name=f"{target_schema}.my_table",
    comment="Example of dynamic schema selection",
    path=f"/mnt/delta/{target_schema}/my_table"
)
def my_table():
    # Read source data
    df = spark.read.format("delta").load("path_to_source_data")
    # Add a processed date column
    return df.withColumn("processed_date", F.current_date())
This approach lets you adjust the target schema per pipeline run, while the default value keeps the code robust when the parameter is missing. Make sure the parameter (pipeline.country_schema) is defined in the pipeline configuration and that appropriate permissions are in place for all schemas being used.
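For reference, here's a minimal sketch of how that parameter might appear in the pipeline's JSON settings under the configuration block (the pipeline name and the value country_us are just illustrative placeholders):

{
  "name": "my_country_pipeline",
  "configuration": {
    "pipeline.country_schema": "country_us"
  }
}

You can set the same key-value pair in the pipeline UI under Advanced > Configuration, or override it per run if you trigger the pipeline programmatically.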