Data Engineering
Delta Live Table name dynamically

Phani1
Valued Contributor

Hi Team,

Can we pass Delta Live Table name dynamically [from a configuration file, instead of hardcoding the table name]? We would like to build a metadata-driven pipeline.

7 REPLIES

Hubert-Dudek
Esteemed Contributor III

Yes, it is possible. Just pass a variable to @dlt.table(name=variable):

import dlt

# Generate one table per name; @dlt.table registers the table when the
# decorator runs, so defining the function inside the loop works.
for name in ['table1', 'table2']:
    @dlt.table(name=name)
    def delta_live_table():
        return spark.range(1, 10)
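For a metadata-driven pipeline, the table names can come from a configuration file instead of an inline list. A minimal sketch, assuming a JSON config with a "tables" key (the config shape and the `make_table` factory are illustrative, not part of the dlt API; a factory is used so each generated function binds its own name, and in a real pipeline the inner function would be decorated with @dlt.table(name=name) and return a DataFrame):

```python
import json

# Hypothetical metadata config; in practice this could be loaded from a
# file, e.g. json.load(open("tables.json")).
config_json = '{"tables": ["bronze_orders", "bronze_customers"]}'
table_names = json.loads(config_json)["tables"]

def make_table(name):
    # Factory binds `name` per table. In a real DLT pipeline:
    #   @dlt.table(name=name)
    #   def delta_live_table(): return spark.range(1, 10)
    def delta_live_table():
        return f"definition for {name}"  # placeholder for a DataFrame
    return delta_live_table

tables = {name: make_table(name) for name in table_names}
```

Each entry in `tables` is an independent table definition bound to its own name, avoiding the late-binding pitfall of referencing the loop variable directly inside the function body.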

Phani1
Valued Contributor

Thanks, @Hubert Dudek, for your quick response. I am now able to create DLT tables dynamically.

Can we pass the database name while creating DLT tables, instead of setting it in the pipeline configuration?

Error message:

org.apache.spark.sql.AnalysisException: Materializing tables in custom schemas is not supported. Please remove the database qualifier from table 'default.Delta_table3'.

DanR
New Contributor II

I hope this limitation is resolved; storing everything from one pipeline in a single database is not ideal. Preferably I'd like to store bronze-level data in its own database rather than mix it with silver/gold.

Noopur_Nigam
Valued Contributor II

Hi @Dan Richardson​, there is already a feature request for this limitation in the queue. The feature request ID is DB-I-5073. We do not have an ETA on it yet; it will be implemented once prioritized. Please note that you won't be able to access the feature request, as it is internal to Databricks; however, you can always follow up with the above ID for a status update.

Hi @Dan Richardson​,

Just a friendly follow-up: do you have any further questions, or did Noopur's response help you? Please let us know.

cpayne_vax
New Contributor III

Hi, have there been any updates on this feature or internal ticket? This would be a great addition. Thanks!

Azure_dbks_eng
New Contributor II

I am observing the same error when adding a dataset.tablename qualifier.

org.apache.spark.sql.catalyst.ExtendedAnalysisException: Materializing tables in custom schemas is not supported. Please remove the database qualifier from table 'streaming.dlt_read_test_files'

# Note: the decorator must be lowercase `dlt`, and `F` requires an import.
import dlt
from pyspark.sql import functions as F

@dlt.table(name="streaming.dlt_read_test_files")
def raw_data():
    return spark.readStream.format("delta").load(abfss_location)

@dlt.table(name="streaming.dlt_clean_test_files")
def filtered_data():
    return dlt.readStream("streaming.dlt_read_test_files").select(F.col("data"))
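As the error message says, the schema has to come from the pipeline's target setting, and the names passed to @dlt.table must be bare. A small sketch of a helper that strips any database qualifier from configured names before they reach the decorator (the helper name is illustrative, not part of the dlt API):

```python
def unqualified(table_name: str) -> str:
    """Return the bare table name, dropping any 'db.' prefix.

    Illustrative helper: normalizes configured names so @dlt.table
    never receives a schema-qualified name, which DLT rejects with
    'Materializing tables in custom schemas is not supported'.
    """
    return table_name.rsplit(".", 1)[-1]
```

For example, `unqualified("streaming.dlt_read_test_files")` yields `"dlt_read_test_files"`, which can then be passed as `@dlt.table(name=...)` while the `streaming` schema is set once in the pipeline configuration.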


Do we have any update on this topic?
