
Schema Changes to External Table

Dp15
Contributor

Hi,
I have an external table that reads data from an S3 bucket. The bucket is expected to receive new files frequently, sometimes with changes to the underlying schema. I used the REFRESH TABLE command to load new files from the S3 location, and it worked fine. But when there are schema changes, whether columns are added or removed, the refresh does not pick them up.

Is it possible to refresh the metadata of the external table when there are schema changes? Should I alter the table each time the schema changes? Could someone please help?
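For context, a minimal sketch of the setup being described; the table name sales_raw, the file format, and the S3 path are hypothetical:

-- External table over an S3 prefix that keeps receiving new files.
CREATE TABLE sales_raw
USING PARQUET
LOCATION 's3://my-bucket/sales/';

-- Picks up newly added files at the location, but (as described above) not schema changes.
REFRESH TABLE sales_raw;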

 

 


8 REPLIES

shan_chandra
Esteemed Contributor

@Dp15 - Please refer to the illustration below (a sketch follows the list).

  • Consider a streaming pipeline from table A to table B.
  • Without column mapping:
    1. You can only ADD COLUMN on table A.
    2. If spark.databricks.delta.schema.autoMerge.enabled is set, any column added to A (say, c) will be added to table B as well (assuming no transformation or filtering).
    3. This is considered safe because adding a column to a table cannot cause data loss or duplication; e.g., a SELECT * FROM table B will simply show NULLs in c for all of B's historical data.
    4. If that conf is not set, the stream fails on the schema change anyway.
  • With column mapping:
    1. For ADD COLUMN it is the same story, so when you use schemaTrackingLocation and ADD COLUMN, you are not required to set the allowDropOrRenameColumn SQL conf.

Please refer to the doc below for additional details: https://docs.databricks.com/en/delta/delta-column-mapping.html#streaming-with-column-mapping-and-sch...
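A minimal sketch of the without-column-mapping case above, assuming hypothetical Delta tables A and B with a streaming write from A to B already running:

-- Let the streaming write into B evolve B's schema automatically (session conf).
SET spark.databricks.delta.schema.autoMerge.enabled = true;

-- Add a column to the source table A; the stream propagates it to B.
ALTER TABLE A ADD COLUMN c STRING;

-- Historical rows in B now simply read as NULL for c.
SELECT * FROM B;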

 

Dp15
Contributor

Hi @shan_chandra, how about deletions from the external location? And what if I am not using a streaming table?

 

 

 

shan_chandra
Esteemed Contributor

@Dp15 - you can drop a column manually using the below:

ALTER TABLE table_name DROP COLUMN col_name

1. Note that dropping a column from the metadata does not delete the underlying data for that column in the files.

2. To purge the dropped column's data, use REORG TABLE to rewrite the files.

3. Then use VACUUM to physically delete the files that still contain the dropped column's data.
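A minimal end-to-end sketch of those three steps on a hypothetical Delta table named events (note that DROP COLUMN on a Delta table requires column mapping to be enabled first):

-- Column mapping must be enabled before DROP COLUMN is allowed on a Delta table.
ALTER TABLE events SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
);

-- Removes the column from the table metadata only; data files are untouched.
ALTER TABLE events DROP COLUMN obsolete_col;

-- Rewrites the files so they no longer contain the dropped column's data.
REORG TABLE events APPLY (PURGE);

-- Physically deletes the superseded files, subject to the retention period.
VACUUM events;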

Reference:

https://docs.databricks.com/en/delta/delta-column-mapping.html#drop-columns

https://docs.databricks.com/en/delta/update-schema.html#explicitly-update-schema-to-drop-columns

 

Dp15
Contributor

Hi @shan_chandra, this DROP works for a Delta table that is a managed table; however, it does not work for an external table. I am looking specifically at schema changes on an external table: a refresh might load new metadata into the external table, but when the schema is modified, only column additions are picked up; dropping a column has not worked for me. Correct me if I am wrong here.

ACCEPTED SOLUTION

shan_chandra
Esteemed Contributor

@Dp15 - yes, you are correct. Dropping a column from a managed table in Databricks works differently than from an external table (where the schema is inferred from the underlying source). The hack below can help, AFAIK; a sketch follows the steps. Please let me know if this works for you.

1. CREATE OR REPLACE a new external table B with the new schema (the set of columns you want to keep) and a new data source path.
2. INSERT INTO the new table B as SELECT (required columns) FROM table A (the old table).
3. DROP TABLE A.
4. ALTER TABLE: rename table B to table A.
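A minimal sketch of those four steps, assuming hypothetical table names (table_a, table_b), columns, and S3 paths:

-- 1. New external table with only the columns to keep, at a new location.
CREATE OR REPLACE TABLE table_b (id BIGINT, name STRING)
USING DELTA
LOCATION 's3://my-bucket/data_v2/';

-- 2. Copy the required columns from the old table.
INSERT INTO table_b SELECT id, name FROM table_a;

-- 3. Drop the old table (for an external table this removes only the metadata).
DROP TABLE table_a;

-- 4. Rename the new table to the old name.
ALTER TABLE table_b RENAME TO table_a;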

Dp15
Contributor

@shan_chandra This worked, thank you!

Kaniz_Fatma
Community Manager

Hi @Dp15, thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation, and let us know if you need any further assistance!
 

shan_chandra
Esteemed Contributor

@Dp15 - I am glad it worked. Happy to help!!!
