Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

CTAS in @dlt

rt-slowth
Contributor

The Delta table created from the DataFrame returned by @dlt.create_table is confirmed to be overwritten when checked with the DESCRIBE HISTORY command.
I want this to be handled as a CTAS, or CREATE TABLE AS SELECT, instead. How can I do this in Python code?

 

1 ACCEPTED SOLUTION


Hi @rt-slowth, here is an example of how you can do this:

python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assume df is your DataFrame
df = spark.sql("SELECT * FROM your_table")

# Write DataFrame to a new Delta table
df.write.format("delta").save("/path/to/new/delta/table")

In this example, replace "SELECT * FROM your_table" with your SELECT query and "/path/to/new/delta/table" with the path where you want to create the new Delta table. Please note that the DataFrame API's write method raises an error by default if data already exists at the specified path; to overwrite it, use the mode method: df.write.format("delta").mode("overwrite").save("/path/to/new/delta/table"). If you want to append the data instead, use: df.write.format("delta").mode("append").save("/path/to/new/delta/table").



Kaniz_Fatma
Community Manager

Hi @rt-slowth , In Python, you can use the DeltaTableBuilder and DeltaColumnBuilder APIs to create new Delta tables programmatically, similar to a SQL CREATE TABLE AS SELECT (CTAS) statement.

rt-slowth
Contributor

@Kaniz_Fatma 

If you don't mind me asking, can I see the code or documentation for this? I searched for DeltaTableBuilder and only found Scala-related material.


Hi @rt-slowth, To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions. Thank you for your participation, and let us know if you need further assistance! 
 

siddhathPanchal
Contributor

Hi @rt-slowth, you can review the open-source Delta codebase to learn more about the DeltaTableBuilder's implementation in Python:

https://github.com/delta-io/delta/blob/master/python/delta/tables.py
