Data persistence, Dataframe, and Delta

Vik1
New Contributor II

I am new to databricks platform.

  1. What is the best way to keep data persistent so that after I restart the cluster I don't need to run all the code again? That way I could simply continue developing my notebook with the cached data.
  2. I have created many DataFrames and I want to save them as Delta tables using the code
dataFrame.to_delta('/dbfs/Projects/', index_col='index')
  3. When I then list the files, I see a table with two columns: path and name. The path column contains paths like dbfs:/dbfs/Projects/part-00000-xxxx-snappy.parquet, and the name column holds only the filename part. How will I later query those tables if the DataFrame name is not saved with the filename? Do I have to query by the extremely long filename?
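One common way to keep many DataFrames distinguishable is to write each one to its own named subfolder, so every Delta table is identified by a readable name rather than a part-file name. A minimal sketch (the base path, frame names, and helper below are hypothetical, not from this thread; the write itself assumes pandas-on-Spark DataFrames on a Databricks cluster):

```python
# Sketch: give each DataFrame its own named Delta folder so tables stay
# identifiable. Base path and names are hypothetical examples.
def delta_path_for(name: str, base: str = "/dbfs/Projects") -> str:
    """Build a per-DataFrame Delta folder path, e.g. /dbfs/Projects/sales."""
    return f"{base.rstrip('/')}/{name}"

def save_frames_as_delta(frames: dict) -> None:
    """Write each pandas-on-Spark DataFrame to its own Delta folder.
    Intended to run on a Databricks cluster where pyspark is available."""
    for name, df in frames.items():
        df.to_delta(delta_path_for(name), index_col="index")

# Example target paths:
# delta_path_for("sales")     -> "/dbfs/Projects/sales"
# delta_path_for("customers") -> "/dbfs/Projects/customers"
```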

1 ACCEPTED SOLUTION

Accepted Solutions

-werners-
Esteemed Contributor III

you can just use spark.read.format("delta").load("path to the parent folder of the _delta_log folder")

or save it as a table and read that table.

https://docs.microsoft.com/en-us/azure/databricks/delta/quick-start
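Concretely, the two options above might look like the sketch below. The folder path and table name are placeholders, and the Spark calls are meant to run on a Databricks cluster; the small helper shows which folder `load()` expects given one of the long part-file paths:

```python
# Sketch of the two read options; paths and the table name are
# placeholders, not values from this thread.
import posixpath

def delta_table_root(data_file_path: str) -> str:
    """Given one part-file path inside a Delta folder, return the folder
    itself: that parent folder (the one containing _delta_log/) is what
    spark.read.format("delta").load() expects."""
    return posixpath.dirname(data_file_path)

def read_examples():
    """Illustration only; runs on a Databricks cluster."""
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()

    # Option 1: load the Delta folder directly.
    df = spark.read.format("delta").load("dbfs:/Projects/sales")

    # Option 2: register it once as a named table, then query by name.
    df.write.format("delta").mode("overwrite").saveAsTable("sales")
    return spark.table("sales")
```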


4 REPLIES

Anonymous
Not applicable

Hi @Vivek Ranjan​! My name is Piper, and I'm a moderator for the community. Welcome to Databricks and the community! Thank you for your question. We give our members time to respond to questions before we circle back.

Thanks in advance for your patience and best wishes on your Databricks journey.

Anonymous
Not applicable

@Vivek Ranjan​ - Does werners' response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly?

Anonymous
Not applicable

Hey there @Vivek Ranjan​ 

Hope you are doing great!

Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you.

Thanks!
