Data Engineering

Forum Posts

by whleeman (New Contributor III)
  • 551 Views
  • 1 reply
  • 0 kudos

How to get the table reference of cached query results?

Databricks caches query results for 24 hours. I would like to access those query results as if they were a table so that I can post-process them, for example by running another query against them. The ask is similar to Snowflake's RESULT_SCAN: https://docs.snowflake.com...

Latest Reply: Anonymous (Not applicable), 0 kudos

Hi @MAN LI, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer for you. Thanks.
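As far as I know, Databricks does not expose its result cache as a queryable object the way Snowflake's RESULT_SCAN does, so the usual workaround is to materialize the result yourself as a temporary view and query that. A minimal PySpark sketch, using a hypothetical events table:

# Run the original query and keep a handle to the result.
df = spark.sql("SELECT user_id, COUNT(*) AS cnt FROM events GROUP BY user_id")

# Register the result as a temporary view so later queries can treat it
# like a table for the rest of the session.
df.createOrReplaceTempView("last_result")

# Optionally cache it so follow-up queries do not recompute the original.
spark.catalog.cacheTable("last_result")

# Post-process the result with another query.
spark.sql("SELECT * FROM last_result WHERE cnt > 10").show()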

by Ajay-Pandey (Esteemed Contributor III)
  • 1018 Views
  • 1 reply
  • 5 kudos

Notebook cell output results limit increased: 10,000 rows or 2 MB

Hi all, Databricks now shows the first 10,000 rows of cell output instead of 1,000. That will reduce re-execution time when working with smaller datasets that have row counts between 100...

Latest Reply: Kaniz (Community Manager), 5 kudos

Thank you @Ajay Pandey for sharing the good news with your peers.
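To see the new limit in action, here is a minimal sketch, assuming a Databricks notebook where the display() helper is available:

# Build a DataFrame larger than the new rendering limit.
df = spark.range(15000).toDF("id")

# display() now renders the first 10,000 rows (or up to 2 MB of output),
# up from the previous 1,000-row preview.
display(df)

# The DataFrame itself is never truncated; only the rendered preview is.
print(df.count())  # 15000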

by Digan_Parikh (Valued Contributor)
  • 6744 Views
  • 2 replies
  • 3 kudos

Resolved! Default Query Limit 1000

By default, we return up to 1000 query results when a user runs a cell in Databricks. E.g., if you run display(storeData) and you have ten million customers, the UI will show the first 1000 results. If you graph that by age of customer, similarly...

Latest Reply: User16805453151 (New Contributor III), 3 kudos

This is simple in Databricks SQL: just uncheck LIMIT 1000 in the drop-down.
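In a notebook (as opposed to Databricks SQL), the cap applies only to what display() renders, not to the DataFrame itself, so aggregating before displaying sidesteps it entirely. A minimal PySpark sketch, reusing storeData from the question and assuming it has an age column:

# The grouped result is small even if storeData has ten million rows,
# so a chart of this output reflects every customer, not just the first 1000.
by_age = storeData.groupBy("age").count()
display(by_age)

# Row-level work beyond the preview can operate on the DataFrame directly.
print(storeData.filter("age > 65").count())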

by User16790091296 (Contributor II)
  • 654 Views
  • 1 reply
  • 0 kudos

How do we get logs on read queries from delta lake in Databricks?

I've tried with:

df.write.mode("overwrite").format("com.databricks.spark.csv").option("header", "true").csv(dstPath)

and

df.write.format("csv").mode("overwrite").save(dstPath)

but now I have 10 CSV files, and I need one file that I can name.

Latest Reply: Ryan_Chynoweth (Honored Contributor III), 0 kudos

The title seems different from the question in the body. I am assuming that you are asking how to get only a single CSV file when writing? To do so, use coalesce:

df.coalesce(1).write.format("csv").mode("overwrite").save(dstPath)

This...
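One caveat: even with coalesce(1), Spark writes a directory at dstPath containing a single part-*.csv file, so giving the output an exact name takes one more step. A minimal sketch, assuming the dbutils helper available in Databricks notebooks; the destination path is a hypothetical placeholder:

# Write through a single task so only one part file is produced.
(df.coalesce(1)
   .write.format("csv")
   .mode("overwrite")
   .option("header", "true")
   .save(dstPath))

# Spark names the file part-00000-<uuid>.csv inside dstPath;
# locate it and copy it to the exact filename you want.
part = [f.path for f in dbutils.fs.ls(dstPath) if f.name.startswith("part-")][0]
dbutils.fs.cp(part, "/mnt/output/report.csv")  # hypothetical destination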
