Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Ajay-Pandey
by Esteemed Contributor III
  • 1716 Views
  • 1 reply
  • 5 kudos

Notebook cell output results limit increased - 10,000 rows or 2 MB

Hi all, Databricks now shows the first 10,000 rows of cell output instead of 1,000. That will reduce re-execution time while working with smaller datasets that have rows between 100...

Latest Reply
F_Goudarzi
New Contributor III
  • 5 kudos

Hi Ajay, is there any way to increase this limit? Thanks, Fatima
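
Nothing in the thread suggests the display() cap itself is configurable, but a minimal PySpark sketch of a common workaround is to pull a larger sample explicitly instead of relying on the rendered output (the table name below is a hypothetical placeholder):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.table("sales_db.transactions")  # hypothetical table, stand-in for your data

# toPandas() collects rows to the driver, so keep the limit modest;
# this sidesteps the notebook's rendered-output cap entirely.
sample_pdf = df.limit(50000).toPandas()
print(sample_pdf.shape)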

whleeman
by New Contributor III
  • 913 Views
  • 1 reply
  • 0 kudos

How to get the table reference of cached query results?

Databricks caches query results for 24 hours. I would like to access the query results as if they were a table so that I can post-process them, for example, run another query against them. The ask is similar to Snowflake RESULT_SCAN https://docs.snowflake.com...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @MAN LI, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.
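
The thread does not confirm a RESULT_SCAN equivalent in Databricks, but a hedged sketch of the usual workaround is to materialize the result yourself so later queries can reference it by name (the query and table names here are hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical query standing in for the result you want to reuse.
result = spark.sql("SELECT customer_id, SUM(amount) AS total FROM sales GROUP BY customer_id")

# Session-scoped temp view: queryable like a table for the rest of the session.
result.createOrReplaceTempView("last_result")
spark.sql("SELECT * FROM last_result WHERE total > 100").show()

# If the result must outlive the session, persist it as a managed table instead.
result.write.mode("overwrite").saveAsTable("default.last_result")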

Digan_Parikh
by Valued Contributor
  • 11061 Views
  • 2 replies
  • 3 kudos

Resolved! Default Query Limit 1000

By default, we return up to 1000 query results when a user runs a cell in Databricks. E.g., if you run display(storeData) and you have ten million customers, the UI will show the first 1000 results. If you graph that by age of customer, similarl...

Latest Reply
User16805453151
New Contributor III
  • 3 kudos

This is simple in Databricks SQL: just uncheck LIMIT 1000 in the drop-down.
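
For notebook charts specifically, a small sketch of the standard pattern, reusing the question's hypothetical storeData DataFrame: aggregate in Spark first so the chart reflects every row rather than only the first 1000 returned to the UI:

# Aggregating before display() keeps the result set small, so the 1000-row
# return limit no longer hides data: the chart covers all customers.
by_age = storeData.groupBy("age").count()  # assumes an "age" column exists

# display() is the Databricks notebook helper mentioned in the question.
display(by_age)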

1 More Reply
User16790091296
by Contributor II
  • 918 Views
  • 1 reply
  • 0 kudos

How do we get logs on read queries from delta lake in Databricks?

I've tried with: df.write.mode("overwrite").format("com.databricks.spark.csv").option("header","true").csv(dstPath) and df.write.format("csv").mode("overwrite").save(dstPath) but now I have 10 CSV files, and I need a single file that I can name.

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 0 kudos

The header question seems different from your body question. I am assuming that you are asking how to get only a single CSV file when writing? To do so, you should use coalesce: df.coalesce(1).write.format("csv").mode("overwrite").save(dstPath) This...
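
A runnable sketch of that suggestion plus the rename step the question asks about, assuming a Databricks notebook (dbutils is only available there) and hypothetical paths:

dst_dir = "/tmp/export_dir"        # hypothetical directory Spark writes into
final_path = "/tmp/customers.csv"  # hypothetical final file name

# coalesce(1) funnels everything into one partition, so Spark emits one part file.
df.coalesce(1).write.format("csv").mode("overwrite").option("header", "true").save(dst_dir)

# Spark names the output part-00000-...; move it to the name you actually want.
part_file = [f.path for f in dbutils.fs.ls(dst_dir) if f.name.startswith("part-")][0]
dbutils.fs.mv(part_file, final_path)

Note that coalesce(1) pushes all data through a single task, so this pattern only makes sense for outputs small enough to write from one executor.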
