Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

User16826992666
by Valued Contributor
  • 2268 Views
  • 1 reply
  • 0 kudos

What is the default location where dataframes are written if I don't specify a location?

If I save a dataframe without specifying a location, where will it end up?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

You can't save a dataframe without specifying a location. If you are using the saveAsTable API, the table will be created in the Hive warehouse location. The default location is /user/hive/warehouse.
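A minimal sketch of how to check this yourself (the table name `events_demo` is hypothetical; `spark` is the SparkSession Databricks predefines in notebooks):

```python
df = spark.range(10)  # small demo DataFrame

# No path is given, so this becomes a managed table stored under the
# Hive warehouse directory (dbfs:/user/hive/warehouse by default).
df.write.format("delta").saveAsTable("events_demo")

# DESCRIBE DETAIL reports the table's actual storage location.
spark.sql("DESCRIBE DETAIL events_demo").select("location").show(truncate=False)
```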

User16826992666
by Valued Contributor
  • 1905 Views
  • 1 reply
  • 2 kudos

Why would I make a deep clone of a Delta table vs reading the table and writing a copy to a new location?

It seems like with both techniques I would end up with a copy of my table. Trying to understand when I should be using a deep clone.

Latest Reply
brickster_2018
Databricks Employee
  • 2 kudos

A deep clone is the recommended way, as it retains the history of the table. A DEEP CLONE is also faster than the read-write approach.
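For reference, a deep clone is a single SQL statement; a minimal sketch, assuming hypothetical table names:

```python
# One-statement copy: DEEP CLONE copies both the data files and the
# table metadata; re-running it syncs the target with the source.
spark.sql("CREATE OR REPLACE TABLE sales_copy DEEP CLONE sales")
```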

User16826992666
by Valued Contributor
  • 2178 Views
  • 1 reply
  • 0 kudos

How can I run OPTIMIZE on a table if I am streaming to it 24/7?

I have a table that I need to be continuously streaming into. I know it's best practice to run Optimize on my tables periodically. But if I never stop writing to the table, how and when can I run OPTIMIZE against it?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

If the streaming job is making blind appends to the Delta table, then it's perfectly fine to run the OPTIMIZE query in parallel. However, if the streaming job is performing MERGE or UPDATE, it can conflict with the OPTIMIZE operations. In such cases w...
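A minimal sketch of the parallel OPTIMIZE, assuming a hypothetical append-only table `events_bronze`:

```python
# Run from a separate job while the stream keeps appending.
# Safe when the stream only appends; MERGE/UPDATE writers can
# conflict with OPTIMIZE's transactions.
spark.sql("OPTIMIZE events_bronze")

# Optionally co-locate data on a commonly filtered column.
spark.sql("OPTIMIZE events_bronze ZORDER BY (event_date)")
```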

User16826987838
by Contributor
  • 2328 Views
  • 1 reply
  • 0 kudos
Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

delta.logRetentionDuration - 30 days
delta.deletedFileRetentionDuration - 7 days
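These defaults can be inspected or overridden per table; a minimal sketch (the table name `events` is hypothetical):

```python
# Check the table's current properties.
spark.sql("SHOW TBLPROPERTIES events").show(truncate=False)

# Override the retention defaults for one table.
spark.sql("""
    ALTER TABLE events SET TBLPROPERTIES (
        'delta.logRetentionDuration' = 'interval 60 days',
        'delta.deletedFileRetentionDuration' = 'interval 14 days'
    )
""")
```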

brickster_2018
by Databricks Employee
  • 1179 Views
  • 1 reply
  • 0 kudos

Resolved! Best practices for DStream application in Databricks

I do not see any best-practice guide for DStream applications in the Databricks docs. Any references?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

DStream is unsupported by Databricks. Databricks strongly recommends migrating DStream applications to Structured Streaming: https://kb.databricks.com/streaming/dstream-not-supported.html
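As a hedged sketch of what the migration target looks like, using a demo `rate` source and hypothetical sink names:

```python
# Structured Streaming version of a typical DStream pipeline:
# declarative source -> transformation -> checkpointed sink.
events = (
    spark.readStream
         .format("rate")   # demo source; swap in kafka, cloudFiles, etc.
         .load()
         .selectExpr("value AS id", "timestamp")
)

query = (
    events.writeStream
          .format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/demo")  # hypothetical path
          .toTable("dstream_migration_demo")  # hypothetical table name
)
```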

brickster_2018
by Databricks Employee
  • 1122 Views
  • 1 reply
  • 1 kudos

Optimize Command not performing the bin packing

I have a daily OPTIMIZE job running; however, the number of files in storage is not going down. It looks like OPTIMIZE is not helping to reduce the file count.

Latest Reply
brickster_2018
Databricks Employee
  • 1 kudos

The files are not physically removed from storage by the OPTIMIZE command. A VACUUM command has to be executed to remove them.
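A minimal sketch of the two-step cleanup, assuming a hypothetical table `events`:

```python
# OPTIMIZE compacts small files; the old files are only dereferenced
# in the Delta transaction log, not deleted from storage.
spark.sql("OPTIMIZE events")

# VACUUM physically deletes unreferenced files older than the
# retention window (the default is 7 days = 168 hours).
spark.sql("VACUUM events RETAIN 168 HOURS")
```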

User16790091296
by Contributor II
  • 16110 Views
  • 1 reply
  • 0 kudos

How to run multiple Spark streaming applications on a Databricks cluster?

I started working on Databricks. I need to migrate a few streaming jobs from Ambari to Databricks. I deployed one job using a jar and it is working fine. But when I deploy the second job I faced an error: "multiple spark streaming context not allowed". ...

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

You can run multiple streaming applications on Databricks clusters. By default, these would run in the same fair scheduling pool. To enable multiple streaming queries to execute jobs concurrently and to share the cluster efficiently, you can set the q...
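The reply is truncated, but the usual mechanism is the fair-scheduler pool local property; a sketch with hypothetical pool, path, and table names:

```python
# Two independent streaming DataFrames (rate source keeps this runnable).
orders = spark.readStream.format("rate").load()
clicks = spark.readStream.format("rate").load()

# Put each query in its own fair-scheduler pool so they share the
# cluster concurrently instead of queueing in the default pool.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "orders_pool")
orders_q = (orders.writeStream.format("delta")
            .option("checkpointLocation", "/tmp/ckpt/orders")
            .toTable("orders_sink"))

spark.sparkContext.setLocalProperty("spark.scheduler.pool", "clicks_pool")
clicks_q = (clicks.writeStream.format("delta")
            .option("checkpointLocation", "/tmp/ckpt/clicks")
            .toTable("clicks_sink"))
```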

MoJaMa
by Databricks Employee
  • 1179 Views
  • 1 reply
  • 0 kudos
Latest Reply
MoJaMa
Databricks Employee
  • 0 kudos

We still require a single user to be the owner, but you can set a group to have CAN_MANAGE, which unblocks most of the necessary updates. It is released in all Premium workspaces that have Jobs ACLs. The official OWNER is whose identity is used to crea...
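A hedged sketch of granting a group CAN_MANAGE through the Permissions REST API (host, token, job ID, and group name are all placeholders):

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder
JOB_ID = "123"                                           # placeholder job ID

# PATCH adds/updates entries in the job's ACL without replacing it;
# the single-user owner stays in place.
resp = requests.patch(
    f"{HOST}/api/2.0/permissions/jobs/{JOB_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "access_control_list": [
            {"group_name": "data-eng", "permission_level": "CAN_MANAGE"}
        ]
    },
)
resp.raise_for_status()
```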

User16826992666
by Valued Contributor
  • 1327 Views
  • 0 replies
  • 0 kudos

If I write functionally equivalent code in PySpark and Koalas, will they end up evaluating to the same execution plan?

I am wondering how similar the backend execution of the two APIs is. If I have code that does the same operations written in both styles, is there any functional difference between them when it comes to execution?
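One way to check empirically is to compare the physical plans both APIs compile to; a sketch assuming the `databricks.koalas` package (the predecessor of `pyspark.pandas`):

```python
import databricks.koalas as ks

# The same filter expressed through both APIs.
pyspark_df = spark.range(1000).filter("id % 2 = 0")

kdf = ks.range(1000)
koalas_df = kdf[kdf.id % 2 == 0]

# Print the physical plan each one compiles down to.
pyspark_df.explain()
koalas_df.spark.explain()
```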

MoJaMa
by Databricks Employee
  • 1403 Views
  • 1 reply
  • 1 kudos
Latest Reply
MoJaMa
Databricks Employee
  • 1 kudos

Only HTTPS is supported right now. If SSH is required for your use case, please let your Databricks rep know and reference the Idea DB-I-3697 so that it can be prioritized.

MoJaMa
by Databricks Employee
  • 1872 Views
  • 1 reply
  • 0 kudos
Latest Reply
MoJaMa
Databricks Employee
  • 0 kudos

You can clone any repo; the security concern is usually around proprietary code exfiltration, whether intentional or accidental.

