Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

jose_gonzalez
by Databricks Employee
  • 1450 Views
  • 1 replies
  • 0 kudos

How often should I vacuum my Delta table?

I would like to know how often I need to vacuum my Delta table to clean up old files.

Latest Reply
RonanStokes_DB
Databricks Employee
  • 0 kudos

The requirements for Vacuum will depend on your application needs and the rate of arrival of new data. Vacuuming removes old versions of data. If you need to be able to query earlier versions of data many months after the original ingest time, then i...
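A typical invocation looks like the following (the table name `events` is illustrative; 168 hours is Delta's default retention threshold):

```sql
-- Remove files no longer referenced by table versions older than 7 days
VACUUM events RETAIN 168 HOURS;

-- Preview which files would be deleted, without deleting anything
VACUUM events DRY RUN;
```

Retaining fewer hours than the default requires disabling Delta's retention-duration safety check, at the cost of losing time travel to the vacuumed versions.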

jose_gonzalez
by Databricks Employee
  • 2239 Views
  • 2 replies
  • 0 kudos

How to partition my Delta table?

I would like to follow best practices to partition my Delta table. Should I partition by unique ID or date?

Latest Reply
RonanStokes_DB
Databricks Employee
  • 0 kudos

Depending on the amount of data per partition, you may also want to consider partitioning by week, month, or quarter. The partitioning decision is often tied to the tiering model of data storage. For a Bronze ingest layer, the optimal partitioning is ...
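As a sketch of the date-based approach (table and column names here are hypothetical), partitioning by a date column rather than a high-cardinality unique ID avoids creating one tiny partition per row:

```sql
-- Partition by a low-cardinality date column, not by event_id
CREATE TABLE events (
  event_id   STRING,
  event_time TIMESTAMP,
  event_date DATE
)
USING DELTA
PARTITIONED BY (event_date);
```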

1 More Replies
ruslan
by Databricks Employee
  • 1130 Views
  • 1 replies
  • 0 kudos

Does Spark Structured Streaming support `OutputMode.Update` for Delta tables?

Does Spark Structured Streaming support `OutputMode.Update` for Delta tables?

Latest Reply
ruslan
Databricks Employee
  • 0 kudos

No, it's not supported, but you could use a MERGE statement inside of a forEachBatch streaming sink. Documentation on MERGE: https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-merge-into.html Documentation for arbitrary streaming ...
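A minimal sketch of that pattern (not runnable standalone; it assumes a running Spark session, a streaming DataFrame `df`, a join key `key`, and a Delta table named `target`, all of which are illustrative):

```python
# Sketch: upsert each micro-batch into a Delta table via MERGE
from delta.tables import DeltaTable

def upsert_to_delta(batch_df, batch_id):
    target = DeltaTable.forName(batch_df.sparkSession, "target")
    (target.alias("t")
        .merge(batch_df.alias("s"), "t.key = s.key")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

(df.writeStream
   .foreachBatch(upsert_to_delta)
   .outputMode("update")
   .start())
```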

patputnam-db
by Databricks Employee
  • 1535 Views
  • 1 replies
  • 0 kudos

When should Change Data Feed be used?

I have a customer (IHAC) with Change Data Capture data flowing into a Delta table. They would like to propagate these changes from this table into another table downstream. Is this a good application for using Change Data Feed?

Latest Reply
patputnam-db
Databricks Employee
  • 0 kudos

CDF simplifies the process of identifying the set of records that are updated, inserted, or deleted with each version of a Delta table. It helps to avoid having to implement 'custom' downstream filtering to identify these changes. This makes it an i...
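Concretely, the flow looks roughly like this (`source_table` and the version numbers are illustrative):

```sql
-- Enable CDF on an existing Delta table
ALTER TABLE source_table
SET TBLPROPERTIES (delta.enableChangeDataFeed = true);

-- Read row-level changes between two table versions; each row carries
-- a _change_type column (insert / update_preimage / update_postimage / delete)
SELECT * FROM table_changes('source_table', 2, 5);
```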

User16789201666
by Databricks Employee
  • 1694 Views
  • 1 replies
  • 1 kudos

How to make recursive calls to python/pandas UDF? For example, unzipping arbitrarily nested zip files.

There are files that are zip files and have many zip files within them, many levels. How do you read/parse the content?

Latest Reply
User16789201666
Databricks Employee
  • 1 kudos

'tail-recurse' is a python API that can help.
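Libraries aside, plain recursion with the standard-library `zipfile` module handles arbitrarily nested archives; the function and file names below are illustrative. Once files are unpacked to a path visible to the cluster, they can be read as usual:

```python
import io
import os
import zipfile

def extract_nested(zip_path, out_dir):
    """Extract a zip archive, recursing into any .zip members it contains."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    for root, _dirs, files in os.walk(out_dir):
        for name in files:
            if name.lower().endswith(".zip"):
                inner = os.path.join(root, name)
                # Unpack the inner archive into a sibling directory, then drop it
                extract_nested(inner, os.path.join(root, name[:-4]))
                os.remove(inner)

# Demo: build a zip that contains another zip, then unpack both levels
import tempfile
work = tempfile.mkdtemp()
inner_bytes = io.BytesIO()
with zipfile.ZipFile(inner_bytes, "w") as z:
    z.writestr("leaf.txt", "hello")
outer = os.path.join(work, "outer.zip")
with zipfile.ZipFile(outer, "w") as z:
    z.writestr("inner.zip", inner_bytes.getvalue())

extract_nested(outer, os.path.join(work, "out"))
```

Python's default recursion limit (1000) comfortably covers any realistic nesting depth here, so tail-call tricks are not strictly needed.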

RonanStokes_DB
by Databricks Employee
  • 1308 Views
  • 0 replies
  • 1 kudos

Questions on Bronze / Silver / Gold data set layering

I have a DB-savvy customer who is concerned their silver/gold layer is becoming too expensive.  These layers are heavily denormalized, focused on logical business entities (customers, claims, services, etc), and maintained by MERGEs.  They cannot pre...

ruslan
by Databricks Employee
  • 1198 Views
  • 1 replies
  • 0 kudos

Does Delta Live Tables support MERGE?

Does Delta Live Tables support MERGE?

Latest Reply
ruslan
Databricks Employee
  • 0 kudos

Delta Live Tables currently does not support the MERGE statement; this is work in progress. For now, you could use Structured Streaming + MERGE inside of a forEachBatch().

User16788317466
by Databricks Employee
  • 850 Views
  • 1 replies
  • 0 kudos

When can Horovod be used for an ML problem?

When can Horovod be used for an ML problem?

Latest Reply
User16788317466
Databricks Employee
  • 0 kudos

Only when you have a gradient-descent problem. PyTorch and TensorFlow are the only candidate frameworks to use here. When using Horovod, start with single node, multi-GPU and measure training performance. If this is not sufficient, look at a multi-no...

User16789201666
by Databricks Employee
  • 1406 Views
  • 1 replies
  • 0 kudos

Resolved! With SQL ACL’s, who can DROP a table?

Can the database owner always drop a table?

Latest Reply
User16789201666
Databricks Employee
  • 0 kudos

Table owner or administrator. Before DBR 7.x, the database owner can. As of DBR 7.x, the database owner cannot. This will be changing soon.

Anonymous
by Not applicable
  • 1423 Views
  • 1 replies
  • 1 kudos

What's the best way to develop Apache Spark Jobs from an IDE (such as IntelliJ/Pycharm)?

A number of people like developing locally using an IDE and then deploying. What are the recommended ways to do that with Databricks jobs?

Latest Reply
Anonymous
Not applicable
  • 1 kudos

The Databricks Runtime and Apache Spark use the same base API. One can create Spark jobs that run locally and have them run on Databricks with all available Databricks features. It is required that one uses SparkSession.builder.getOrCreate() to create...
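The pattern referred to looks like this (a sketch, not runnable standalone: it assumes `pyspark` is installed locally, or databricks-connect is configured to run against a cluster):

```python
# Code written this way runs unchanged locally and on Databricks:
# getOrCreate() attaches to whatever session the environment provides.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.range(10)
print(df.count())
```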

User16783854357
by New Contributor III
  • 1104 Views
  • 1 replies
  • 1 kudos

How to run a Delta Live Table pipeline with a different runtime?

I would like to run a DLT pipeline with the 8.2 runtime.

Latest Reply
User16783854357
New Contributor III
  • 1 kudos

You can add the below JSON property to the Delta Live Tables pipeline specification at the parent level: "dbr_version": "8.2"
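Spelled out, the pipeline settings fragment would look like this (the pipeline name is illustrative; only the `dbr_version` key comes from the answer above):

```json
{
  "name": "my_pipeline",
  "dbr_version": "8.2"
}
```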

User16776430979
by New Contributor III
  • 2721 Views
  • 0 replies
  • 0 kudos

How to optimize and convert a Spark DataFrame to Arrow?

Example use case: When connecting a sample Plotly Dash application to a large dataset, in order to test the performance, I need the file format to be in either hdf5 or arrow. According to this doc: Optimize conversion between PySpark and pandas DataF...
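For reference, the Arrow-optimized conversion path that doc describes is toggled by a Spark configuration flag (a sketch; it assumes an existing SparkSession `spark` and DataFrame `df`):

```python
# Use Arrow for the columnar transfer when converting Spark -> pandas
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

pdf = df.toPandas()  # pandas DataFrame produced via an Arrow transfer
```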

Anonymous
by Not applicable
  • 1355 Views
  • 1 replies
  • 0 kudos

Resolved! Configuring airflow

Should we create a Databricks user for airflow and generate a personal access token for it? We also have gsuite SSO enabled, does that mean I need to create a gsuite account for the user as well?

Latest Reply
User16783855117
Contributor II
  • 0 kudos

I would recommend having the 'user' the Databricks Jobs are triggered by as a dedicated user. This is what I would consider a 'Service Account' and I'll drop a definition for that type of user below. Seeing that you have SSO enabled, I might create th...


Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.
