Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Shiva3
by New Contributor III
  • 1594 Views
  • 2 replies
  • 1 kudos

Resolved! repartition method issue in Unity Catalog

We are in the process of upgrading our notebooks to Unity Catalog. Previously, I was able to write data to an external Delta table using df.repartition(8).write.save('path'), which correctly created multiple files. However, during the upgrade, in te...

Latest Reply
agallard
Contributor
  • 1 kudos

Hi @Shiva3, maybe you can try this option: Delta Lake in Unity Catalog may have optimizedWrites enabled by default, which can reduce the number of files by automatically coalescing partitions during writes. # Disable auto-compaction and optimized wr...

1 More Replies
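The reply's suggestion can be sketched as follows; the table name is a placeholder, and the property names assume Delta Lake's auto-optimize settings, so check them against your runtime before relying on this:

```sql
-- Sketch (assumed table name): turn off the write optimizations that
-- coalesce partitions, so df.repartition(8).write.save(...) keeps 8 files.
ALTER TABLE my_catalog.my_schema.my_table SET TBLPROPERTIES (
  'delta.autoOptimize.optimizeWrite' = 'false',
  'delta.autoOptimize.autoCompact'   = 'false'
);
```

Note that optimized writes usually help small-file problems; disabling them only makes sense when an exact output file count matters.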
BS_THE_ANALYST
by Databricks Partner
  • 1392 Views
  • 2 replies
  • 5 kudos

Resolved! Databricks Docs removed/hidden File Metadata documentation?

Hey everyone, hopefully this is a quick one to resolve (and it's probably me being behind-the-times or slightly stupid). I've been looking at getting metadata into my SQL query (when I'm ingesting files). This article is fantastic for solving this v...

Latest Reply
WiliamRosa
Databricks Partner
  • 5 kudos

Hi Bro! Yes, this page doesn’t show up in search because it’s marked Unlisted, so it’s only available to people with the direct link (or via a few internal links). You can confirm this by viewing the page source and searching for “noindex”, as shown ...

1 More Replies
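The unlisted page in question documents the hidden `_metadata` column; a minimal sketch of using it during ingestion (catalog, schema, and path are placeholders):

```sql
-- _metadata is a hidden column exposed by file-based readers; select it
-- explicitly to capture per-file lineage while ingesting.
SELECT
  *,
  _metadata.file_path              AS source_file,
  _metadata.file_modification_time AS source_file_mtime
FROM read_files('/Volumes/my_catalog/my_schema/landing/*.csv');
```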
divyab7
by New Contributor III
  • 1786 Views
  • 5 replies
  • 2 kudos

Resolved! Access task-level parameters along with parameters passed by an Airflow job

I have an Airflow DAG which calls a Databricks job that has a task-level parameter defined as job_run_id (job.run_id) with the type python_script. When I try to access it using sys.argv and spark_python_task, it only prints the json that has passed...

Latest Reply
Isi
Honored Contributor III
  • 2 kudos

Hey @divyab7, sorry, now I understand better what you actually need. I got confused at first and thought you only wanted to access the parameters you pass through Airflow. I think the dynamic identifiers that Databricks generates at runtime (like run I...

4 More Replies
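A spark_python_task receives its task parameters as plain command-line arguments, so the usual first debugging step is to inspect and parse `sys.argv`. A minimal sketch; the parameter names and the `--key value` convention are assumptions about how the job is configured:

```python
import sys

def parse_named_args(argv):
    """Parse ['--key', 'value', ...] style arguments into a dict.

    Hypothetical helper: the real parameter names (e.g. job_run_id)
    are whatever the job task defines.
    """
    params = {}
    it = iter(argv[1:])  # skip the script path
    for token in it:
        if token.startswith("--"):
            params[token[2:]] = next(it, None)
    return params

# Simulate what the script might receive when the task parameters
# include a dynamic value reference resolved at run time:
fake_argv = ["script.py", "--job_run_id", "123456", "--env", "dev"]
print(parse_named_args(fake_argv))  # {'job_run_id': '123456', 'env': 'dev'}
```

In the real task, `parse_named_args(sys.argv)` would show whether the dynamic value was actually substituted before the script was invoked.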
hiryucodes
by Databricks Employee
  • 3138 Views
  • 6 replies
  • 4 kudos

ModuleNotFound when running DLT pipeline

My new DLT pipeline gives me a ModuleNotFound error when I try to request data from an API. For some more context, I develop in my local IDE and then deploy to Databricks using asset bundles. The pipeline runs fine if I try to write a static datafram...

Latest Reply
AFH
New Contributor II
  • 4 kudos

Same problem here!

5 More Replies
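A common cause with asset bundles is that the folder holding locally developed modules is not on `sys.path` inside the pipeline process. A hedged sketch; the `src` folder name and the `my_api_client` module are assumptions about the bundle layout, not the confirmed fix for this thread:

```python
import os
import sys

# Assumed layout: the bundle deploys local modules under ./src next to
# the pipeline source file. Putting that folder at the front of sys.path
# lets `import my_api_client` (hypothetical module) resolve at runtime.
module_root = os.path.join(os.getcwd(), "src")
if module_root not in sys.path:
    sys.path.insert(0, module_root)

print(module_root in sys.path)  # True
```

Alternatively, declaring the dependency in the pipeline's libraries/environment section of the bundle avoids path manipulation entirely.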
Firehose74
by New Contributor III
  • 2817 Views
  • 1 reply
  • 0 kudos

Duplicates detected in transformed data - Help with troubleshooting

Hello, can anyone help with an error I am getting when running ADF? An ingestion pipeline fails, and when I click on the link I am taken to a Databricks error message: "7 duplicates detected in transformed data". However, when I run the transformation ce...

Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Hi @Firehose74, this may need a deeper investigation and require workspace access to troubleshoot and review logs. Can you please raise a ticket with us?

Sadam97
by New Contributor III
  • 1016 Views
  • 1 reply
  • 0 kudos

Cancelling a running job kills the parent process and does not wait for streams to stop

Hi, we have created Databricks jobs and each has multiple tasks. Each task is a 24/7 running stream with checkpointing enabled. We want it to be stateful when we cancel and re-run the job, but it seems that when we cancel the job run it kills the parent proces...

  • 1016 Views
  • 1 replies
  • 0 kudos
Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Hi @Sadam97, this seems to be expected behaviour. If you are running the jobs in a job cluster: in job clusters, the Databricks job scheduler treats all streaming queries within a task as belonging to the same job execution context. If any query fai...

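Since a job cancel is a hard kill, stateful shutdown has to be cooperative: the stream checks a stop signal between batches and exits cleanly after finishing in-flight work. A language-agnostic sketch of that pattern using plain Python threads (the batch loop stands in for a streaming query):

```python
import threading
import time

stop_requested = threading.Event()
processed = []

def streaming_worker():
    # Stand-in for a streaming micro-batch loop: finish the current
    # batch, then exit cleanly once a stop has been requested.
    while not stop_requested.is_set():
        processed.append("batch")  # process one micro-batch
        time.sleep(0.01)

t = threading.Thread(target=streaming_worker)
t.start()
time.sleep(0.05)
stop_requested.set()  # graceful stop request, not a hard kill
t.join()
print(f"worker exited cleanly after {len(processed)} batches")
```

In Spark terms the analogue is calling each query's stop method and awaiting termination before the driver exits, which a job cancel does not do for you.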
mkEngineer
by New Contributor III
  • 1204 Views
  • 6 replies
  • 2 kudos

How to preserve job run history when deploying with DABs

Hi, I’m having an issue when deploying jobs with DABs. Each time I deploy changes, the existing job gets overwritten: the job name stays the same, but a new job ID is created. This causes the history of past runs to be lost. Ideally, I’d like to update...

Latest Reply
Coffee77
Honored Contributor II
  • 2 kudos

Even when using different keys but the same names, the original jobs should indeed remain unless you destroy them.

5 More Replies
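The behaviour described usually comes down to the job's resource key in the bundle: the key, not the display name, identifies the job across deployments, so changing the key creates a fresh job ID and orphans the run history. A sketch with placeholder names:

```yaml
# databricks.yml resource: keep the key ("my_etl_job") stable across
# deployments; the display name can change freely.
resources:
  jobs:
    my_etl_job:           # <- stable key: preserves job ID and run history
      name: "My ETL Job"  # <- safe to rename
```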
echozhuoocl
by New Contributor II
  • 632 Views
  • 2 replies
  • 0 kudos

Delta Sharing presigned URL was removed, what should I do?

Caused by: java.lang.IllegalStateException: table s3a://dmsa/tmp/the_credential_of_deltasharing/on_prem_deltasharing.share#on-prem-delta-sharing.dmsa_in_nrt.shp_rating_snapshot was removed   at org.apache.spark.delta.sharing.CachedTableManager.getPre...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @echozhuoocl, did you VACUUM your table? If you're not sure, run: DESCRIBE HISTORY catalog.schema.table

1 More Replies
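To follow the reply's suggestion, the table history shows whether a VACUUM removed data files that a recipient's cached presigned URLs still pointed at (the table name echoes the error message; the retention property is a standard Delta setting):

```sql
-- Look for recent VACUUM operations in the shared table's history.
DESCRIBE HISTORY my_catalog.my_schema.shp_rating_snapshot;

-- If VACUUM shows up, consider a longer retention so files outlive
-- recipients' cached URLs:
ALTER TABLE my_catalog.my_schema.shp_rating_snapshot
  SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'interval 14 days');
```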
Puru20
by New Contributor III
  • 1206 Views
  • 3 replies
  • 6 kudos

Resolved! Pass the job even if specific task fails

Hi, I have multiple data pipelines and each has a data quality check as the final task, which runs on dbt. There are 1,500 test cases altogether running every day, which are captured on a dashboard. Is there a way to pass the job even if this particular tal...

Latest Reply
Puru20
New Contributor III
  • 6 kudos

Hi @szymon_dybczak, the solution works perfectly when I set the leaf job to pass irrespective of the dbt test task status. Thanks so much!

2 More Replies
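Judging by the accepted reply, the fix was a final "leaf" task whose run condition is All done, so the job's overall status follows that task rather than the dbt tests. A sketch of the job tasks with placeholder names:

```yaml
# Jobs task sketch: the leaf task runs regardless of the dbt task's outcome.
tasks:
  - task_key: dbt_tests
    dbt_task:
      commands: ["dbt test"]
  - task_key: finalize
    depends_on:
      - task_key: dbt_tests
    run_if: ALL_DONE   # run even if dbt_tests failed
```

The dbt task still reports its own failure on the dashboard; only the job-level result is decoupled from it.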
ismaelhenzel
by Contributor III
  • 852 Views
  • 1 reply
  • 0 kudos

Schema Evolution/Type Widening in Materialized Views

My team is migrating pipelines from Spark to Delta Live Tables (DLT), but we've found that some important features, like schema evolution for tables with enforced schemas, seem to be missing. In DLT, we can define schemas, set primary and foreign key...

Latest Reply
nayan_wylde
Esteemed Contributor II
  • 0 kudos

DLT supports schema evolution, but changing column data types (like from DECIMAL(10,5) to DECIMAL(11,5)) is not automatically handled. Here's how you can manage it: Option 1: Full Refresh with Schema Update. If you're okay with refreshing the materializ...

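For the DECIMAL(10,5) to DECIMAL(11,5) case specifically, Delta's type widening feature may avoid a full refresh on recent runtimes; a sketch with placeholder table and column names (whether this applies to a given DLT-managed materialized view depends on the runtime, so treat it as something to verify rather than a guaranteed path):

```sql
-- Opt the table into type widening, then widen the column in place.
ALTER TABLE my_catalog.my_schema.my_table
  SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true');

ALTER TABLE my_catalog.my_schema.my_table
  ALTER COLUMN amount TYPE DECIMAL(11,5);
```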
zc
by New Contributor III
  • 7423 Views
  • 9 replies
  • 7 kudos

Resolved! Use Array in WHERE IN clause

This is what I'm trying to do using SQL: create table check1 as select * from dataA where IDs in ('12483258','12483871','12483883'); The list of IDs is much longer and may change, so I want to use a variable for that. This is what I have tried decla...

Latest Reply
BS_THE_ANALYST
Databricks Partner
  • 7 kudos

Nice solutions! @ManojkMohan @WiliamRosa I love the use of the temp view for the intermediate result. The array_contains is also a really nice touch. @ManojkMohan when you write "SET VARIABLE ids = ARRAY('12483258','12483871','12483883');" ... can th...

8 More Replies
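The pattern discussed in the thread, sketched with Databricks SQL session variables (available on recent runtimes; table and column names come from the question):

```sql
DECLARE OR REPLACE VARIABLE ids ARRAY<STRING>;
SET VARIABLE ids = ARRAY('12483258', '12483871', '12483883');

-- array_contains replaces the hard-coded IN (...) list.
CREATE TABLE check1 AS
SELECT * FROM dataA
WHERE array_contains(ids, IDs);
```

Only the SET VARIABLE statement needs to change when the ID list changes.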
Rainier_dw
by Databricks Partner
  • 2510 Views
  • 6 replies
  • 6 kudos

Resolved! Rollbacks/deletes on streaming table

Hi all — I’m running a Medallion streaming pipeline on Databricks using DLT (bronze → staging silver view → silver table). I ran into an issue and would appreciate any advice or best practices.What I’m doingIngesting streaming data into a streaming b...

Latest Reply
dalcuovidiu
New Contributor III
  • 6 kudos

I'm not entirely sure if I’m missing something here, but as far as I know there’s a golden rule in DWH applications: you never hard delete records, you use soft deletes instead. So I’m a bit puzzled why a hard delete is being used in this case.

5 More Replies
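The soft-delete convention the last reply advocates, sketched for a Delta table (table, column, and key names are placeholders, not the thread's actual schema):

```sql
-- Soft delete: flag rows instead of removing them, preserving history
-- and keeping downstream streaming reads append-friendly.
UPDATE my_catalog.my_schema.silver_table
SET is_deleted = true,
    deleted_at = current_timestamp()
WHERE key IN (SELECT key FROM my_catalog.my_schema.rollback_keys);

-- Consumers then filter:  ... WHERE NOT is_deleted
```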
ChingizK
by New Contributor III
  • 3431 Views
  • 5 replies
  • 2 kudos

Exclude a job from bundle deployment in PROD

My question is regarding Databricks Asset Bundles. I have defined a databricks.yml file the following way:

bundle:
  name: my_bundle_name
include:
  - resources/jobs/*.yml
targets:
  dev:
    mode: development
    default: true
    workspace: ...

Latest Reply
Coffee77
Honored Contributor II
  • 2 kudos

Me too, no clean solution yet. As a workaround, I first implemented an "extra" control in specific jobs that should never run in PROD, blocking execution based on an environment variable in all clusters (I don't really like it much, but it was effective). As...

4 More Replies
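In the absence of a first-class "exclude" option, one workaround alongside the runtime guard described in the reply is a per-target override that deploys the job but keeps it from ever running in prod. A sketch with placeholder names, assuming the job has a schedule that can be paused:

```yaml
targets:
  prod:
    resources:
      jobs:
        my_dev_only_job:
          schedule:
            quartz_cron_expression: "0 0 0 * * ?"
            timezone_id: UTC
            pause_status: PAUSED   # deployed, but never triggered in prod
```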
Datalight
by Contributor
  • 2089 Views
  • 10 replies
  • 3 kudos

Resolved! High-Level Design for Transferring Data from One Databricks Account to Another

Hi, could someone please help me with just the points which should be part of the High-Level Design and Low-Level Design when transferring data from one Databricks account to another using Unity Catalog? A full data transfer the first time, and then ...

Latest Reply
Coffee77
Honored Contributor II
  • 3 kudos

Based on my previous reply, you can use DEEP CLONE to clone data incrementally between workspaces by including it in a scheduled job, but this will indeed not work in real time.

9 More Replies
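The incremental DEEP CLONE approach from the reply, sketched with placeholder names; re-running the clone copies only files that changed since the last run:

```sql
-- First run: create the target copy.
CREATE TABLE IF NOT EXISTS target_catalog.target_schema.tbl
  DEEP CLONE source_catalog.source_schema.tbl;

-- Subsequent scheduled runs: refresh incrementally.
CREATE OR REPLACE TABLE target_catalog.target_schema.tbl
  DEEP CLONE source_catalog.source_schema.tbl;
```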
dalcuovidiu
by New Contributor III
  • 3526 Views
  • 11 replies
  • 10 kudos

DLT - SCD 2 - detect deletes

Hello, I have a question related to APPLY AS DELETE WHEN... If the source table does not have a column that specifies whether a record was deleted, I am currently using a workaround by ingesting synthetic data with a soft_deletion flag. In the future, ...

Latest Reply
dalcuovidiu
New Contributor III
  • 10 kudos

OK. In my case I qualify for: incremental without a delete flag (classic case). Generate synthetic tombstones via an anti-join between the current set of keys and the target’s active keys. I don't want to use MERGE, that's why my question was for C...

10 More Replies
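The anti-join tombstone idea from the last reply, reduced to a minimal pure-Python sketch (the key values are made up; in practice both sides would be key columns read from the source snapshot and the target's active rows):

```python
# Source snapshot keys vs. the target's currently-active keys.
source_keys = {"k1", "k2", "k4"}
target_active_keys = {"k1", "k2", "k3"}

# Anti-join: keys active in the target but absent from the source
# snapshot are the deletions; emit synthetic tombstone rows for them,
# which the pipeline can then APPLY AS DELETE WHEN on.
tombstones = [
    {"key": k, "is_deleted": True}
    for k in sorted(target_active_keys - source_keys)
]
print(tombstones)  # [{'key': 'k3', 'is_deleted': True}]
```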