Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

grazie
by Contributor
  • 4127 Views
  • 4 replies
  • 3 kudos

Do you need to be workspace admin to create jobs?

We're using a setup where we use GitLab CI to deploy workflows with a service principal, using the Jobs API (2.1): https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsCreate. When we wanted to reduce permissions of the CI to minimu...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Geir Iversen, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers ...

3 More Replies
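For readers with the same question: in my experience a service principal does not need workspace admin to create jobs, but it does need workspace access plus either cluster-creation entitlement or a usable cluster policy. As a reference point, a minimal sketch of the Jobs API 2.1 create call such a CI pipeline would issue — the workspace URL, token, notebook path, and cluster settings below are placeholders, not values from the thread:

```python
import json

# Hypothetical workspace URL and token; in CI these would come from secrets.
HOST = "https://example-workspace.cloud.databricks.com"
TOKEN = "dapi-xxxx"  # the service principal's token


def build_jobs_create_request(job_name, notebook_path):
    """Build the POST /api/2.1/jobs/create request a CI pipeline would send.

    Creating jobs generally requires the service principal to have workspace
    access and either the cluster-creation entitlement or permission on a
    cluster policy -- not full workspace admin rights.
    """
    payload = {
        "name": job_name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    "spark_version": "14.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 1,
                },
            }
        ],
    }
    headers = {"Authorization": f"Bearer {TOKEN}"}
    url = f"{HOST}/api/2.1/jobs/create"
    return url, headers, json.dumps(payload)
```

Sending the request (e.g. with `requests.post(url, headers=headers, data=body)`) is left out so the permission question, not the HTTP plumbing, stays in focus.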
susmitsircar
by New Contributor III
  • 1573 Views
  • 3 replies
  • 0 kudos

Spark streaming failing intermittently with FileAlreadyExistsException RocksDB checkpointing

We are encountering an issue in our Spark streaming pipeline when attempting to write checkpoint data to S3. The error we are seeing is as follows: 25/08/12 13:35:40 ERROR RocksDBFileManager: Error zipping to s3://xxx-datalake-binary/event-types/chec...

Latest Reply
lingareddy_Alva
Esteemed Contributor
  • 0 kudos

Hi @susmitsircar, best practices / fixes: 1. Clean up the checkpoint directory before restart. If you know the stream can safely start from scratch or reprocess data, delete the S3 checkpoint path before restarting. This ensures no stale 0.zip files remain....

2 More Replies
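If you do go the reset route from the reply, the layout under a checkpoint location is predictable. A small sketch — the helper name and the idea of enumerating the subdirectories are mine; on Databricks the actual deletion would be something like `dbutils.fs.rm(path, recurse=True)`, and remember that clearing the checkpoint makes the stream reprocess from scratch:

```python
def checkpoint_paths_to_clear(checkpoint_root: str) -> list[str]:
    """Return the subdirectories Structured Streaming keeps under a
    checkpoint location. Deleting all of them resets the stream entirely,
    so it is only safe if full reprocessing is acceptable.
    """
    root = checkpoint_root.rstrip("/")
    # offsets/commits track progress, state holds the RocksDB files
    # (including zipped versions like 0.zip), metadata holds the query id.
    return [f"{root}/{sub}" for sub in ("offsets", "commits", "state", "metadata")]
```

In a notebook you would then loop over these and call `dbutils.fs.rm(p, recurse=True)` for each before restarting the query.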
susmitsircar
by New Contributor III
  • 2650 Views
  • 7 replies
  • 3 kudos

Resolved! Spark streaming failing intermittently with IllegalStateException: Found no SST files

I'm encountering the following error while trying to upload a RocksDB checkpoint in Databricks: java.lang.IllegalStateException: Found no SST files during uploading RocksDB checkpoint version 498 with 2332 key(s). at com.databricks.sql.streaming.s...

Latest Reply
susmitsircar
New Contributor III
  • 3 kudos

@mani_22 Do you see any risk in disabling this flag in our pipeline? As far as I understand, we will be bypassing some heuristic checks while uploading the state files: spark.databricks.rocksDB.verifyBeforeUpload false

6 More Replies
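For readers landing here: the flag discussed in this thread is set as a Spark configuration on the cluster (or session). Note the trade-off the question raises — it bypasses the pre-upload verification of the RocksDB state files:

```
spark.databricks.rocksDB.verifyBeforeUpload false
```

Setting it in a notebook would look like `spark.conf.set("spark.databricks.rocksDB.verifyBeforeUpload", "false")`; whether that is safe for your pipeline is exactly what the accepted answer weighs.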
absan
by Contributor
  • 983 Views
  • 1 replies
  • 1 kudos

Resolved! Lakeflow Designer, DAB & Git

Hi, I'm trying to understand the process and configuration needed to get the new Lakeflow Designer, DAB and Git Folder to play together. What I've done: created an empty GitHub repository and created a Git Folder for it in Databricks; in the Git Folder I cr...

Latest Reply
SP_6721
Honored Contributor II
  • 1 kudos

Hi @absan, it’s recommended to create and deploy your DAB templates from within the Git folder, as this ensures the pipeline’s root is set correctly to that folder.

noorbasha534
by Valued Contributor II
  • 491 Views
  • 1 replies
  • 0 kudos

Column access patterns

Hello all, floating this question again separately (a few weeks ago I clubbed this with predictive optimization). Has anyone cracked getting the list of columns being used in joins & filters, especially in the context that access to end users is g...

Latest Reply
SP_6721
Honored Contributor II
  • 0 kudos

Hi @noorbasha534, I’m not aware of a direct way to do this, but one approach is to parse each view’s SQL definition to identify the columns used in join and filter conditions, then use lineage tools to trace them through nested views back to the unde...

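The parse-the-view-definition idea can be prototyped crudely without extra libraries. A naive sketch of my own — regex-based, so it will miss aliases, subqueries, and quoted identifiers; a real SQL parser (e.g. sqlglot) would be far more robust:

```python
import re


def columns_in_joins_and_filters(view_sql: str) -> set[str]:
    """Very rough extraction of column names appearing in ON and WHERE
    clauses of a view definition. Illustrative only -- not a SQL parser.
    """
    cols = set()
    # Grab the text of each ON ... / WHERE ... clause, stopping at the
    # next keyword (or end of statement).
    clause_pat = r"\b(?:ON|WHERE)\b(.*?)(?=\bJOIN\b|\bWHERE\b|\bGROUP\b|\bORDER\b|$)"
    for clause in re.findall(clause_pat, view_sql, flags=re.IGNORECASE | re.DOTALL):
        # Collect the column part of qualified references like t.col.
        cols.update(re.findall(r"\b\w+\.(\w+)\b", clause))
    return cols
```

Feeding each view's definition (e.g. from `information_schema.views`) through something like this, then joining against lineage, is one way to approximate the join/filter column list the question asks about.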
yit
by Databricks Partner
  • 1366 Views
  • 2 replies
  • 0 kudos

Autoloader: Unexpected UnknownFieldException after streaming query termination

I am using Autoloader to ingest source data into Bronze layer Delta tables. The source files are JSON, and I rely on schema inference along with schema evolution (using mode: addNewColumns). To handle errors triggered by schema updates in the stream,...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @yit, this is expected behaviour of Auto Loader with schema evolution enabled. The default mode is addNewColumns, which causes the stream to fail. As the documentation says: "Auto Loader detects the addition of new columns as it processes your data. When Auto Load...

1 More Replies
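Since a failure on new columns is expected with addNewColumns, a common pattern is to wrap the stream start in a bounded retry loop so the job survives the evolution-triggered restart. A generic sketch — the exception class and `start_stream` callable below are stand-ins for the real Databricks UnknownFieldException and your actual readStream/writeStream code:

```python
class UnknownFieldException(Exception):
    """Stand-in for the real Auto Loader schema-evolution exception."""


def run_with_schema_evolution_retries(start_stream, max_restarts=3):
    """Start the stream; if it fails because Auto Loader recorded new
    columns in its schema location, start it again -- on restart the
    stream picks up the evolved schema and continues."""
    for attempt in range(1, max_restarts + 1):
        try:
            return start_stream()
        except UnknownFieldException:
            if attempt == max_restarts:
                raise


# Self-contained demonstration: fail once with the schema exception,
# then succeed on the restart.
calls = {"n": 0}

def fake_stream():
    calls["n"] += 1
    if calls["n"] == 1:
        raise UnknownFieldException()
    return "running"
```

The bound matters: an unbounded loop would mask genuinely broken data, so re-raise after a few attempts.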
ChristianRRL
by Honored Contributor
  • 599 Views
  • 1 replies
  • 1 kudos

Resolved! Thoughts on Auto Loader schema inference into raw table (+ data flattening)

I am curious to get the community's thoughts on this. Is it generally preferable to load raw data based on its inferred columns or not? And is it preferred to keep the raw data in its original structure or to flatten it into a more tabular structure...

Latest Reply
SP_6721
Honored Contributor II
  • 1 kudos

Hi @ChristianRRL, when loading raw data into bronze tables with Auto Loader, it’s usually best to keep the original structure rather than flattening it right away. You can use schema inference for convenience, but to avoid mistakes, add schema hints ...

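The schema-hints suggestion looks roughly like this as Auto Loader options — the schema location path and the hinted columns below are made up for illustration, not from the thread:

```python
# Auto Loader options for a bronze ingest: infer most of the schema, but
# pin the types of columns that inference tends to get wrong (decimals,
# timestamps), while keeping the original nested structure.
autoloader_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "/tmp/schemas/my_source",  # hypothetical path
    # Hints override inference only for the listed columns:
    "cloudFiles.schemaHints": "amount DECIMAL(18,2), event_ts TIMESTAMP",
}

# On Databricks this would be applied as:
# spark.readStream.format("cloudFiles").options(**autoloader_options).load(src)
```

Flattening can then happen in the silver layer, leaving bronze as a faithful copy of the source.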
MarkV
by New Contributor III
  • 3257 Views
  • 8 replies
  • 0 kudos

DLT, Automatic Schema Evolution and Type Widening

I'm attempting to run a DLT pipeline that uses automatic schema evolution against tables that have type widening enabled. I have code in this notebook that is a list of tables to create/update along with the schema for those tables. This list and spar...

Latest Reply
abhic21
Databricks Partner
  • 0 kudos

Is there any solution for type widening in a DLT pipeline? writeStream is not possible in DLT, right? @Sidhant07 @MarkV

7 More Replies
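For anyone hitting this: type widening is gated by a Delta table property that must be set on the target tables before widening writes can succeed. A sketch in SQL — the table name is hypothetical:

```sql
-- Enable type widening on an existing Delta table (the table name is a placeholder)
ALTER TABLE my_catalog.my_schema.my_table
SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true');
```

Whether the DLT pipeline then widens types automatically still depends on its schema-evolution settings, which is what the thread is debating.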
lukasz_wybieral
by Databricks Partner
  • 1273 Views
  • 2 replies
  • 0 kudos

Specifying a serverless cluster for the dev environment in databricks.yml

Hey, I'm trying to find a way to specify a serverless cluster for the dev environment and job clusters for the test and prod environments in databricks.yml. The problem is that it seems impossible - I’ve tried many approaches, but the only outcomes I...

Latest Reply
Nivethan_Venkat
Databricks MVP
  • 0 kudos

Hi @lukasz_wybieral, it is not necessary to specify the cluster config if you would like to use serverless. By default, Databricks picks a serverless cluster if you don't specify the cluster configuration. Attaching a databricks.yml below for your r...

1 More Replies
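To make the answer concrete: in a Databricks Asset Bundle, a task with no cluster specification runs serverless, so the per-environment choice can be expressed with target overrides. A hedged sketch — resource names, paths, and cluster settings are made up, and the exact layout should be checked against the bundle documentation:

```yaml
# databricks.yml (fragment)
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/main
          # no cluster specified here -> serverless by default

targets:
  dev:
    default: true          # dev inherits the serverless default above
  prod:
    resources:
      jobs:
        my_job:
          tasks:
            - task_key: main
              notebook_task:
                notebook_path: ./src/main
              new_cluster:   # prod overrides with a classic job cluster
                spark_version: 14.3.x-scala2.12
                node_type_id: i3.xlarge
                num_workers: 2
```

The key design point is that the override lives in the target, not in conditional logic inside the job definition.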
minhhung0507
by Valued Contributor
  • 876 Views
  • 2 replies
  • 0 kudos

Slow batch processing in Databricks job due to high deletion vector and unified cache overhead

We have a Databricks pipeline where the layer reads from several Silver tables to detect PK/FK changes and trigger updates to Gold tables. Normally, this near real-time job has ~3 minutes latency per micro-batch. Recently, we noticed that each batch i...

Latest Reply
noorbasha534
Valued Contributor II
  • 0 kudos

@minhhung0507 As per the documentation: 'The actual physical removal of deleted rows (the "hard delete") is deferred until the table is optimized with OPTIMIZE or when a VACUUM operation is run, cleaning up old files.' So, based on this, try to optimize t...

1 More Replies
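The maintenance the reply refers to, as a sketch — the table name is a placeholder, and note that VACUUM's default retention is 7 days and should not be lowered casually on tables read by streams:

```sql
-- Rewrite files so rows soft-deleted via deletion vectors are physically removed
OPTIMIZE my_catalog.my_schema.silver_table;

-- Then clean up the old, no-longer-referenced files
VACUUM my_catalog.my_schema.silver_table;
```

Scheduling these regularly on the Silver tables keeps the deletion-vector overhead from accumulating between micro-batches.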
MaximeGendre
by New Contributor III
  • 926 Views
  • 3 replies
  • 3 kudos

Resolved! Structured stream: difference Unity Catalog vs Legacy

Hello :), I have noticed a regression in one of my jobs and I don't understand why.
%python
print("Hello 1")

def toto(df, _):
    print("Hello 2")
    spark.readStream\
        .format("delta")\
        .load("/databricks-datasets/nyctaxi/tables/nyctaxi_yellow...

Latest Reply
MaximeGendre
New Contributor III
  • 3 kudos

Hi @szymon_dybczak, thanks a lot for the quick and accurate answer. I forgot that there was this limitation.

2 More Replies
RIDBX
by Contributor
  • 880 Views
  • 5 replies
  • 0 kudos

Lake Bridge ETL rehouse into AWS Databricks: options?

Hi Community experts, thanks for the replies to my threads. We reviewed the Lake Bridge thread opened here. The claimed functionality is that it can convert on-prem ET...

Latest Reply
RIDBX
Contributor
  • 0 kudos

Thanks for weighing in. The same question on another data engineering discussion board is not giving a comforting feeling about this; they project nightmare scenarios.

4 More Replies
dimsh
by Contributor
  • 24588 Views
  • 14 replies
  • 10 kudos

How to overcome missing query parameters in Databricks SQL?

Hi there! I'm trying to build my first dashboard based on Databricks SQL. As far as I can see, if you define a query parameter you can't skip it later. I'm looking for any option to make my parameter optional. For instance, I have a ta...

Latest Reply
theslowturtle
New Contributor II
  • 10 kudos

Hello guys, I'm not sure if you could solve this issue, but here is how I've handled it:
SELECT *
FROM my_table
WHERE (CASE WHEN LEN(:my_parameter) > 0 THEN my_column = :my_parameter ELSE my_column = my_column END)
I hope this can help!

13 More Replies
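An equivalent way to write the same optional-parameter filter, which some find easier to read — it uses the same :my_parameter placeholder; behaviour when the parameter is NULL rather than an empty string should be tested against your dashboard:

```sql
SELECT *
FROM my_table
WHERE :my_parameter = '' OR my_column = :my_parameter
```

When the parameter is left blank the first disjunct is true for every row, so the filter effectively disappears, which is what "optional" means here.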
SugathithyanM
by New Contributor
  • 734 Views
  • 1 replies
  • 2 kudos

Resolved! Reg. virtual learning festival coupon

Hi team, I attended the DAIS 2025 Virtual Learning Festival (11 June - 2 July) and received a coupon. 1. Is the coupon applicable for 'Databricks Certified Associate Developer for Apache Spark' as well? 2. I'm preparing to take the exam for the Spark certificat...

Latest Reply
Jim_Anderson
Databricks Employee
  • 2 kudos

Hey @SugathithyanM thanks for the additional mention here, please see our conversation for reference. For any others also interested: 1. Yes, the certification voucher code is applicable on any Databricks Certification exam, including the Apache Spar...

Manjula_Ganesap
by Contributor
  • 1098 Views
  • 1 replies
  • 0 kudos

Autoloader on ADLS blobs with archival enabled

Hi All, I'm trying to change our ingestion process to use Auto Loader to identify new files landing in a directory on ADLS. The ADLS directory has an access-tier policy enabled to archive files older than a certain time period. When I'm trying to set up Autoloa...

Latest Reply
Steffen
New Contributor III
  • 0 kudos

Facing the same issue when trying to use Auto Loader with useNotifications. Did you ever find a workaround?
