Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Hritik_Moon
by New Contributor II
  • 79 Views
  • 2 replies
  • 2 kudos

Reading snappy.parquet

I stored a dataframe as Delta in the catalog. It created multiple folders with snappy.parquet files. Is there a way to read these snappy.parquet files? They read fine with pandas, but with Spark I get the error "incompatible format".

Latest Reply
Prajapathy_NKR
  • 2 kudos

@Hritik_Moon Try to read the table as Delta. The layout is path/delta_file_name/ containing the parquet files and a _delta_log/ folder. Since you are using Spark, use spark.read.format("delta").load("path/delta_file_name"). Delta internally stores the data as parquet, and the delta log contain...
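A minimal sketch of that suggestion, assuming the table folder is path/delta_file_name (the directory that contains the _delta_log subfolder):

# Point Spark at the Delta table root, not at the individual snappy.parquet part files.
df = spark.read.format("delta").load("path/delta_file_name")
df.show(5)

# A single part file can still be read as plain parquet, but that bypasses the Delta log:
# raw = spark.read.parquet("path/delta_file_name/part-00000-....snappy.parquet")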

1 More Replies
Hritik_Moon
by New Contributor II
  • 203 Views
  • 6 replies
  • 8 kudos

Stop Cache in free edition

Hello, I am using Databricks Free Edition. Is there a way to turn off IO caching? I am trying to learn optimization and can't see any difference in query run time with caching enabled.

Latest Reply
Prajapathy_NKR
  • 8 kudos

@Hritik_Moon 1. Check whether your data is actually cached; you can see this in the Spark UI > Storage tab. 2. If it is not cached, add an action statement after you cache, e.g. df.count(). Data is cached with the first action statement it encounters. Now check in...
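A minimal sketch of that check, assuming compute where the Spark UI is available (the table name is a placeholder):

df = spark.read.table("samples.nyctaxi.trips")   # placeholder table

df.cache()    # marks the DataFrame for caching; this is lazy
df.count()    # the first action materializes the cache; verify under Spark UI > Storage

# The disk (IO) cache is a separate, cluster-level setting. On compute that allows
# changing Spark confs, it can be toggled with:
# spark.conf.set("spark.databricks.io.cache.enabled", "false")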

5 More Replies
Jonathan_
by New Contributor II
  • 120 Views
  • 3 replies
  • 3 kudos

Slow PySpark operations after long DAG that contains many joins and transformations

We are using PySpark and notice that when we do many transformations/aggregations/joins of the data, at some point the execution time of simple tasks (count, display, union of 2 tables, ...) becomes very slow even though we have small data (ex...

Latest Reply
Prajapathy_NKR
  • 3 kudos

@Jonathan_ 1. I noticed you tried to persist your result; just a reminder that the dataframe is stored only when an action is performed. So if you would like to store the result in memory, add an action like count immediately after using persis...
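A minimal sketch of that advice, assuming df is the result of the long chain of joins and aggregations (the checkpoint directory is a placeholder):

from pyspark import StorageLevel

df = df.persist(StorageLevel.MEMORY_AND_DISK)
df.count()    # persist is lazy; an action is needed before anything is actually stored

# If the logical plan itself has grown very large, cutting the lineage can also help:
# spark.sparkContext.setCheckpointDir("/tmp/checkpoints")   # placeholder path
# df = df.checkpoint()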

2 More Replies
Ajay-Pandey
by Esteemed Contributor III
  • 2827 Views
  • 8 replies
  • 2 kudos

Databricks Job cluster for continuous run

Hi All, I have a situation where I want to run a job with a continuous trigger using a job cluster, but the cluster terminates and is re-created on every run within the continuous trigger. I just wanted to know if we have any option where I can use the same job cluster...

AjayPandey_0-1728973783760.png
Latest Reply
Zaranders
Visitor
  • 2 kudos

This is a great initiative! As a data engineer, I always appreciate learning new optimization strategies. Recently, I stumbled upon Monkey Mart while researching resource-efficient architectures—funny how inspiration comes from unexpected places. Loo...

7 More Replies
xx123
by New Contributor III
  • 1727 Views
  • 1 replies
  • 0 kudos

Comparing Databricks Serverless Warehouse with Snowflake Virtual Warehouse for specific query

Hey, I would like to compare the runtime of one specific query by running it on a Databricks Serverless Warehouse and a Snowflake Virtual Warehouse. I create a table with the exact same structure and the exact same dataset in both warehouses. The dataset is ...

Latest Reply
Krishna_S
Databricks Employee
  • 0 kudos

  You’re running into a Databricks SQL results delivery limit—the UI (and even “Download results”) isn’t meant to stream 1.5M × (id, name, 5,000-double array) back to your browser. That’s why SELECT * “works” on Snowflake’s console but not in the DBS...
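A minimal sketch of one way around it, assuming the goal is to time query execution rather than stream ~1.5M wide rows back to the browser (table names are placeholders):

# Run the heavy query as a CTAS so the timing reflects compute, not result delivery
# to the browser; the duration is then visible in Query History / the query profile.
spark.sql("""
    CREATE OR REPLACE TABLE benchmark_scratch.ctas_result AS   -- placeholder target
    SELECT *
    FROM benchmark_scratch.wide_source_table                   -- placeholder source
""")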

KKo
by Contributor III
  • 22 Views
  • 1 replies
  • 0 kudos

DDL script to upper environment

I have multiple databases created in Unity Catalog in a DEV Databricks workspace; I used the Databricks UI/notebook and ran scripts to do it. Now I want to have those databases in the QA and PROD workspaces as well. What is the best way to run those DDLs in...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @KKo, the simplest way is to have a parametrized notebook to which you can pass the name of your catalog as a parameter. Then you can use that parameter to prepare the appropriate SQL statements responsible for creating catalogs/schemas/tables. Alternati...
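A minimal sketch of that parametrized-notebook approach, assuming a widget named "catalog"; the schema and table names are placeholders:

# Passed from the job / run parameters: "dev", "qa" or "prod" catalog name
dbutils.widgets.text("catalog", "dev")
catalog = dbutils.widgets.get("catalog")

spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.sales")        # placeholder schema
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {catalog}.sales.orders (
        order_id BIGINT,
        order_ts TIMESTAMP
    )
""")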

Bhavana_Y
by New Contributor
  • 23 Views
  • 0 replies
  • 0 kudos

Learning Path for Spark Developer Associate

Hello Everyone, happy to be part of the Virtual Journey!! I enrolled in Associate Spark Developer and completed the learning path in Databricks Academy. Can anyone please confirm whether completing the learning path is enough for obtaining the 50% off voucher for certifi...

Screenshot (15).png
ckough
by New Contributor III
  • 54751 Views
  • 47 replies
  • 25 kudos

Resolved! Cannot sign in at databricks partner-academy portal

Hi there, I used my company email to register an account for customer-academy.databricks.com a while back. Now what I need to do is create an account with partner-academy.databricks.com using my company email too. However, when I register at partner...

Latest Reply
cpelletier360
New Contributor
  • 25 kudos

Also facing the same issue. I will log a ticket.

46 More Replies
elliottatreef
by New Contributor
  • 71 Views
  • 3 replies
  • 1 kudos

Serverless environment not respecting environment spec on run_job_task

When running a job via a `run_job_task`, the job triggered is not using the specified serverless environment. I've configured my job to use serverless `environment_version` "3" with a dependency built into my workspace, but whenever I run the job, it...

Screenshot 2025-10-15 at 11.40.45 AM.png Screenshot 2025-10-15 at 11.43.39 AM.png
Latest Reply
MuthuLakshmi
Databricks Employee
  • 1 kudos

@elliottatreef Can you try to set the Environment version on the source notebook and then trigger the job? On the notebook: Serverless -> Configuration -> Environment version drop-down. Then, in your job, make sure it is assigned to the Serverless com...

2 More Replies
donlxz
by New Contributor III
  • 94 Views
  • 3 replies
  • 2 kudos

deadlock occurs with use statement

When issuing a query from Informatica using a Delta connection, the statement use catalog_name.schema_name is executed first. At that time, the following error appeared in the query history: Query could not be scheduled: (conn=5073499) Deadlock found w...

Latest Reply
donlxz
New Contributor III
  • 2 kudos

Hi @ManojkMohan Thank you for your response. I understand that adjustments are needed on the Informatica side, and I’ll ask them to review the deadlock retry settings. Is there anything that can be changed or configured on the Databricks side to help w...

2 More Replies
Mous92i
by New Contributor
  • 100 Views
  • 2 replies
  • 0 kudos

Liquid Clustering With Merge

Hello, I’m facing severe performance issues with a MERGE INTO on Databricks.
merge_condition = """
    source.data_hierarchy = target.data_hierarchy
    AND source.sensor_id = target.sensor_id
    AND source.timestamp = target.timestamp
"""
The target Delt...

Latest Reply
K_Anudeep
Databricks Employee
  • 0 kudos

Hi @Mous92i, DFP (dynamic file pruning) is what pushes source filters down to the target to skip files. For MERGE/UPDATE/DELETE, DFP only works on Photon-enabled compute. If you’re not on Photon, MERGE will scan everything. Enabling Liquid Clustering doesn’t recluster past ...
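A minimal sketch of the reclustering step, assuming the target table (placeholder name target_table) should be clustered on the merge keys from the post:

# Declare the clustering keys to match the merge condition columns, then recluster
# the data that already exists; MERGE alone will not rewrite old files.
spark.sql("""
    ALTER TABLE target_table
    CLUSTER BY (data_hierarchy, sensor_id, `timestamp`)
""")
spark.sql("OPTIMIZE target_table")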

1 More Replies
georgemichael40
by New Contributor III
  • 109 Views
  • 4 replies
  • 5 kudos

Resolved! Python Wheel in Serverless Job in DAB

Hey, I am trying to run a job with serverless compute that runs Python scripts. I need the paramiko package to get my scripts to work. I managed to get it working by doing:
environments:
- environment_key: default
  # Full documentation of this spec can be...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 5 kudos

Hi @georgemichael40, put your whl file in a volume and then you can reference it in the following way in your DAB file:
dependencies:
  - "/Volumes/workspace/default/my_volume/hellopkg-0.0.1-py3-none-any.whl"
https://docs.databricks.com/aws/en/compute/s...

3 More Replies
dndeng
by Visitor
  • 30 Views
  • 2 replies
  • 0 kudos

Query to calculate cost of task from each job by day

I am trying to find the cost per task in each job every time it was executed (daily), but I am currently getting very large numbers due to duplicates. Can someone help me?
WITH workspace AS (
    SELECT account_id, workspace_id, workspace_name,...

Latest Reply
nayan_wylde
Honored Contributor III
  • 0 kudos

It seems the duplicates are caused by the task_change_time from the job_tasks table. The table definition says task_change_time is the last time the task was modified, but it captures different times, and it is an SCD type 2 table. ...
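A minimal sketch of the deduplication idea, keeping only the latest version of each task before joining it to usage/cost data; treat the exact table and column names (taken from the thread) as assumptions:

latest_tasks = spark.sql("""
    SELECT * FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (
                   PARTITION BY workspace_id, job_id, task_key
                   ORDER BY task_change_time DESC
               ) AS rn
        FROM system.lakeflow.job_tasks AS t
    )
    WHERE rn = 1
""")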

1 More Replies
thib
by New Contributor III
  • 8554 Views
  • 5 replies
  • 3 kudos

Can we use multiple git repos for a job running multiple tasks?

I have a job running multiple tasks: Task 1 runs a machine learning pipeline from git repo 1, and Task 2 runs an ETL pipeline from git repo 1. Task 2 is actually a generic pipeline and should not be checked into repo 1, and will be made available in another re...

image
Latest Reply
tors_r_us
New Contributor II
  • 3 kudos

Had this same problem. The fix was to have two workflows with no triggers, each pointing to the respective git repo. Then set up a third workflow with the appropriate triggers/schedule which calls the first two workflows. A workflow can run other workflows.

4 More Replies
shreya24
by New Contributor II
  • 1783 Views
  • 1 replies
  • 2 kudos

Geometry Type not converted into proper binary format when reading through Federated Catalog

Hi, when reading a geometry column from a SQL Server into Databricks through a foreign/federated catalog, the transformation of the geometry type to binary type is not in a proper format, or I am not able to find a way to decode that binary. For example, for p...

Latest Reply
AbhaySingh
New Contributor
  • 2 kudos

Give this a shot. Create a view in SQL Server that converts geometry to Well-Known Text before federating:
-- Create view in SQL Server
CREATE VIEW dbo.vw_spatial_converted AS
SELECT
  id,
  location_name,
  location.STAsText() AS geom_wkt,
  location.STSrid() AS sri...
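A minimal sketch of consuming that WKT column on the Databricks side, assuming the federated view is reachable under a placeholder name and the shapely package is installed on the cluster:

from shapely import wkt

# Read the federated view and parse one geometry from its WKT representation.
df = spark.read.table("federated_catalog.dbo.vw_spatial_converted")   # placeholder name
row = df.select("geom_wkt").first()
geom = wkt.loads(row["geom_wkt"])
print(geom.geom_type, geom.bounds)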

