Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Ajay-Pandey
by Esteemed Contributor III
  • 2873 Views
  • 8 replies
  • 2 kudos

Databricks Job cluster for continuous run

Hi All, I have a situation where I want to run a job with a continuous trigger using a job cluster, but the cluster terminates and is re-created on every run within the continuous trigger. I just wanted to know if we have any option where I can use the same job cluster...

Latest Reply
Zaranders
New Contributor
  • 2 kudos

This is a great initiative! As a data engineer, I always appreciate learning new optimization strategies. Recently, I stumbled upon Monkey Mart while researching resource-efficient architectures—funny how inspiration comes from unexpected places. Loo...

7 More Replies
xx123
by New Contributor III
  • 1783 Views
  • 1 reply
  • 1 kudos

Comparing Databricks Serverless Warehouse with Snowflake Virtual Warehouse for specific query

Hey, I would like to compare the runtime of one specific query by running it on a Databricks Serverless Warehouse and a Snowflake Virtual Warehouse. I created a table with the exact same structure and the exact same dataset in both warehouses. The dataset is ...

Latest Reply
Krishna_S
Databricks Employee
  • 1 kudos

You’re running into a Databricks SQL results delivery limit: the UI (and even “Download results”) isn’t meant to stream 1.5M × (id, name, 5,000-double array) rows back to your browser. That’s why SELECT * “works” on Snowflake’s console but not in the DBS...
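
If you do need all the rows client-side, here is a minimal sketch of the batched-fetch route, assuming the databricks-sql-connector package; the hostname, HTTP path, token, and table name are placeholders, not values from this thread:

```python
# Minimal sketch, assuming databricks-sql-connector is installed
# (pip install databricks-sql-connector). Connection values and the
# table name below are placeholders.
from databricks import sql

rows_seen = 0
with sql.connect(
    server_hostname="dbc-xxxx.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapi-...",
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM my_catalog.my_schema.big_table")
        while True:
            batch = cur.fetchmany(10_000)  # pull results in chunks, not all at once
            if not batch:
                break
            rows_seen += len(batch)        # replace with your own per-batch handling
print(rows_seen)
```

For anything larger still, writing the result to a table or Volume with CREATE TABLE AS SELECT and reading the files directly avoids the results-delivery path entirely.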

KKo
by Contributor III
  • 95 Views
  • 1 reply
  • 1 kudos

DDL script to upper environment

I have multiple databases created in Unity Catalog in a DEV Databricks workspace; I used the Databricks UI/notebook and ran scripts to do it. Now I want to have those databases in the QA and PROD workspaces as well. What is the best way to run those DDLs in...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @KKo, The simplest way is to have a parametrized notebook to which you pass the name of your catalog as a parameter. Then you can use that parameter to prepare the appropriate SQL statements responsible for creating catalogs/schemas/tables. Alternati...
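
A minimal sketch of that parametrized-notebook approach; the widget name and the schema/table DDL below are hypothetical examples:

```python
# Minimal sketch of a parametrized DDL notebook. The widget name and the
# schema/table names are hypothetical, not from the thread.
dbutils.widgets.text("catalog", "dev")
catalog = dbutils.widgets.get("catalog")  # pass "qa" or "prod" from the job

spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.sales")
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {catalog}.sales.orders (
        order_id BIGINT,
        order_ts TIMESTAMP,
        amount   DECIMAL(18, 2)
    )
""")
```

Run the same notebook in each workspace (or wire the parameter into a job or DAB target) and the identical DDL lands in DEV, QA, and PROD.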

ckough
by New Contributor III
  • 54880 Views
  • 47 replies
  • 25 kudos

Resolved! Cannot sign in at databricks partner-academy portal

Hi there, I used my company email to register an account on customer-academy.databricks.com a while back. Now I need to create an account on partner-academy.databricks.com using my company email too. However, when I register at partner...

Latest Reply
cpelletier360
New Contributor
  • 25 kudos

Also facing the same issue. I will log a ticket.

46 More Replies
elliottatreef
by New Contributor
  • 174 Views
  • 3 replies
  • 1 kudos

Serverless environment not respecting environment spec on run_job_task

When running a job via a `run_job_task`, the triggered job is not using the specified serverless environment. I've configured my job to use serverless `environment_version` "3" with a dependency built into my workspace, but whenever I run the job, it...

Latest Reply
MuthuLakshmi
Databricks Employee
  • 1 kudos

@elliottatreef Can you try setting the Environment version on the source notebook and then triggering the job? On the notebook: Serverless -> Configuration -> Environment version drop-down. Then, in your job, make sure it's assigned to the Serverless com...

2 More Replies
georgemichael40
by New Contributor III
  • 229 Views
  • 4 replies
  • 5 kudos

Resolved! Python Wheel in Serverless Job in DAB

Hey, I am trying to run a job with serverless compute that runs Python scripts. I need the paramiko package to get my scripts to work. I managed to get it working by doing:
environments:
  - environment_key: default
    # Full documentation of this spec can be...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 5 kudos

Hi @georgemichael40, Put your whl file in the volume and then you can reference it the following way in your DAB file:
dependencies:
  - "/Volumes/workspace/default/my_volume/hellopkg-0.0.1-py3-none-any.whl"
https://docs.databricks.com/aws/en/compute/s...

3 More Replies
dndeng
by New Contributor
  • 93 Views
  • 2 replies
  • 0 kudos

Query to calculate cost of task from each job by day

I am trying to find the cost per task in each job every time it was executed (daily), but I am currently getting very large numbers due to duplicates. Can someone help me?
WITH workspace AS (
    SELECT account_id, workspace_id, workspace_name,...

Latest Reply
nayan_wylde
Honored Contributor III
  • 0 kudos

It seems the duplicates are caused by the task_change_time column from the job_tasks table. Although the table definition says task_change_time is the last time the task was modified, it captures different times because it is an SCD Type 2 table. ...
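
A hedged sketch of the usual fix: collapse the SCD2 history to the latest row per task before joining to billing data. The table and column names below follow the thread's description of the job_tasks system table:

```python
# Hedged sketch: keep only the most recent SCD2 record per task so the
# cost join stops fanning out. Assumes the system.lakeflow.job_tasks table
# and its task_change_time column as described above.
latest_tasks = spark.sql("""
    SELECT *
    FROM system.lakeflow.job_tasks
    QUALIFY ROW_NUMBER() OVER (
        PARTITION BY workspace_id, job_id, task_key
        ORDER BY task_change_time DESC
    ) = 1
""")
latest_tasks.createOrReplaceTempView("job_tasks_latest")  # join costs against this
```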

1 More Replies
thib
by New Contributor III
  • 8696 Views
  • 5 replies
  • 3 kudos

Can we use multiple git repos for a job running multiple tasks?

I have a job running multiple tasks: Task 1 runs a machine learning pipeline from git repo 1, and Task 2 runs an ETL pipeline from git repo 1. Task 2 is actually a generic pipeline and should not be checked into repo 1, and will be made available in another re...

Latest Reply
tors_r_us
New Contributor II
  • 3 kudos

Had this same problem. The fix was to have two workflows with no triggers, each pointing to the respective git repo, then set up a third workflow with the appropriate triggers/schedule which calls the first two. A workflow can run other workflows.

4 More Replies
shreya24
by New Contributor II
  • 1847 Views
  • 1 reply
  • 2 kudos

Geometry Type not converted into proper binary format when reading through Federated Catalog

Hi, when reading a geometry column from SQL Server into Databricks through a foreign/federated catalog, the transformation of the geometry type to binary type is not in a proper format, or I am not able to find a way to decode that binary. For example, for p...

Latest Reply
AbhaySingh
New Contributor
  • 2 kudos

Give this a shot. Create a view in SQL Server that converts geometry to Well-Known Text before federating:
-- Create view in SQL Server
CREATE VIEW dbo.vw_spatial_converted AS
SELECT
    id,
    location_name,
    location.STAsText() AS geom_wkt,
    location.STSrid() AS sri...
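
On the Databricks side you can then read the view through the federated catalog and parse the WKT strings; a small sketch assuming shapely is installed and using a placeholder catalog path:

```python
# Hedged sketch: parse the WKT column produced by the SQL Server view.
# Assumes shapely is available (%pip install shapely) and that
# fed_catalog.dbo.vw_spatial_converted is the federated path (placeholder).
from shapely import wkt

df = spark.table("fed_catalog.dbo.vw_spatial_converted")
for row in df.limit(5).collect():
    geom = wkt.loads(row.geom_wkt)  # shapely geometry object
    print(row.location_name, geom.geom_type, geom.bounds)
```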

chanukya-pekala
by Contributor III
  • 216 Views
  • 4 replies
  • 4 kudos

Resolved! Lost access to Databricks account console on Free Edition

Hi everyone, I'm having trouble accessing the Databricks account console and need some guidance. Background: I successfully set up Databricks Free Edition with Terraform using my personal account, and I was able to access accounts.cloud.databricks.com to obta...

Latest Reply
chanukya-pekala
Contributor III
  • 4 kudos

I just double-checked: I was able to manage my personal workspace through Terraform without the account console. Thanks again.

3 More Replies
stevewb
by New Contributor III
  • 122 Views
  • 1 reply
  • 0 kudos

Errors in runtime 17 today

Anyone else getting a bunch of errors on runtime 17 today? A load of our pipelines that were running smoothly suddenly stopped working with driver crashes. I was able to get us running again by downgrading to runtime 16, but curious if anyone else hi...

Latest Reply
MuthuLakshmi
Databricks Employee
  • 0 kudos

@stevewb Driver crash is very generic. We may need to dig deeper here to understand the root cause. Can you raise a support ticket with us? 

surajitDE
by New Contributor III
  • 336 Views
  • 2 replies
  • 0 kudos

Question on assigning email_notification_group to DLT Job Notifications?

Hi Folks, I wanted to check if there's a way to assign an email notification group to a Delta Live Tables (DLT) job for notifications. I know that it's possible to configure Teams workflows and email notification groups for Databricks jobs, but in the ...

Latest Reply
SP_6721
Honored Contributor
  • 0 kudos

Hi @surajitDE, At the moment, DLT doesn't support linking existing email notification groups or Teams workflows directly. You can only add individual email addresses in the DLT UI. If you have a group email alias, you can use it as a single address so...

1 More Replies
sgreenuk
by New Contributor
  • 187 Views
  • 1 reply
  • 0 kudos

Orphaned __dlt_materialization schemas left behind after dropping materialized views

Hi everyone, I'm seeing several internal schemas under the __databricks_internal catalog that were auto-created when I built a few materialized views in Databricks SQL. However, after dropping the materialized views, the schemas were not automatically...

Latest Reply
nayan_wylde
Honored Contributor III
  • 0 kudos

Yes, this is expected behavior in Databricks. The __databricks_internal catalog contains system-owned schemas that support features like materialized views and Delta Live Tables (DLT). When you create materialized views, Databricks generates internal...
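
If you want to audit what was left behind, a quick hedged check of the internal catalog (catalog name as described above):

```python
# Hedged sketch: list the remaining materialization schemas so you can
# match them against the materialized views that still exist.
spark.sql("SHOW SCHEMAS IN __databricks_internal").show(truncate=False)
```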

pranaav93
by New Contributor III
  • 143 Views
  • 1 reply
  • 1 kudos

Databricks Compute Metrics Alerts

Hi All, I'm looking for some implementation ideas where I can use information from the system.compute.node_timeline table to catch memory spikes and, if above a given threshold, restart the cluster through an API call. Have any of you implemented a simil...

Latest Reply
NandiniN
Databricks Employee
  • 1 kudos

Hey @pranaav93, building alerting and remediation on the system.compute.node_timeline system table is a very common use case. Check this KB: https://kb.databricks.com/en_US/clusters/getting-node-specific-instead-of-cluster-wide-memory-usage-data-from-...
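
A hedged sketch of that pattern, with a hypothetical cluster id, threshold, and lookback window, and the restart issued through the Databricks Python SDK:

```python
# Hedged sketch: check recent node memory from system.compute.node_timeline
# and restart the cluster via the Databricks SDK when a threshold is crossed.
# Cluster id, threshold, and lookback window are hypothetical.
from databricks.sdk import WorkspaceClient

CLUSTER_ID = "0123-456789-abcdefgh"
THRESHOLD_PCT = 90.0

peak = spark.sql(f"""
    SELECT MAX(mem_used_percent) AS peak
    FROM system.compute.node_timeline
    WHERE cluster_id = '{CLUSTER_ID}'
      AND start_time >= current_timestamp() - INTERVAL 10 MINUTES
""").first()["peak"]

if peak is not None and peak > THRESHOLD_PCT:
    WorkspaceClient().clusters.restart(cluster_id=CLUSTER_ID)  # kick off the restart
```

Scheduling this as a small job every few minutes gives you the alert-and-remediate loop without standing infrastructure.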

vpacik
by New Contributor
  • 2216 Views
  • 1 reply
  • 0 kudos

Databricks-connect OpenSSL Handshake failed on WSL2

When trying to set up databricks-connect on WSL2 using a 13.3 cluster, I receive the following error regarding OpenSSL CERTIFICATE_VERIFY_FAILED. The authentication is done via the SPARK_REMOTE env variable. E0415 11:24:26.646129568 142172 ssl_transport_sec...

Latest Reply
ez
New Contributor II
  • 0 kudos

@vpacik Was it solved? I have the same issue

