Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

oye
by New Contributor II
  • 657 Views
  • 3 replies
  • 0 kudos

Unavailable GPU compute

Hello, I would like to create an ML compute with a GPU. I am on GCP europe-west1, and the only available options for me are the G2 family and one instance of the A3 family (a3-highgpu-8g [H100]). I have been trying multiple times at different times, but I ...

Latest Reply
SP_6721
Honored Contributor II
  • 0 kudos

Hi @oye, you're hitting a cloud capacity issue, not a Databricks configuration problem. The Databricks GCP GPU docs list A2 and G2 as the supported GPU instance families. A3/H100 is not in the supported list: https://docs.databricks.com/gcp/en/comput...

2 More Replies
969091
by Databricks Partner
  • 40520 Views
  • 11 replies
  • 10 kudos

Send custom emails from a Databricks notebook without using a third-party SMTP server. Would like to utilize an existing Databricks SMTP server or the Databricks API.

We want to use an existing Databricks SMTP server, or to know if the Databricks API can be used to send custom emails. Databricks Workflows sends email notifications on success, failure, etc. of jobs, but cannot send custom emails. So we want to send custom emails to di...
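For context on what "custom emails from a notebook" typically looks like: Databricks does not expose its internal SMTP server, so a common workaround is to build the message with Python's standard library and hand it to any SMTP relay your network can reach. The hostnames, addresses, and credentials below are placeholders, not anything Databricks provides:

```python
# Sketch: composing a custom email in a notebook with Python's stdlib.
# The relay host, sender, and recipient are hypothetical placeholders;
# you still need a reachable SMTP server of your own.
import smtplib
from email.mime.text import MIMEText

def build_message(sender: str, recipient: str, subject: str, body: str) -> MIMEText:
    """Assemble a plain-text email message."""
    msg = MIMEText(body)
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    return msg

msg = build_message(
    "jobs@example.com", "team@example.com",
    "Pipeline finished", "All tables refreshed successfully.",
)

# Sending is commented out because it needs a real relay; credentials
# would normally come from a Databricks secret scope, not literals:
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()
#     server.login("user", "password")
#     server.send_message(msg)
```

This keeps message construction testable locally even when no relay is available.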

Latest Reply
Shivaprasad
Contributor
  • 10 kudos

Were you able to get the custom email working from a Databricks notebook? I was trying but was not successful. Let me know.

10 More Replies
alesventus
by Contributor
  • 737 Views
  • 5 replies
  • 1 kudos

Resolved! Power BI refresh job task

I have tried the Databricks job task to refresh a Power BI dataset and I have found two issues. 1. I set up tables in Power BI Desktop using Import mode. After deploying the model to Power BI Service, I was able to download it as an Import mode model. However...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Can you send a screenshot of the refresh power BI task in the jobs UI within Databricks please?  

4 More Replies
timstrath
by New Contributor
  • 559 Views
  • 1 reply
  • 1 kudos

Resolved! Failed to create ingestion gateway due to no 'serverless compute'

Failed to create ingestion gateway: "Pipelines targeting catalogs using Default Storage must use serverless compute. If you don't have access to serverless compute, please contact Databricks to enable this feature for your workspace."

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @timstrath, it seems that your catalog is backed by default storage. In that case the error is pretty explicit: you need to use serverless compute to create a Lakeflow ingestion pipeline if your catalog uses default storage (BTW, I think you...

seefoods
by Valued Contributor
  • 979 Views
  • 2 replies
  • 3 kudos

Resolved! setup databricks connect on VsCode and PyCharm

Hello guys, does someone know the best practices to set up Databricks Connect for PyCharm and VS Code using Docker, a Justfile, and a .env file? Cordially, Seefoods

Latest Reply
Gecofer
Contributor II
  • 3 kudos

Hi @seefoods! I've worked with Databricks Connect and VS Code in different projects, and although your question mentions Docker, Justfile, and .env, the “best practices” really depend on what you're trying to do. Here's what has worked best for me: 1.- D...
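On the .env part of the question, a common pattern is to keep connection settings out of code entirely and load them into the environment at startup. A minimal stdlib sketch (no third-party dotenv package assumed); the variable names follow the ones Databricks Connect reads from the environment, so verify them against your client version:

```python
# Minimal .env loader sketch for Databricks Connect settings.
# Parses KEY=VALUE lines, skipping blanks and comments, into os.environ.
import os

def load_env(path: str) -> dict[str, str]:
    """Read a .env file and export its entries to the process environment."""
    loaded: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded

# Example .env contents (values are placeholders):
#   DATABRICKS_HOST=https://adb-1234.azuredatabricks.net
#   DATABRICKS_TOKEN=dapi...        # keep out of version control
#   DATABRICKS_CLUSTER_ID=0123-456789-abcdefgh
```

The same file can then be referenced from a Justfile or mounted into a Docker container, so all three tools share one source of configuration.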

1 More Replies
rc10000
by New Contributor III
  • 992 Views
  • 2 replies
  • 3 kudos

Resolved! Databricks Engineer - DEA Exam vs Training

Hi, I love the Databricks resources but I'm a little confused on what training to take. My focus is studying and practicing for the Databricks Engineer Associate exam, but when I hear of the 'training', I'm not sure which training people are referrin...

Latest Reply
Advika
Community Manager
  • 3 kudos

Hello @rc10000! +1 to what @Louis_Frolio mentioned above. The Learning Plan is designed for users preparing for the Databricks Certified Data Engineer Associate and Professional exams. Also, below are a few paths, depending on what you're looking for: ...

1 More Replies
rc10000
by New Contributor III
  • 424 Views
  • 1 reply
  • 1 kudos

Resolved! Lakeflow Connect - Databricks Data Engineer Associate Exam Post-July 2025

Hi, I'm asking another Databricks Data Engineer Associate Exam Dec 2025 question. For those who have taken the DEA exam, is Lakeflow Connect a relevant topic for the test? Been a little confused on what resource to rely on besides the official study ...

Latest Reply
SP_6721
Honored Contributor II
  • 1 kudos

Hi @rc10000,Lakeflow Connect is mentioned in the exam guide under training, but it’s more about the ingestion concepts. These topics come under the Development & Ingestion section. I’d suggest following the official exam guide first and Databricks Ac...

Richard3
by New Contributor III
  • 1267 Views
  • 6 replies
  • 6 kudos

Resolved! IDENTIFIER in SQL Views not supported?

Dear community, we are phasing out the dollar parameter `${catalog_name}` because it has been deprecated since runtime 15.2. We use this parameter in many queries, and it should now be replaced by the IDENTIFIER clause. In the query below, where we retrieve data...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 6 kudos

I have good news: in runtime 18, IDENTIFIER and parameter markers are supported everywhere! We need to wait a month or two as the SQL warehouse and serverless are still on runtime 17.

5 More Replies
lindsey
by New Contributor II
  • 2887 Views
  • 1 reply
  • 1 kudos

"Error: cannot read mws credentials: invalid Databricks Account configuration" on TF Destroy

I have a terraform project that creates a workspace in Databricks, assigns it to an existing metastore, then creates external location/storage credential/catalog. The apply works and all expected resources are created. However, without touching any r...

Latest Reply
eduardo_287
New Contributor II
  • 1 kudos

I have the same problem, were you able to solve it?

ndw
by New Contributor III
  • 898 Views
  • 5 replies
  • 1 kudos

Resolved! Extract Snowflake data based on environment

Hi all, in the development workspace I need to extract data from a table/view in the Snowflake development environment. An example table is called VD_DWH.SALES.SALES_DETAIL. When we deploy the code into production, it needs to extract data from a table/vi...

Latest Reply
nayan_wylde
Esteemed Contributor II
  • 1 kudos

Create a single job that runs your migration notebook. In the job settings, under Parameters, add a key like env with a default value (e.g., dev). When you create different job runs (or schedule them), override the parameter: for development runs, set e...
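The parameter-driven pattern in this reply can be sketched as a small lookup in the notebook. The dev table name comes from the question; the prod database name "VP_DWH" is a hypothetical placeholder, since the original post's prod name is truncated:

```python
# Sketch of mapping the job's `env` parameter to a Snowflake table.
# "VP_DWH" below is a made-up prod database name -- substitute your own.
TABLE_BY_ENV = {
    "dev": "VD_DWH.SALES.SALES_DETAIL",
    "prod": "VP_DWH.SALES.SALES_DETAIL",  # hypothetical prod name
}

def resolve_table(env: str) -> str:
    """Return the fully qualified table for the given environment."""
    try:
        return TABLE_BY_ENV[env]
    except KeyError:
        raise ValueError(f"Unknown env {env!r}; expected one of {sorted(TABLE_BY_ENV)}")

# In the notebook, `env` would come from the job parameter, e.g.:
# env = dbutils.widgets.get("env")
table = resolve_table("dev")
print(table)
```

The resolved name can then be passed to whatever Snowflake reader you use, so only the single `env` parameter differs between dev and prod runs.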

4 More Replies
angel_ba
by New Contributor II
  • 3023 Views
  • 3 replies
  • 0 kudos

unity catalog system.access.audit lag

Hello, we have a Unity Catalog-enabled workspace. To get the completion time of a pipeline that runs multiple times a day, I am checking the system.access.audit table. Comparing the completion time of the pipeline to other pipeline times, I am creat...

Latest Reply
Raman_Unifeye
Honored Contributor III
  • 0 kudos

@angel_ba - This is expected/designed behaviour. Audit logs are ingested into the system tables asynchronously; Databricks batches these events before surfacing them in UC system tables. Alternatively, perhaps the best way is to use the Jobs API for start/compl...

2 More Replies
Gaganmjain_012
by New Contributor
  • 391 Views
  • 1 reply
  • 0 kudos

AI/BI Genie

I was working with Genie and started using the Research agent, and now I want to manage Genie as shareable Infrastructure as Code, where I can manage all the changes through GitHub. So does anyone have any suggestions on how to do this in a best optimi...

Latest Reply
saurabh18cs
Honored Contributor III
  • 0 kudos

Hi @Gaganmjain_012, once it's in GitHub, if you want to deploy via asset bundles, then https://github.com/databricks/cli/issues/3008 looks like an open request.

hidden
by New Contributor II
  • 611 Views
  • 1 reply
  • 1 kudos

integrating linear app with databricks

I want to integrate the Linear app with Databricks. The moment any job fails, I want to create a Linear task with the error in the description. Can you guide me on implementing this?

Latest Reply
ManojkMohan
Honored Contributor II
  • 1 kudos

@hidden Configure a Generic Webhook notification destination in Databricks that points to your own small service (e.g., AWS Lambda). In that service, parse the payload, detect job failure events, extract the relevant error/stack trace, and call the Linear Gr...
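The webhook-to-Linear handler this reply describes can be sketched as pure payload transformation. The webhook field names (`job.settings.name`, `run.run_id`, `run.state.result_state`) and the Linear `issueCreate` mutation shape are written from memory of the respective docs, so verify both schemas before relying on them:

```python
# Sketch of the Lambda-style handler: turn a Databricks job-failure
# webhook payload into a Linear GraphQL request body. Field names on
# both sides are assumptions to check against the current docs.
LINEAR_MUTATION = """
mutation IssueCreate($input: IssueCreateInput!) {
  issueCreate(input: $input) { success }
}
"""

def build_linear_request(payload: dict, team_id: str) -> dict:
    """Build the GraphQL request body for creating a Linear issue."""
    run = payload.get("run", {})
    job = payload.get("job", {}).get("settings", {})
    title = f"Databricks job failed: {job.get('name', 'unknown job')}"
    description = (
        f"Run ID: {run.get('run_id', 'n/a')}\n"
        f"State: {run.get('state', {}).get('result_state', 'n/a')}\n"
    )
    return {
        "query": LINEAR_MUTATION,
        "variables": {"input": {"teamId": team_id,
                                "title": title,
                                "description": description}},
    }

# The service would POST this body as JSON to https://api.linear.app/graphql
# with an "Authorization" header holding a Linear API key.
sample = {"job": {"settings": {"name": "nightly_etl"}},
          "run": {"run_id": 42, "state": {"result_state": "FAILED"}}}
body = build_linear_request(sample, team_id="TEAM-123")
```

Keeping the transformation as a pure function makes it easy to unit-test the handler without hitting either API.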

rc10000
by New Contributor III
  • 962 Views
  • 1 reply
  • 2 kudos

Resolved! Databricks Data Engineer Associate Exam Dec 2025

Hi, I am prepping for the Databricks DEA exam. I am seeing some conflicts with DLT/DP or declarative pipeline syntax. I am trying to see if the most up to date syntax is going to be on the exam as opposed to the legacy DLT. For example, current DP sy...

Latest Reply
kiwi286dew
New Contributor III
  • 2 kudos

Hello, @rc10000 You are correct to prioritize the current syntax. The most up-to-date Databricks Certified Data Engineer Associate exam will focus on the declarative syntax within Lakeflow Spark Declarative Pipelines. You should prepare for CREATE OR...

dgahram
by New Contributor
  • 369 Views
  • 1 reply
  • 1 kudos

Resolved! DLT File Level Deduplication

I want to create a DLT pipeline that incrementally processes CSV files arriving daily. However, some of those files are duplicates - they have the same names and data but are in different directories. What is the best way to handle this? I'm assuming ...

Latest Reply
K_Anudeep
Databricks Employee
  • 1 kudos

Hello @dgahram, Auto Loader tracks ingestion progress by persisting discovered file metadata in a RocksDB store within the checkpoint, which provides “exactly-once” processing for discovered files. Doc: https://docs.databricks.com/aws/en/ingestion...
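One caveat worth spelling out: Auto Loader keys its "seen files" state on the full path, so identical files in different directories still count as new. The dedup logic you would add on top (in a pipeline, a drop-duplicates on the file-name column rather than the path) can be illustrated in plain Python:

```python
# Plain-Python sketch of file-level dedup by basename. In a real DLT
# pipeline this would be a dropDuplicates on a file-name column derived
# from the input file path, not a Python loop -- this just shows the keying.
import os

def first_per_name(paths: list[str]) -> list[str]:
    """Keep only the first path seen for each file name."""
    seen: set[str] = set()
    kept = []
    for p in paths:
        name = os.path.basename(p)
        if name not in seen:   # same name in another directory -> duplicate
            seen.add(name)
            kept.append(p)
    return kept

paths = ["/landing/a/sales_2024.csv",
         "/landing/b/sales_2024.csv",   # duplicate name, different directory
         "/landing/a/sales_2025.csv"]
print(first_per_name(paths))
```

If the file contents (not just names) determine duplicates, keying on a content hash instead of the basename is the safer variant.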
