Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Karlo_Kotarac
by New Contributor III
  • 1168 Views
  • 2 replies
  • 0 kudos

Different error handling behavior after DB runtime upgrade from 13.3 to 14.3

Hi! We want to upgrade the DB runtime on our clusters from 13.3 LTS to 14.3 LTS. Currently, everything looks good except for the different error handling in the new runtime. For example, the error in the 13.3 LTS runtime looks familiar: while the same ...

Latest Reply
Yeshwanth
Databricks Employee
  • 0 kudos

@Karlo_Kotarac Where do you see this error:   

1 More Reply
thiagoawstest
by Contributor
  • 1534 Views
  • 1 reply
  • 0 kudos

Resolved! Mount an S3 bucket

Hi, I have Databricks configured on AWS. I need to mount some S3 buckets on Databricks under /mnt, but I have some questions: How can a bucket be mounted so that all clusters and users have access to it, without needing to mount it every time the cluster...

Latest Reply
Yeshwanth
Databricks Employee
  • 0 kudos

@thiagoawstest To mount an S3 bucket in Databricks on AWS so that all clusters and users have access to it without needing to remount each time, and without creating an access key in AWS, follow these steps:  Mounting an S3 Bucket Using an AWS Instan...
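For reference, a minimal sketch of the mount call described in that reply, assuming the clusters already have an instance profile attached that grants access to the bucket (the bucket and mount names below are placeholders):

```python
# Placeholder names; the attached instance profile must already allow access to the bucket.
aws_bucket_name = "my-example-bucket"
mount_name = "my-example-mount"

# A mount is workspace-wide: run this once and every cluster/user can read
# the bucket under /mnt/<mount_name> without re-mounting.
dbutils.fs.mount(
    source=f"s3a://{aws_bucket_name}",
    mount_point=f"/mnt/{mount_name}",
)

display(dbutils.fs.ls(f"/mnt/{mount_name}"))
```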

NaeemS
by New Contributor III
  • 704 Views
  • 1 reply
  • 0 kudos

Custom transformers with mlflow

Hi everyone, I have created a Spark pipeline in which I have a stage that is a custom Transformer. Now I am using Feature Stores to log my model. But the issue is that the custom Transformer stage is not serialized properly and is not logged along wi...

Latest Reply
NaeemS
New Contributor III
  • 0 kudos

Hi @Retired_mod, can you please guide me on the additional steps I'll need to handle serialization of custom transformers so I can use them in my model pipeline via Feature Stores? Thanks!
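For anyone landing here later, a common pattern (a sketch under my own assumptions, not an answer given in this thread) is to mix DefaultParamsReadable/DefaultParamsWritable into the custom transformer and keep its class importable on the cluster, so pipeline persistence can serialize the stage:

```python
from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql import functions as F


class UpperCaseTransformer(Transformer, HasInputCol, HasOutputCol,
                           DefaultParamsReadable, DefaultParamsWritable):
    """Toy custom transformer; the Readable/Writable mixins are what make
    Pipeline/PipelineModel save() and load() work for this stage."""

    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        super().__init__()
        kwargs = self._input_kwargs
        self._set(**kwargs)

    def _transform(self, df):
        return df.withColumn(self.getOutputCol(), F.upper(F.col(self.getInputCol())))
```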

saikumar_ganji
by New Contributor III
  • 2030 Views
  • 7 replies
  • 0 kudos

DATABRICKS DATA ENGINEER ASSOCIATE EXAM GOT SUSPENDED

I had a pathetic experience while attempting my Databricks certification. Abruptly, the proctor asked me to show my desk; after I showed it, he/she asked multiple times and then suspended my exam, saying I had exceeded eye movement, and I had almost comple...

Latest Reply
saikumar_ganji
New Contributor III
  • 0 kudos

@Cert-Team @Cert-TeamOPS @Retired_mod Can you please look into this issue? I have to complete my exam ASAP.

6 More Replies
Karlo_Kotarac
by New Contributor III
  • 2236 Views
  • 4 replies
  • 0 kudos

Run failed with error message ContextNotFound

Hi all! Recently we've been getting lots of these errors when running Databricks notebooks: At that time we observed a DRIVER_NOT_RESPONDING (Driver is up but is not responsive, likely due to GC.) log on the single-user cluster we use. Previously when thi...

Latest Reply
Karlo_Kotarac
New Contributor III
  • 0 kudos

In case somebody else runs into the same issue: after investigation by Databricks support, the conclusion was that the driver's memory was overloaded ('Driver Not Responding' error message in the event log), but it can happen that we don't get the co...

3 More Replies
Lily99
by New Contributor
  • 1026 Views
  • 1 reply
  • 0 kudos

SQL function does not work in 'Create Function'

This SQL statement works fine by itself: SELECT COUNT(1) FROM tablea f INNER JOIN tableb t ON lower(f.col1) = t.col1, but if I want to use it inside a function: CREATE OR REPLACE FUNCTION fn_abc(var1 ...

Latest Reply
lucasrocha
Databricks Employee
  • 0 kudos

Hello @Lily99, I hope this message finds you well. Could you please try the code below and let me know the results? CREATE OR REPLACE FUNCTION fn_abc(var1 STRING, var2 STRING) RETURNS DOUBLE COMMENT 'test function' RETURN SELECT CASE WHEN EXISTS...
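To make the shape of that suggestion easier to read, here is a formatted sketch of the same statement with placeholder tables and predicates filled in (the EXISTS body below is my own guess, not the reply's actual code), runnable from a notebook via spark.sql:

```python
# Placeholder tables/columns; the EXISTS predicate is illustrative only.
spark.sql("""
    CREATE OR REPLACE FUNCTION fn_abc(var1 STRING, var2 STRING)
    RETURNS DOUBLE
    COMMENT 'test function'
    RETURN SELECT
      CASE
        WHEN EXISTS (
          SELECT 1
          FROM tablea f
          INNER JOIN tableb t ON lower(f.col1) = t.col1
          WHERE f.col2 = var1 AND t.col2 = var2
        ) THEN CAST(1 AS DOUBLE)
        ELSE CAST(0 AS DOUBLE)
      END
""")

spark.sql("SELECT fn_abc('a', 'b') AS result").show()
```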

Yash_542965
by New Contributor II
  • 1308 Views
  • 1 reply
  • 0 kudos

DLT aggregation problem

I'm utilizing SQL to perform aggregation operations within a gold layer of a DLT pipeline. However, I'm encountering an error when running the pipeline while attempting to return a data frame using spark.sql. Could anyone please assist me with the SQL...

Latest Reply
lucasrocha
Databricks Employee
  • 0 kudos

Hello @Yash_542965, I hope this message finds you well. Could you please share a sample of code you are using so that we can check it further? Best regards, Lucas Rocha
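For context while the code sample is pending, the pattern that normally works for a SQL aggregation in a Python DLT gold table looks like this (table and column names are placeholders): the decorated function must return the DataFrame that spark.sql produces, and upstream pipeline tables are referenced through the LIVE schema.

```python
import dlt

@dlt.table(
    name="gold_daily_sales",  # placeholder table name
    comment="Daily aggregation over the silver table (illustrative only)",
)
def gold_daily_sales():
    # Return the DataFrame from spark.sql; LIVE.* keeps the dependency
    # on the upstream silver table inside the pipeline graph.
    return spark.sql("""
        SELECT order_date, SUM(amount) AS total_amount
        FROM LIVE.silver_sales
        GROUP BY order_date
    """)
```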

vijaykumarbotla
by New Contributor III
  • 1294 Views
  • 1 reply
  • 0 kudos

Databricks Notebook error : Analysis Exception with multiple datasets

Hi All, I am getting the below error when trying to execute the code. AnalysisException: Column Is There a PO#17748 are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to ...

Latest Reply
lucasrocha
Databricks Employee
  • 0 kudos

Hello @vijaykumarbotla , I hope you're doing well. This is probably because both DataFrames contain a column with the same name, and Spark is unable to determine which one you are referring to in the select statement. To resolve this issue, you can u...
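One common way to resolve this kind of ambiguity (an illustration with made-up DataFrame and column names, not the original job's code) is to alias each DataFrame and qualify the column through the alias:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative data: the same column name ends up on both sides of a self-join.
df = spark.createDataFrame([(1, "yes"), (2, "no")], ["order_id", "Is There a PO"])

# Alias each side so Spark can tell the duplicate columns apart.
left = df.alias("l")
right = df.alias("r")

joined = left.join(right, F.col("l.order_id") == F.col("r.order_id"))

# Qualify the column through its alias to avoid the "ambiguous" AnalysisException.
result = joined.select(
    F.col("l.order_id"),
    F.col("l.`Is There a PO`").alias("is_there_a_po"),
)
result.show()
```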

User16752244127
by Contributor
  • 846 Views
  • 1 reply
  • 0 kudos
Latest Reply
lucasrocha
Databricks Employee
  • 0 kudos

Hello @User16752244127 , I hope this message finds you well. Delta Live Tables supports loading data from any data source supported by Databricks. You can find the datasources supported here Connect to data sources, and JDBC is one of them. You can a...
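A rough sketch of what that can look like in a pipeline notebook; the JDBC URL, source table, and secret scope/key names below are placeholders, not values from the thread:

```python
import dlt

@dlt.table(name="bronze_orders_jdbc", comment="Orders ingested over JDBC (placeholder config)")
def bronze_orders_jdbc():
    return (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://example-host:5432/sales")         # placeholder URL
        .option("dbtable", "public.orders")                                 # placeholder table
        .option("user", dbutils.secrets.get("jdbc-scope", "db-user"))       # placeholder secrets
        .option("password", dbutils.secrets.get("jdbc-scope", "db-password"))
        .load()
    )
```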

Sambit_S
by New Contributor III
  • 536 Views
  • 1 reply
  • 0 kudos

Exceptions are Not Getting Handled In Autoloader Write Stream

I have the below logic implemented using Databricks Autoloader. ## Autoloader write stream: it calls a forEachBatch function to write into the respective datatype catalog table # and uses a checkpoint to keep track of processed files. try: ## Observe raw ...

Latest Reply
raphaelblg
Databricks Employee
  • 0 kudos

Hello @Sambit_S, in your scenario there is a merge failure. Your query won't be able to progress as the problematic batch can't be committed to the sink. Even if you handle the exception in a try/catch block, it's impossible for the Autoloader to update...
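A sketch of the usual workaround (my own illustration, with placeholder table names and a simplified write in place of the real merge): catch the failure inside the foreachBatch function and divert the batch, so the micro-batch can still commit and the checkpoint can advance.

```python
def process_batch(batch_df, batch_id):
    """Called by foreachBatch; failures are handled here so the stream keeps going."""
    try:
        # Placeholder for the real MERGE/upsert logic.
        batch_df.write.format("delta").mode("append").saveAsTable("main.bronze.events")
    except Exception:
        # Divert the problematic batch instead of failing the whole stream.
        batch_df.write.format("delta").mode("append").saveAsTable("main.bronze.events_quarantine")

(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                                        # placeholder format
    .option("cloudFiles.schemaLocation", "/Volumes/main/raw/_schemas/events")   # placeholder path
    .load("/Volumes/main/raw/events")                                           # placeholder path
    .writeStream
    .foreachBatch(process_batch)
    .option("checkpointLocation", "/Volumes/main/raw/_checkpoints/events")      # placeholder path
    .start()
)
```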

tgen
by New Contributor II
  • 1276 Views
  • 1 reply
  • 0 kudos

Increase stack size Databricks

Hi everyone, I'm currently running a shell script in a notebook, and I'm encountering a segmentation fault. This is due to the stack size limitation. I'd like to increase the stack size using ulimit -s unlimited, but I'm facing issues with setting this...

Latest Reply
This widget could not be displayed.
  • 0 kudos
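One approach that can work here (my own suggestion, not the thread's accepted answer): raise the limit in the same shell process that runs the script, since ulimit settings do not carry over between separate notebook cells. The script path below is a placeholder.

```python
import subprocess

# Raise the stack limit and run the script in the same shell invocation;
# "/dbfs/scripts/run_heavy_job.sh" is a placeholder path.
result = subprocess.run(
    ["bash", "-c", "ulimit -s unlimited && /dbfs/scripts/run_heavy_job.sh"],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr)
```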
satishnavik
by New Contributor II
  • 7992 Views
  • 5 replies
  • 0 kudos

How to connect Databricks Database with Springboot application using JPA

We are facing an issue integrating our Spring Boot JPA application with Databricks. Below are the steps and settings we used for the integration. When we start the Spring Boot application we get a warning: HikariPool-1 - Driver doe...

Latest Reply
172036
New Contributor II
  • 0 kudos

Was there any resolution to this?  Is Spring datasource supported now?

4 More Replies
JameDavi_51481
by New Contributor III
  • 5185 Views
  • 8 replies
  • 0 kudos

Can we add tags to Unity Catalog through Terraform?

We use Terraform to manage most of our infrastructure, and I would like to extend this to Unity Catalog. However, we are extensive users of tagging to categorize our datasets, and the only programmatic method I can find for adding tags is to use SQL ...
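For anyone needing the interim SQL route mentioned above, the statements look roughly like this (catalog, schema, table, and tag names are placeholders), issued here from a notebook:

```python
# Placeholder object and tag names; tag assignment on Unity Catalog securables
# is done with ALTER ... SET TAGS statements today.
spark.sql("ALTER SCHEMA main.analytics SET TAGS ('cost_center' = 'data-eng')")
spark.sql("ALTER TABLE main.analytics.orders SET TAGS ('domain' = 'sales', 'pii' = 'false')")
```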

Latest Reply
dbruehlmeier
Contributor
  • 0 kudos

Having tags in Terraform would help a lot. Adding them at the cluster and schema level is crucial. Looking forward to an update on the open PR.

7 More Replies
djburnham
by New Contributor III
  • 1617 Views
  • 2 replies
  • 1 kudos

Resolved! How to get a list of workspace users who have the "unrestricted cluster create" entitlement ?

Hello - I'm hoping somebody can help me with this ... I have a lot of users configured with access to a workspace (hundreds) and I want to write a report to see if any of the users have the "unrestricted cluster create" entitlement in the workspace. This i...

Latest Reply
djburnham
New Contributor III
  • 1 kudos

Many thanks for your help @Yeshwanth, it put me on the right track. The API does have a filter option that looks like it complies with RFC 7644, but my attempts to use it were rather hit and miss - I suspect that as the API is in preview it is not fully imp...
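For others wanting the same report, a small sketch of the client-side variant: list users through the workspace SCIM Users API and check the entitlements field locally instead of relying on the preview server-side filter (the host and token below are placeholders).

```python
import requests

HOST = "https://<workspace-url>"      # placeholder
TOKEN = "<personal-access-token>"     # placeholder

users, start = [], 1
while True:
    resp = requests.get(
        f"{HOST}/api/2.0/preview/scim/v2/Users",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"startIndex": start, "count": 100,
                "attributes": "userName,entitlements"},
    )
    resp.raise_for_status()
    page = resp.json().get("Resources", [])
    if not page:
        break
    users.extend(page)
    start += len(page)

# "allow-cluster-create" is the SCIM entitlement behind "unrestricted cluster create".
for user in users:
    entitlements = {e.get("value") for e in user.get("entitlements") or []}
    if "allow-cluster-create" in entitlements:
        print(user.get("userName"))
```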

1 More Reply
Anonymous
by Not applicable
  • 6445 Views
  • 11 replies
  • 2 kudos

Sql Serverless Option is missing when using Azure Databricks Workspace with No Public IP and VNET Injection

Hello, after creating a Databricks Workspace in Azure with No Public IP and VNET Injection, I'm unable to use DBSQL Serverless because the option to enable it in SQL warehouse Settings is missing. Is it by design? Is it a limitation when using Privat...

Latest Reply
RomanLegion
New Contributor III
  • 2 kudos

Fixed, go to Profile -> Compute -> SQL Server Serverless -> On -> Save. For some reason this has been disabled for us.

10 More Replies
