Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

NagarajuBondala
by New Contributor II
  • 2340 Views
  • 1 reply
  • 1 kudos

Resolved! AI-Suggested Comments Not Appearing for Delta Live Tables Populated Tables

I'm working with Delta Live Tables (DLT) in Databricks and have noticed that AI-suggested comments for columns are not showing up for tables populated using DLT. Interestingly, this feature works fine for tables that are not populated using DLT. Is t...

Data Engineering
AI
Delta Live Tables
dlt
Latest Reply
Satyadeepak
Databricks Employee
  • 1 kudos

It's because materialized views (MVs) and streaming tables (STs) in DLT don't support ALTER, which is needed to persist those AI-generated comments.

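As a quick illustration of the reply above (a hypothetical sketch; the table name is made up, and `spark` is assumed to be an active Databricks session):

```python
# Persisting a column comment requires ALTER, which standard Delta tables support:
spark.sql("ALTER TABLE main.default.orders ALTER COLUMN order_id COMMENT 'Primary key'")

# The equivalent statement against a DLT materialized view or streaming table fails,
# because those objects do not support ALTER -- so AI-suggested comments cannot be saved.
```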
ls
by New Contributor III
  • 1659 Views
  • 3 replies
  • 1 kudos

Resolved! Change spark configs in Serverless compute clusters

Howdy! I wanted to know how I can change some Spark configs in Serverless compute. I have a base.yml file and tried placing: spark_conf: - spark.driver.maxResultSize: "16g" but I still get this error: [CONFIG_NOT_AVAILABLE] Configuration spark.driv...

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

To address the memory issue in your Serverless compute environment, you can consider the following strategies: Optimize the Query: Filter Early: Ensure that you are filtering the data as early as possible in your query to reduce the amount of data b...

2 More Replies
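A minimal sketch of the "filter early, keep results off the driver" advice (table names are hypothetical; assumes a Databricks environment where `spark` is predefined and spark.driver.maxResultSize cannot be changed on serverless):

```python
# Instead of collecting a large result to the driver (which hits the serverless
# result-size limit), push filters down early and write the output to a table.
events = spark.read.table("main.default.events")      # hypothetical source table
recent = events.where("event_date >= '2025-01-01'")   # filter as early as possible
recent.write.mode("overwrite").saveAsTable("main.default.events_recent")  # results stay distributed
```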
Uj337
by New Contributor III
  • 2744 Views
  • 8 replies
  • 0 kudos

Library installation failed for library due to user error for wheel file

Hi All, Recently we implemented a change to make the Databricks workspace accessible only via a private network. After this change, we found a lot of connectivity errors, like from Power BI to Databricks, Azure Data Factory to Databricks, etc. I was ...

Latest Reply
Brahmareddy
Honored Contributor III
  • 0 kudos

Hi @Uj337, How are you doing today? This issue seems to be tied to the private network setup affecting access to the .whl file on DBFS. I recommend starting by ensuring the driver node has proper access to the dbfs:/Volumes/any.whl path and that al...

7 More Replies
jordan_boles
by New Contributor II
  • 1876 Views
  • 1 reply
  • 2 kudos

Future of iceberg-kafka-connect

Databricks acquired the iceberg kafka connect repo this past summer. There are open issues and PRs that devs would like to address and collaborate on to improve the connector. But Databricks has not yet engaged with this community in the ~6 months si...

Latest Reply
Brahmareddy
Honored Contributor III
  • 2 kudos

Thanks for sharing this, @jordan_boles. Happy Data Engineering!

infinitylearnin
by New Contributor III
  • 343 Views
  • 1 reply
  • 2 kudos

Resolved! Role of Data Practitioner in AI Era

As the AI revolution takes off in 2025, there is a renewed emphasis on adopting a Data-First approach. Organizations are increasingly recognizing the need to establish a robust data foundation while preparing a skilled fleet of Data Engineers to tack...

Latest Reply
Brahmareddy
Honored Contributor III
  • 2 kudos

Good work, @infinitylearnin. Keep it up.

Cantheman
by New Contributor III
  • 904 Views
  • 11 replies
  • 0 kudos

Weird workflow error - Error in run but job does not exist

Hello, I have an error: Job link (116642657143475), Job run / Task run link (74750905368136), Status: Failed, Started at: 2025-01-14 07:48:16 UTC, Duration: 2m 40s, Launched: Manually. If I try to access this job I get: The job you are looking for may have been moved or ...

Latest Reply
Cantheman
New Contributor III
  • 0 kudos

Will do. Thanks.

10 More Replies
somedeveloper
by New Contributor III
  • 1014 Views
  • 3 replies
  • 2 kudos

Resolved! Accessing Application Listening to Port Through Driver Proxy URL

Good afternoon, I have an application, Evidently, for which I am starting a dashboard service that listens on an open port. I would like to access this through the driver proxy URL, but when starting the service and accessing it, I am given a 502 B...

Latest Reply
VZLA
Databricks Employee
  • 2 kudos

Glad it helped! Thanks for confirming the solution.

2 More Replies
iamgoda
by New Contributor III
  • 5961 Views
  • 15 replies
  • 4 kudos

Databricks SQL script slow execution in workflows using serverless

I am running a very simple SQL script within a notebook, using an X-Small SQL Serverless warehouse (that is already running). The execution time differs depending on how it's run: 4s if run interactively (and through the SQL editor); 26s if run within ...

Latest Reply
iamgoce
New Contributor III
  • 4 kudos

So I was told that the Q4 date was incorrect - in fact there is currently no ETA for when this issue will be fixed. It's considered lower priority by Databricks as not enough customers are impacted or have raised this type of an issue. I would recomm...

14 More Replies
neeth
by New Contributor III
  • 996 Views
  • 9 replies
  • 0 kudos

Databricks Connect error

Hello, I am new to Databricks and Scala. I created a Scala application on my local machine and tried to connect to my cluster in the Databricks workspace using Databricks Connect, as per the documentation. My cluster is using Databricks Runtime version 16.0 ...

Latest Reply
saurabh18cs
Honored Contributor
  • 0 kudos

try this with parameters once:

def get_remote_spark(host: str, cluster_id: str, token: str) -> SparkSession:
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.remote(host=host, cluster_id=cluster_id, token=token)...

8 More Replies
CBL
by New Contributor
  • 1432 Views
  • 1 reply
  • 0 kudos

Schema Evolution in Azure Databricks

Hi All - In my scenario, I am loading data from 100s of JSON files. The problem is that fields/columns are missing when a JSON file contains new fields. Full load: while writing JSON to Delta, use the option ("mergeSchema", "true") so that we do not miss new columns. Inc...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

For these scenarios, you can use schema evolution capabilities like mergeSchema or opt to use the new VariantType to avoid requiring a schema at time of ingest.

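A hedged sketch of the mergeSchema option the reply mentions (the path and table names are hypothetical; assumes a Databricks/Delta environment with `spark` available):

```python
# Appending JSON with schema evolution: new columns found in the incoming files
# are added to the Delta table's schema instead of being silently dropped.
df = spark.read.json("/Volumes/main/default/raw/")  # hypothetical JSON location
(df.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")  # evolve the target schema on write
   .saveAsTable("main.default.events"))
```

Alternatively, as the reply notes, ingesting the raw payload into a single VARIANT column defers schema decisions to query time.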
TheDataEngineer
by New Contributor
  • 4478 Views
  • 1 reply
  • 0 kudos

'replaceWhere' clause in spark.write for a partitioned table

Hi, I want to be clear about the 'replaceWhere' clause in spark.write. Here is the scenario: I would like to add a column to a few existing records. The table is already partitioned on the "PickupMonth" column. Here is an example without 'replaceWhere': spark.read \ .f...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

For this style of ETL, there are 2 methods. The first method, strictly for partitioned tables, is dynamic partition overwrite, which requires a Spark configuration to be set and detects which partitions are to be overwritten by scanning the input...

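The two methods in the reply can be sketched roughly as follows (table names are hypothetical; assumes a Databricks/Delta environment with `spark` available):

```python
updates = spark.read.table("main.default.trips_staging")  # hypothetical staging data

# Method 1: dynamic partition overwrite -- replaces exactly the partitions
# that appear in the incoming DataFrame.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
updates.write.format("delta").mode("overwrite").saveAsTable("main.default.trips")

# Method 2: replaceWhere -- replaces only the rows matching an explicit predicate.
(updates.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", "PickupMonth = '2020-01'")  # only this slice is rewritten
    .saveAsTable("main.default.trips"))
```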
jabori
by New Contributor
  • 2707 Views
  • 2 replies
  • 0 kudos

How can I pass job parameters to a dbt task?

I have a dbt task that will use dynamic parameters from the job: {"start_time": "{{job.start_time.[timestamp_ms]}}"}. My SQL is edited like this: select 1 as id union all select null as id union all select {start_time} as id. This causes the task to fail. How...

Latest Reply
MathieuDB
Databricks Employee
  • 0 kudos

Also, you need to pass the parameters using the --vars flag, like this: dbt run --vars '{"start_time": "{{job.start_time.[timestamp_ms]}}"}' and then reference the variable in your model SQL as {{ var('start_time') }} rather than {start_time}. You will need to modify the 3rd dbt command in your job.

1 More Replies
colospring
by New Contributor
  • 1103 Views
  • 2 replies
  • 0 kudos

create_feature_table returns error saying database does not exist while it does

Hi, I am new to Databricks and I am taking the training course on Databricks machine learning: https://www.databricks.com/resources/webinar/azure-databricks-free-training-series-asset4-track/thank-you. When executing the code to create a feature tabl...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

What would be the result if, instead of using single quotes (' '), you use backticks (` `)?

1 More Replies
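The backtick suggestion matters because Spark SQL treats single quotes as string literals and backticks as identifier quotes. A small, self-contained illustration (the database name is made up):

```python
db = "ml-training-db"  # hypothetical database name containing a hyphen

# Single quotes produce a string literal, so this statement is malformed SQL:
bad_sql = f"CREATE TABLE '{db}'.features (id INT)"

# Backticks quote the identifier, so the hyphenated name parses correctly:
good_sql = f"CREATE TABLE `{db}`.features (id INT)"

print(good_sql)  # -> CREATE TABLE `ml-training-db`.features (id INT)
```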
ls
by New Contributor III
  • 525 Views
  • 2 replies
  • 1 kudos

Resolved! Are lambda functions considered bad practice?

As the title suggests, I have a bunch of lambda functions within my notebooks and I wanted to know if it is considered "bad" to have them in there. output_list = json_files.mapPartitions(lambda partition: iter([process_partition(partition)])) \ .f...

Latest Reply
Satyadeepak
Databricks Employee
  • 1 kudos

Using lambda functions within notebooks is not inherently "bad," but there are some considerations to keep in mind. While this code is functional, chaining multiple lambda functions can reduce readability and debugging capabilities in Databricks note...

1 More Replies
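To illustrate the readability point, here is a pure-Python sketch (no Spark required) of replacing the inline lambda with a named function; `process_partition` is a stand-in for the real per-partition logic:

```python
def process_partition(rows):
    # stand-in for the real per-partition work: here, just sum the rows
    return [sum(rows)]

def summarize(partition):
    # named equivalent of: lambda partition: iter([process_partition(partition)])
    # -- easier to unit-test, and it shows up by name in stack traces
    return iter([process_partition(list(partition))])

# Simulating mapPartitions over two partitions of plain Python data:
partitions = [[1, 2, 3], [4, 5]]
output = [result for part in partitions for result in summarize(part)]
print(output)  # -> [[6], [9]]
```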
lauraxyz
by Contributor
  • 312 Views
  • 1 reply
  • 0 kudos

Is there a way to analyze/monitor WRITE operations in a Notebook

I have user input as a Notebook, which processes data and saves it to a global temp view. Now I have my caller notebook execute the input Notebook with the dbutils.notebook API. Since the user can do anything in their notebook, I would like to analyze...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @lauraxyz, I think you can use the system tables and audit logs to achieve that monitoring: https://docs.databricks.com/en/admin/account-settings/audit-logs.html

