Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

ittzzmalind
by New Contributor
  • 129 Views
  • 2 replies
  • 0 kudos

DLT Pipeline Error - key not found: all_info_dlt_cx_utils_cod, resulting in a NoSuchElementException

A Databricks ETL pipeline issue, specifically an error with the @DP.expectorfail decorator causing the pipeline update to fail. The error message indicated 'key not found: all_info_dlt_cx_utils_cod', resulting in a NoSuchElementException. Note: if we commen...

Latest Reply
ittzzmalind
New Contributor
  • 0 kudos

@MoJaMa Thanks for the reply. The issue was in the code; the corrected code worked.
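For readers who hit similar failures: a minimal sketch of the standard expectation pattern in DLT, assuming the intent was dlt.expect_or_fail (the table, column, and expectation names here are hypothetical):

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(name="orders_clean")
@dlt.expect_or_fail("valid_order_id", "order_id IS NOT NULL")
def orders_clean():
    # Any row violating the expectation fails the pipeline update,
    # which matches the failure mode described in the post.
    return spark.read.table("bronze.orders").where(col("amount") > 0)
```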

1 More Replies
bi123
by Visitor
  • 3 Views
  • 0 replies
  • 0 kudos

How to import Python modules in a notebook?

I have a job with a notebook task that uses Python modules in a different folder than the notebook itself. When I try to import the module in the notebook, it raises a ModuleNotFoundError. I solved the problem using sys.path, but I am curious if there...
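For context, a minimal sketch of the sys.path workaround the poster describes, assuming the modules live in a sibling folder (the folder and module names are placeholders):

```python
import os
import sys

# Hypothetical layout: the shared modules live in a sibling "lib" folder
# next to the notebook's working directory.
sys.path.append(os.path.abspath("../lib"))

import my_module  # hypothetical module name
```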

databrciks
by New Contributor II
  • 51 Views
  • 2 replies
  • 0 kudos

Parametrize the DLT pipeline for dynamic loading of many tables

I need to load many tables into the Bronze layer, connecting to a SQL Server DB. How can I pass the table names dynamically in DLT, so that one piece of code loads many tables into the Bronze layer?
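One common pattern, sketched below under assumptions (placeholder host, database, secret scope, and table names): generate one bronze table per source table from a list, using a factory function so each @dlt.table closure binds its own table name.

```python
import dlt

# The table list could instead come from pipeline configuration parameters.
TABLES = ["customers", "orders", "products"]

def make_bronze_table(table_name: str):
    @dlt.table(name=f"bronze_{table_name}")
    def bronze():
        return (
            spark.read.format("sqlserver")
            .option("host", "<sql-server-host>")      # placeholder
            .option("port", "1433")
            .option("user", dbutils.secrets.get("scope", "user"))      # placeholder scope/keys
            .option("password", dbutils.secrets.get("scope", "password"))
            .option("database", "<db>")                # placeholder
            .option("dbtable", table_name)
            .load()
        )

for t in TABLES:
    make_bronze_table(t)
```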

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @databrciks, To make sure I've understood your query... Am I right in saying you want to ingest many SQL Server tables into a Bronze layer using DLT with a single reusable pipeline, where table names are passed dynamically rather than writing sepa...

1 More Replies
murtadha_s
by Databricks Partner
  • 8 Views
  • 0 replies
  • 0 kudos

Default ACL for Jobs and Clusters

Hi, I want to set a default ACL that applies to all created jobs and clusters, according to a cluster policy for example, but currently I need to apply my ACL to every created job/cluster separately. Is there a way to do that? BR,

sdurai
by Visitor
  • 58 Views
  • 2 replies
  • 0 kudos

Databricks to Salesforce Core (Not cloud)

Hi, is there any native connector available to connect Salesforce core (not cloud) in Databricks? If there is no native connector, what are the recommended approaches to connect to Salesforce core? Thanks, Subashini

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @sdurai, Yes. Databricks has a native Salesforce connector for core Salesforce (Sales Cloud / Service Cloud / Platform objects) via the Lakeflow Connect Salesforce ingestion connector. It lets you create fully managed, incremental pipelines from Sal...

1 More Replies
IM_01
by Contributor II
  • 1010 Views
  • 19 replies
  • 3 kudos

Resolved! Lakeflow SDP failed with DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_LOG

Hi, a column was deleted on the source table; when I ran LSDP it failed with error DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_LOG: Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename or drop or datatype ch...
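For context, the error name points at Delta's schema tracking feature; a minimal sketch of the documented workaround, with placeholder path and table names:

```python
# schemaTrackingLocation lets the stream follow rename/drop changes
# recorded in the Delta log instead of failing the read.
df = (spark.readStream
      .option("schemaTrackingLocation", "/checkpoints/source_schema")  # placeholder path
      .table("catalog.schema.source_table"))                           # placeholder table
```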

Latest Reply
gullsher98743
  • 3 kudos

This looks like a very practical template, especially for teams trying to structure their Data & AI strategy without overcomplicating things. The step-by-step format and examples should be really helpful for workshops and collaborative sessions. Curi...

18 More Replies
mits1
by New Contributor III
  • 102 Views
  • 7 replies
  • 0 kudos

Auto Loader inserts null rows in Delta table while reading JSON file

Hi, I am exploring schema inference and schema evolution using Auto Loader. I am reading a single-line JSON file and writing to a Delta table which does not exist already (creating it on the fly), using PySpark (below is the code). Code: spark.readStream...

Latest Reply
saurabh18cs
Honored Contributor III
  • 0 kudos

Hi @mits1, can you try adding this option as well: {"multiLine": "true"}
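A minimal sketch of where that option would go in the Auto Loader read (paths are placeholders):

```python
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("multiLine", "true")   # parse the file as one JSON document, not line-delimited
      .option("cloudFiles.schemaLocation", "/checkpoints/schema")  # placeholder path
      .load("/landing/json/"))       # placeholder path
```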

6 More Replies
stemill
by New Contributor II
  • 434 Views
  • 7 replies
  • 0 kudos

Update on Iceberg table creating duplicate records

We are using Databricks to connect to a Glue catalog which contains Iceberg tables. We are using DBR 17.2 and adding the jars org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.0 and org.apache.iceberg:iceberg-aws-bundle:1.10.0; the Spark config is then...

Latest Reply
aleksandra_ch
Databricks Employee
  • 0 kudos

Hi @stemill, the way of connecting to Iceberg tables managed by a Glue catalog that you described is not officially supported, because spark_catalog is not a generic catalog slot – it's a special, tightly-wired session catalog with a lot of assumptio...
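One commonly documented alternative is to register the Glue-backed Iceberg catalog under its own name rather than overriding spark_catalog; a hedged sketch, with placeholder catalog, bucket, and table names (these properties are normally set in the cluster's Spark config at startup):

```python
# Register a named Iceberg catalog ("glue" is a placeholder name) backed by AWS Glue.
spark.conf.set("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
spark.conf.set("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
spark.conf.set("spark.sql.catalog.glue.warehouse", "s3://<bucket>/warehouse")  # placeholder

# Reference tables through the named catalog instead of the session catalog.
spark.sql("SELECT * FROM glue.my_db.my_table").show()
```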

6 More Replies
BF7
by Contributor
  • 1619 Views
  • 3 replies
  • 2 kudos

Resolved! Using cloudFiles.inferColumnTypes with inferSchema and without defining schema checkpoint

Two issues: 1. What is the behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema? Why would you use both? 2. When can cloudFiles.inferColumnTypes be used without a schema checkpoint? How does that affect the behavior of cloud...

Latest Reply
Louis_Frolio
Databricks Employee
  • 2 kudos

Behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema: when cloudFiles.inferColumnTypes is enabled, Auto Loader attempts to identify the appropriate data types for columns instead of defaulting everything to strings, which i...
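A minimal sketch of the options under discussion (the cloudFiles.schemaLocation path, i.e. the "schema checkpoint", and the load path are placeholders):

```python
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.inferColumnTypes", "true")  # typed columns instead of all-string defaults
      .option("cloudFiles.schemaLocation", "/checkpoints/autoloader_schema")  # the "schema checkpoint"
      .load("/landing/events/"))                      # placeholder path
```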

2 More Replies
beaglerot
by Databricks Partner
  • 132 Views
  • 4 replies
  • 5 kudos

Python Data Source API — worth using?

Hi all, I’ve been looking into the Python Data Source API and wanted to get some feedback from others who may be experimenting with it. One of the more common challenges I run into is working with applications that expose APIs but don’t have out-of-the...

Latest Reply
Louis_Frolio
Databricks Employee
  • 5 kudos

Adding on to @edonaire's points, which are accurate. @beaglerot, your contacts project is the right use case for the pattern you have: small data, infrequent changes, direct read into bronze. That works. The real question you're asking is what happens when t...
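For readers new to it, a hedged sketch of a custom batch reader with the Python Data Source API (PySpark 4 / recent DBR); the endpoint, record fields, and class names are made up for illustration:

```python
from pyspark.sql.datasource import DataSource, DataSourceReader

class ContactsDataSource(DataSource):
    @classmethod
    def name(cls):
        return "contacts_api"  # hypothetical short name used in .format()

    def schema(self):
        return "id INT, name STRING, email STRING"

    def reader(self, schema):
        return ContactsReader(self.options)

class ContactsReader(DataSourceReader):
    def __init__(self, options):
        self.url = options.get("url")

    def read(self, partition):
        import requests  # assumed available on the cluster
        # Each yielded tuple must match the declared schema.
        for record in requests.get(self.url, timeout=30).json():
            yield (record["id"], record["name"], record["email"])

# Register once per session, then read like any built-in source.
spark.dataSource.register(ContactsDataSource)
df = spark.read.format("contacts_api").option("url", "https://api.example.com/contacts").load()
```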

3 More Replies
Manjusha
by New Contributor II
  • 101 Views
  • 3 replies
  • 1 kudos

Running Python functions (written using Polars) on Databricks

Hi, we are planning to rewrite our application (which was originally running in R) in Python. We chose to use Polars as it seems to be faster than pandas. We have functions written in R which we are planning to convert to Python. However, in one of ...

Latest Reply
Manjusha
New Contributor II
  • 1 kudos

Thank you @Louis_Frolio and @pradeep_singh for the detailed explanation. I will discuss your inputs with the team and get back in case we have more questions.

2 More Replies