Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Nidhig
by Contributor
  • 267 Views
  • 1 replies
  • 1 kudos

Resolved! Conversational Agent App integration with genie in Databricks

Hi, I have recently explored the conversational agent app feature from the marketplace, integrated with a Genie Space. The connection setup went well, but I found a sync issue between the app and the Genie space. Even after multiple deployments I couldn't see...

Latest Reply
HariSankar
Contributor III

Hi @Nidhig, this isn't expected behavior. It usually happens when the app's service principal lacks permissions to access the SQL warehouse, Genie Space, or underlying Unity Catalog tables. Try these fixes: --> SQL Warehouse: Go to Compute -> SQL Warehou...
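The table-access part of this fix can be sketched as Unity Catalog grants. This is a minimal sketch with hypothetical service-principal, catalog, schema, and table names; the warehouse (CAN USE) and Genie space (CAN RUN) permissions are granted from the UI, not via SQL.

```python
# Minimal sketch of the Unity Catalog grants the app's service principal
# typically needs. All names below are hypothetical placeholders.
sp = "`genie-app-sp`"  # hypothetical service principal
grants = [
    f"GRANT USE CATALOG ON CATALOG sales TO {sp}",
    f"GRANT USE SCHEMA ON SCHEMA sales.reporting TO {sp}",
    f"GRANT SELECT ON TABLE sales.reporting.orders TO {sp}",
]
# Inside the workspace you would run each statement, e.g.:
# for g in grants:
#     spark.sql(g)
```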

Abrarali8708
by New Contributor II
  • 546 Views
  • 4 replies
  • 4 kudos

Resolved! Node type not available in Central India (Student Subscription)

Hi Community, I have deployed an Azure Databricks workspace in the Central India region using a student subscription. While trying to create a compute resource, I encountered an error stating that the selected node type is not available in Central Ind...

Latest Reply
ManojkMohan
Honored Contributor

@Abrarali8708 As discussed, can you try managing the Azure Policy definition: locate the policy definition ID /providers/Microsoft.Authorization/policyDefinitions/b86dabb9-b578-4d7b-b842-3b45e95769a1, then modify the parameter listOfAllowedLocations to inclu...
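The suggestion above amounts to updating an "Allowed locations" policy assignment. A hedged sketch of the Azure CLI call, assembled as a string: the assignment name and region list are hypothetical, while the policy definition ID is the one quoted in the reply.

```python
import json

# Policy definition ID from the reply above ("Allowed locations" built-in policy).
policy_def = ("/providers/Microsoft.Authorization/policyDefinitions/"
              "b86dabb9-b578-4d7b-b842-3b45e95769a1")

# Hypothetical: extend the allowed regions so other node types become usable.
params = {"listOfAllowedLocations": {"value": ["centralindia", "southindia"]}}

cmd = (
    "az policy assignment create --name allow-extra-regions "  # hypothetical name
    f"--policy {policy_def} --params '{json.dumps(params)}'"
)
```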

3 More Replies
Michał
by New Contributor III
  • 1130 Views
  • 5 replies
  • 3 kudos

How to process a streaming Lakeflow Declarative Pipeline in batches

Hi, I've got a problem and I have run out of ideas as to what else I can try. Maybe you can help? I've got a Delta table with hundreds of millions of records on which I have to perform relatively expensive operations. I'd like to be able to process some...

Latest Reply
mmayorga
Databricks Employee

Hi @Michał, one detail/feature to consider when working with Declarative Pipelines is that they manage and auto-tune configuration aspects, including rate limiting (maxBytesPerTrigger or maxFilesPerTrigger). Perhaps that's why you could not see this...
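Outside a Declarative Pipeline, where these settings are auto-tuned, the rate-limiting options can be set explicitly on a plain Structured Streaming read. A minimal sketch; the option names are standard Spark options for Delta sources, and the table name is hypothetical.

```python
# Spark Structured Streaming rate-limiting options for a Delta source.
stream_options = {
    "maxFilesPerTrigger": "100",         # cap files per micro-batch
    "maxBytesPerTrigger": str(1 << 30),  # cap bytes per micro-batch (~1 GiB)
}

# In a workspace you would apply them like:
# (spark.readStream
#     .options(**stream_options)
#     .table("catalog.schema.big_table"))  # hypothetical table name
```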

4 More Replies
Data_NXT
by New Contributor III
  • 628 Views
  • 3 replies
  • 3 kudos

Resolved! To change ownership of a materialized view

We are working in a Unity Catalog-enabled Databricks workspace, and we have several materialized views (MVs) that were created through a Delta Live Tables (DLT) / Lakeflow pipeline. Currently, the original owner of the pipeline has moved out of the project,...

Latest Reply
szymon_dybczak
Esteemed Contributor III

Hi @Data_NXT, you can change the owner of a materialized view if you are both a metastore admin and a workspace admin. Use the following steps to change a materialized view's owner: open the materialized view in Catalog Explorer, then on the Overview ...
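The Catalog Explorer steps above also have a SQL form. A hedged sketch, assuming the general `ALTER ... OWNER TO` pattern applies to your materialized view; the three-part name and the new owning group are hypothetical.

```python
# Hypothetical three-part name and new owning group.
mv = "main.analytics.daily_sales_mv"
new_owner = "data-platform-team"

stmt = f"ALTER MATERIALIZED VIEW {mv} OWNER TO `{new_owner}`"
# spark.sql(stmt)  # run inside the workspace as metastore + workspace admin
```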

2 More Replies
Hritik_Moon
by New Contributor II
  • 747 Views
  • 2 replies
  • 2 kudos

Resolved! Save as Delta file in catalog

Hello, I have created a data frame from a CSV file. When I try to write it as: df_op_clean.write.format("delta").save("/Volumes/optimisation/trial") I get this error: Cannot access the UC Volume path from this location. Path was /Volumes/optimisation/trial/_d...

Latest Reply
-werners-
Esteemed Contributor III

Also, to add on this: avoid overlap between tables and Volumes. Create separate folders for tables and files. Unity Catalog does this too if you use managed tables/volumes.
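Following that advice, tabular data goes to a managed table rather than a Volume path, since Volumes are for non-tabular files. A minimal sketch with hypothetical catalog/schema/table names.

```python
# Write the dataframe as a managed Unity Catalog table (tabular data),
# not into a Volume path. The three-part name below is hypothetical.
target_table = "optimisation.default.trial_clean"

# In the workspace:
# df_op_clean.write.format("delta").saveAsTable(target_table)
#
# Volumes remain the place for files, e.g. a source CSV such as
# /Volumes/optimisation/<schema>/<volume>/raw/trial.csv  (hypothetical layout)
```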

1 More Replies
mbanxp
by New Contributor III
  • 387 Views
  • 2 replies
  • 1 kudos

Most suitable Data Promotion orchestration for multi-tenant data lake in Databricks

Hi there! I would like to find the most suitable orchestration process to promote data between medallion layers. I need to solve the following key architectural decision for scaling my multi-tenant data lake in Databricks. My setup: independent medal...

Latest Reply
sarahbhord
Databricks Employee

Hey mbanxp! The most scalable and maintainable orchestration pattern for multi-tenant medallion architectures in Databricks is to build independent pipelines per table for all clients, with each pipeline parameterized by client/tenant. Why this appro...

1 More Replies
jeremy98
by Honored Contributor
  • 1053 Views
  • 6 replies
  • 1 kudos

How to reference a workflow to use multiple GIT sources?

Hi community, is it possible for a workflow to reference multiple Git sources? Specifically, can different tasks within the same workflow point to different Git repositories or types of Git sources? Ty

Latest Reply
mai_luca
New Contributor III

A workflow can reference multiple Git sources: you can specify the Git information for each task. However, I am not sure you can have multiple Git providers for the same workspace...

5 More Replies
EricCournarie
by New Contributor III
  • 777 Views
  • 8 replies
  • 10 kudos

ResultSet metadata does not return correct type for TIMESTAMP_NTZ

Hello, using the JDBC driver, when I retrieve the metadata of a ResultSet, the type for a TIMESTAMP_NTZ column is not correct (it's reported as TIMESTAMP). My SQL is a simple SELECT * on a table that has a TIMESTAMP_NTZ column. This works when retrieving metad...

Latest Reply
Advika
Databricks Employee

Hello @EricCournarie! Just to confirm, were you initially using the JDBC driver v2.7.3? According to the release notes, this version adds support for the TIMESTAMP_NTZ data type.

7 More Replies
karuppusamy
by New Contributor II
  • 547 Views
  • 4 replies
  • 5 kudos

Resolved! Getting a warning message in Declarative Pipelines

Hi Team, while creating a Declarative ETL pipeline in Databricks, I tried to configure a notebook using the "Add existing assets" option by providing the notebook path. However, I received a warning message: "Legacy configuration detected. Use files in...

Latest Reply
karuppusamy
New Contributor II

Thank you @szymon_dybczak, now I have good clarity on this.

3 More Replies
Raj_DB
by Contributor
  • 887 Views
  • 8 replies
  • 7 kudos

Resolved! Streamlining Custom Job Notifications with a Centralized Email List

Hi Everyone, I am working on setting up success/failure notifications for a large number of jobs in our Databricks environment. The manual process of configuring email notifications through the UI for each job individually is not scalable and is becoming ver...

Latest Reply
nayan_wylde
Honored Contributor III

@Raj_DB Databricks sends notifications via its internal email service, which often requires the address to be a valid individual mailbox or a distribution list that accepts external mail. If your group email is a Microsoft 365 group, please check if "Allow...

7 More Replies
EricCournarie
by New Contributor III
  • 387 Views
  • 2 replies
  • 0 kudos

Filling a STRUCT field with a PreparedStatement in JDBC

Hello, I'm trying to fill a STRUCT field with a PreparedStatement in Java by giving a JSON string in the PreparedStatement. But it complains: Cannot resolve "infos" due to data type mismatch: cannot cast "STRING" to "STRUCT<AGE: BIGINT, NAME: STRING>"....

Latest Reply
szymon_dybczak
Esteemed Contributor III

Could you provide a sample of the JSON string along with the code you're using? Otherwise it will be hard for us to help you.

1 More Replies
yit
by Contributor III
  • 491 Views
  • 2 replies
  • 3 kudos

Resolved! Difference between libraries dlt and dp

In all Databricks documentation, the examples use import dlt to create streaming tables and views. But when generating sample Python code in an ETL pipeline, the imports in the sample are: import pyspark and import pipelines as dp. Which one is the correct libr...

Latest Reply
nayan_wylde
Honored Contributor III

@yit Functionally, they are equivalent concepts (declarative definitions for streaming tables, materialized views, expectations, CDC, etc.). The differences you'll notice are mostly naming/ergonomics: module name: Databricks docs & most existing notebo...
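Since the two module names expose the same declarative API, a notebook can be kept portable with a defensive import. A sketch under that assumption; outside a pipeline neither module is installed, so the final fallback keeps the file importable for local tooling.

```python
# Try the newer module name first, fall back to the classic `dlt` name.
try:
    import pipelines as dp   # name emitted by the newer sample generator
except ImportError:
    try:
        import dlt as dp     # name used throughout the existing docs
    except ImportError:
        dp = None            # running outside a pipeline (e.g. local tests)

# Decorators then work the same under either name:
# @dp.table(name="clean_orders")
# def clean_orders(): ...
```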

1 More Replies
ralphchan
by New Contributor II
  • 3891 Views
  • 5 replies
  • 0 kudos

Connect Oracle Fusion (ERP / HCM) to Databricks

Any suggestion to connect Oracle Fusion (ERP/HCM) to Databricks? I have explored a few options, including the use of Oracle Integration Cloud, but it requires a lot of customization.

Latest Reply
NikhilKamble
New Contributor II

Hey Ralph, Orbit datajump is one of the good options on the market. Try it out.

4 More Replies
kenmyers-8451
by Contributor
  • 926 Views
  • 4 replies
  • 0 kudos

bug with using parameters in a sql task

I am trying to make a SQL task, running on a serverless SQL warehouse, that takes a variable and uses it in the SQL file it executes; however, I am getting errors because Databricks keeps formatting it first with ...

Latest Reply
asuvorkin
New Contributor II

I have been trying to use templates as well and got the following string: LOCATION 's3://s3-company-data-' dev '-' 1122334455 '-eu-central-1/path_to_churn/main/'

3 More Replies
data-grassroots
by New Contributor III
  • 396 Views
  • 4 replies
  • 1 kudos

Resolved! ExcelWriter and local files

I have a couple of things going on here. First, to explain what I'm doing: I'm passing into a function an array of objects, each containing a dataframe. I want to write those dataframes to an Excel workbook, one dataframe per worksheet. That part ...

Latest Reply
Advika
Databricks Employee

Hello @data-grassroots! Were you able to resolve this? If any of the suggestions shared above helped, or if you found another solution, it would be great if you could mark it as the accepted solution or share your approach with the community.

3 More Replies
