Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

EricCournarie
by New Contributor III
  • 510 Views
  • 8 replies
  • 10 kudos

ResultSet metadata does not return correct type for TIMESTAMP_NTZ

Hello, using the JDBC driver, when I retrieve the metadata of a ResultSet, the type reported for a TIMESTAMP_NTZ column is not correct (it comes back as TIMESTAMP). My SQL is a simple SELECT * on a table with a TIMESTAMP_NTZ column. This works when retrieving metad...

Latest Reply
Advika
Databricks Employee
  • 10 kudos

Hello @EricCournarie! Just to confirm, were you initially using the JDBC driver v2.7.3? According to the release notes, this version adds support for the TIMESTAMP_NTZ data type.

7 More Replies
karuppusamy
by New Contributor
  • 322 Views
  • 4 replies
  • 5 kudos

Resolved! Getting a warning message in Declarative Pipelines

Hi Team, While creating a Declarative ETL pipeline in Databricks, I tried to configure a notebook using the "Add existing assets" option by providing the notebook path. However, I received a warning message: "Legacy configuration detected. Use files in...

Latest Reply
karuppusamy
New Contributor
  • 5 kudos

Thank you @szymon_dybczak, now this is clear to me.

3 More Replies
Raj_DB
by Contributor
  • 434 Views
  • 8 replies
  • 7 kudos

Resolved! Streamlining Custom Job Notifications with a Centralized Email List

Hi Everyone, I am working on setting up success/failure notifications for a large number of jobs in our Databricks environment. The manual process of configuring email notifications through the UI for each job individually is not scalable and is becoming ver...

Latest Reply
nayan_wylde
Honored Contributor III
  • 7 kudos

@Raj_DB Databricks sends notifications via its internal email service, which often requires the address to be a valid individual mailbox or a distribution list that accepts external mail. If your group email is a Microsoft 365 group, please check if "Allow...

7 More Replies
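A scalable alternative for threads like this is scripting the Jobs API instead of clicking through the UI. Below is a minimal Python sketch that only builds the request body for the public `POST /api/2.1/jobs/update` endpoint; the distribution-list address and job IDs are hypothetical placeholders, and the authenticated HTTP call itself is left out.

```python
# Sketch: centralize success/failure notification emails for many jobs.
# DIST_LIST and JOB_IDS are invented placeholders, not real values.
DIST_LIST = ["data-alerts@example.com"]  # assumption: accepts external mail
JOB_IDS = [101, 102, 103]

def build_update_payload(job_id, emails):
    """Request body for POST /api/2.1/jobs/update setting notification emails."""
    return {
        "job_id": job_id,
        "new_settings": {
            "email_notifications": {
                "on_success": emails,
                "on_failure": emails,
            },
        },
    }

payloads = [build_update_payload(j, DIST_LIST) for j in JOB_IDS]
```

Keeping the list in one place (a config file or secret) means adding a recipient later is a single change plus a loop over job IDs.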
EricCournarie
by New Contributor III
  • 199 Views
  • 2 replies
  • 0 kudos

Filling a STRUCT field with a PreparedStatement in JDBC

Hello, I'm trying to fill a STRUCT field with a PreparedStatement in Java by passing a JSON string to the PreparedStatement. But it complains: Cannot resolve "infos" due to data type mismatch: cannot cast "STRING" to "STRUCT<AGE: BIGINT, NAME: STRING>"...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Could you provide a sample of the JSON string along with the code you're using? Otherwise it will be hard for us to help you.

1 More Replies
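A common workaround for this kind of STRING-to-STRUCT mismatch (a sketch, not an official driver feature) is to bind the JSON as a plain string parameter and let the SQL engine do the conversion with `from_json`. The table and column names below are invented to match the error message.

```python
# Build the parameterized SQL so the driver binds a STRING parameter and
# the engine casts it server-side. Table/column names are hypothetical.
STRUCT_SCHEMA = "STRUCT<AGE: BIGINT, NAME: STRING>"

sql = (
    "INSERT INTO people (infos) "
    f"SELECT from_json(?, '{STRUCT_SCHEMA}')"
)
# On the Java side, the JSON would then be bound with setString, e.g.:
json_param = '{"AGE": 30, "NAME": "Eric"}'
```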
yit
by Contributor
  • 210 Views
  • 2 replies
  • 3 kudos

Resolved! Difference between libraries dlt and dp

In all Databricks documentation, the examples use import dlt to create streaming tables and views. But when generating sample Python code in an ETL pipeline, the import in the sample is: from pyspark import pipelines as dp. Which one is the correct libr...

Latest Reply
nayan_wylde
Honored Contributor III
  • 3 kudos

@yit Functionally, they are equivalent concepts (declarative definitions for streaming tables, materialized views, expectations, CDC, etc.). The differences you'll notice are mostly naming/ergonomics. Module name: Databricks docs & most existing notebo...

1 More Replies
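The two equivalent import styles from this thread can be sketched side by side. Note this is a hedged illustration: neither module is importable outside a Databricks pipeline runtime, so the sketch falls back gracefully when run locally.

```python
# Sketch of the two equivalent declarative-pipeline import styles.
# Assumption: this only resolves inside a Databricks pipeline runtime.
try:
    from pyspark import pipelines as dp   # name used in generated samples
except ImportError:
    try:
        import dlt as dp                  # classic name used in the docs
    except ImportError:
        dp = None                         # not running in a pipeline runtime

# Inside a pipeline, @dp.table / @dlt.table decorators behave identically.
```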
SuMiT1
by New Contributor III
  • 337 Views
  • 8 replies
  • 3 kudos

Flattening JSON in Databricks

I have chatbot data. I read an ADLS JSON file in Databricks and stored the output in a dataframe. In that table, two columns contain JSON data but their data type is string: 1. content 2. metadata. Now I have to flatten the data but I am not getting how to do tha...

Latest Reply
SuMiT1
New Contributor III
  • 3 kudos

Hi @szymon_dybczak, I gave the wrong content JSON value. Here is the updated one; could you please share the code for this? It would be helpful. You gave the code already but I am getting confused. { "activities": [ { "va...

7 More Replies
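In Spark the usual fix for string columns holding JSON is `from_json` with an explicit schema followed by `explode`, but the flattening idea itself can be shown with plain Python. This is a local sketch only; the nested keys are invented, not the poster's actual schema.

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dot-separated keys."""
    out = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            out.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            out.update(flatten(v, f"{prefix}{i}."))
    else:
        out[prefix[:-1]] = obj  # drop the trailing dot
    return out

row = json.loads('{"content": {"activities": [{"value": "hi"}]}}')
print(flatten(row))  # {'content.activities.0.value': 'hi'}
```

The same shape of recursion is what a schema-driven `from_json` + `explode` chain expresses declaratively in Spark.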
ralphchan
by New Contributor II
  • 3564 Views
  • 5 replies
  • 0 kudos

Connect Oracle Fusion (ERP / HCM) to Databricks

Any suggestion to connect Oracle Fusion (ERP/HCM) to Databricks? I have explored a few options, including the use of Oracle Integration Cloud, but it requires a lot of customization.

Latest Reply
NikhilKamble
New Contributor
  • 0 kudos

Hey Ralph, Orbit datajump is one of the good options on the market. Try it out.

4 More Replies
kenmyers-8451
by Contributor
  • 860 Views
  • 4 replies
  • 0 kudos

Bug with using parameters in a SQL task

I am trying to make a SQL task that runs on a serverless SQL warehouse and takes a variable used in the SQL file it executes. However, I am getting errors because Databricks keeps formatting it first with ...

Latest Reply
asuvorkin
New Contributor
  • 0 kudos

I have been trying to use templates as well and got the following string: LOCATION 's3://s3-company-data-' dev '-' 1122334455 '-eu-central-1/path_to_churn/main/'

3 More Replies
data-grassroots
by New Contributor III
  • 263 Views
  • 4 replies
  • 1 kudos

Resolved! ExcelWriter and local files

I have a couple of things going on here. First, to explain what I'm doing: I'm passing an array of objects into a function, each containing a dataframe per item. I want to write those dataframes to an Excel workbook, one dataframe per worksheet. That part ...

Latest Reply
Advika
Databricks Employee
  • 1 kudos

Hello @data-grassroots! Were you able to resolve this? If any of the suggestions shared above helped, or if you found another solution, it would be great if you could mark it as the accepted solution or share your approach with the community.

3 More Replies
fjrodriguez
by New Contributor III
  • 200 Views
  • 5 replies
  • 4 kudos

Resolved! Self Dependency TumblingWindowTrigger in adf

Hey! I would like to migrate an ADF batch ingestion that has a TumblingWindowTrigger on top of the pipeline, which pretty much checks every 15 min if a file has landed; normally the files land on a daily basis, so it will be processed accordingly once in a d...

Latest Reply
fjrodriguez
New Contributor III
  • 4 kudos

Hi @szymon_dybczak, sounds reasonable, I will propose this approach. Thanks

4 More Replies
Hritik_Moon
by New Contributor II
  • 528 Views
  • 12 replies
  • 17 kudos

Accessing Spark UI in free edition

Hello, is it possible to access the Spark UI in the Free Edition? I want to check tasks and stages. Ultimately I am working on how to check for data skew.

Latest Reply
Hritik_Moon
New Contributor II
  • 17 kudos

@szymon_dybczak @BS_THE_ANALYST Is there a specific guide or flow to become a better Databricks data engineer? I am learning as topics come up. I'm finding it really difficult to maintain a flow and I lose track.

11 More Replies
perenehal
by New Contributor
  • 105 Views
  • 2 replies
  • 1 kudos

Unable to Verify Account – "User is not a member of this workspace" in Community Edition

I am encountering an issue during the verification process while trying to access my Databricks account. I received the verification code via email; however, when I attempt to verify it, I receive the following error message: "User is not a member of t...

Latest Reply
Advika
Databricks Employee
  • 1 kudos

Hello @perenehal! The error indicates that you haven’t created an account in the Databricks Community Edition. Currently, creating new accounts for Community Edition is not possible. However, if you already had an existing account and are still seein...

1 More Replies
JanAkhi919
by New Contributor
  • 3227 Views
  • 2 replies
  • 1 kudos

How is agentic AI different from AI agents?

How is agentic AI different from AI agents?

Latest Reply
CrossMLPvtLtd
New Contributor
  • 1 kudos

Hey there! Let me break this down simply. What's an AI agent? An AI agent is like a smart worker who follows instructions for one specific job. Think of a customer service chatbot that answers "Where's my order?" perfectly. But ask something complex li...

1 More Replies
bunny1174
by New Contributor
  • 75 Views
  • 1 reply
  • 1 kudos

Spark Streaming loading only 1k to 5k rows into a Delta table

Hi Team, I have 4-5 million files in S3, around 1.5 GB of data in total with 9 million records. When I try to use Auto Loader to read the data with readStream and write to a Delta table, the processing takes too much time; it is loading from 1k t...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @bunny1174, You have 4-5 million files in S3 and their total size is 1.5 GB; this clearly indicates a small-files problem. You need to compact those files into bigger ones. There's no way your pipeline will be performant if you have that many files, and the...

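For context on the reply above, Auto Loader's documented batch-size options can spread a small-files backlog across larger micro-batches. A hedged sketch follows; the format and numbers are placeholders, not tuning advice, and compaction of the source files is still the real fix.

```python
# Sketch: Auto Loader options that bound each micro-batch when a path
# contains millions of tiny files. Values here are illustrative only.
autoloader_options = {
    "cloudFiles.format": "json",                # assumed source format
    "cloudFiles.maxFilesPerTrigger": "50000",   # more small files per batch
    "cloudFiles.maxBytesPerTrigger": "10g",     # cap on bytes per batch
}

# In a Databricks notebook this would feed a readStream, e.g.:
# spark.readStream.format("cloudFiles").options(**autoloader_options).load(path)
```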
liu
by New Contributor III
  • 170 Views
  • 7 replies
  • 4 kudos

Configure AWS authentication for serverless Spark

I only have an AWS Access Key ID and Secret Access Key, and I want to use this information to access S3.However, the official documentation states that I need to set the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables, but I cannot ...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 4 kudos

Hi @liu, The proper way is to go to your cluster; in the Advanced section you can set them up. That way they will be scoped at the cluster level. It's recommended to store the values themselves in a secret scope and expose them as environment variables: Use a secret in a Spa...

6 More Replies
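As a concrete illustration of the reply above, the cluster's Advanced options > Spark > Environment variables section would contain secret references rather than plaintext keys. The scope and key names below are hypothetical:

```
AWS_ACCESS_KEY_ID={{secrets/aws_scope/access_key_id}}
AWS_SECRET_ACCESS_KEY={{secrets/aws_scope/secret_access_key}}
```

The `{{secrets/<scope>/<key>}}` syntax resolves the secret at cluster start, so the credential values never appear in the cluster configuration or logs.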
