Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

rohit8491
by New Contributor III
  • 5652 Views
  • 3 replies
  • 8 kudos

Azure Databricks Connectivity with Power BI Cloud - Firewall Whitelisting

Hi Support Team, we want to connect to tables in Azure Databricks via Power BI. We are able to connect via Power BI Desktop, but when we try to publish the same report, the associated dataset does not refresh and throws an error from Powerbi.com. It...

Latest Reply
rohit8491
New Contributor III
  • 8 kudos

Hi Noor, thank you so much for your response. Please see the details below for the error message. I just found out that Power BI and Azure Databricks are in different tenants. Do you think that causes any issues? Do we need VNet peering to be configur...

2 More Replies
keenan_jones7
by New Contributor II
  • 11611 Views
  • 2 replies
  • 5 kudos

Cannot create job through Jobs API

import requests
import json

instance_id = 'abcd.azuredatabricks.net'
api_version = '/api/2.0'
api_command = '/jobs/create'
url = f"https://{instance_id}{api_version}{api_command}"
headers = {'Authorization': 'Bearer myToken'}
params = { "settings...

Latest Reply
rAlex
New Contributor III
  • 5 kudos

@keenan_jones7​ I had the same problem today. It looks like you've copied and pasted the JSON that Databricks displays in the GUI when you select View JSON from the dropdown menu when viewing a job. In order to use that JSON in a request to the Jobs ...

1 More Replies
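For readers hitting the same issue, here is a minimal sketch of a working jobs/create call; the workspace URL, token, cluster id, and notebook path are placeholders. Two things differ from the snippet in the question: the job settings sit at the top level of the payload (not under a "settings" key, as in the GUI's "View JSON" output), and the payload is sent as the JSON request body via json=, not as query parameters.

```python
import requests

instance_id = "abcd.azuredatabricks.net"  # placeholder workspace URL
url = f"https://{instance_id}/api/2.0/jobs/create"
headers = {"Authorization": "Bearer myToken"}  # placeholder token

# Unlike "View JSON", jobs/create takes the settings fields at the top
# level, with no surrounding "settings" key and no "job_id"/"created_time".
job_spec = {
    "name": "example-job",
    "existing_cluster_id": "1234-567890-abcde123",  # placeholder cluster id
    "notebook_task": {"notebook_path": "/Users/me@example.com/my_notebook"},
}

# Send the spec as the JSON request body, not as query parameters.
response = requests.post(url, headers=headers, json=job_spec)
response.raise_for_status()
print(response.json())  # e.g. {"job_id": 123}
```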
adrianlwn
by New Contributor III
  • 14569 Views
  • 14 replies
  • 16 kudos

How to activate ignoreChanges in a Delta Live Tables read_stream?

Hello everyone, I'm using DLT (Delta Live Tables) and I've implemented some Change Data Capture for deduplication purposes. Now I am creating a downstream table that will read the DLT as a stream (dlt.read_stream("<tablename>")). I keep receiving thi...

Latest Reply
gopínath
New Contributor II
  • 16 kudos

In DLT read_stream, we can't use ignoreChanges / ignoreDeletes. These configs help avoid the failures, but they actually ignore the operations done on the upstream. So you need to manually perform the deletes or updates in the downstrea...

13 More Replies
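For comparison, outside DLT a plain Structured Streaming read of a Delta table does accept this option. A minimal sketch, assuming a notebook-provided spark session and a placeholder table name; note the caveat from the reply above still applies, since rewritten rows are re-emitted and the sink must tolerate them.

```python
# Non-DLT Structured Streaming read of a Delta table. ignoreChanges makes
# the stream re-emit rows from rewritten files instead of failing, so the
# downstream sink must handle duplicates.
df = (
    spark.readStream
    .format("delta")
    .option("ignoreChanges", "true")
    .table("my_schema.my_table")  # placeholder table name
)
```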
Colter
by New Contributor II
  • 2452 Views
  • 3 replies
  • 0 kudos

Is there a way to use cluster policies within the Jobs API to define cluster configuration rather than in the Jobs API itself?

I want to create a cluster policy that is referenced by most of our repos/jobs so we have one place to update whenever there is a Spark version change or when we need to add additional Spark configurations. I figured cluster policies might be a good ...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Colter Nattrass​ Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answe...

2 More Replies
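A sketch of what this can look like: a job cluster in a jobs/create payload can reference a cluster policy by id, so the Spark version and shared configs live in one policy. All ids and values below are placeholders.

```python
# Fragment of a jobs/create payload: the job cluster points at a policy,
# and apply_policy_default_values pulls the policy's default values in,
# so per-job specs stay small.
new_cluster = {
    "policy_id": "ABC123DEF456",          # placeholder policy id
    "spark_version": "12.2.x-scala2.12",  # can instead be fixed by the policy
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "apply_policy_default_values": True,
}
```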
tototox
by New Contributor III
  • 3156 Views
  • 3 replies
  • 2 kudos

dbutils.fs.ls overlaps with managed storage error

I created a schema with that path as a managed location (abfss://~~@~~.dfs.core.windows.net/dejeong/). However, I dropped the schema with the cascade option, also went into the Azure portal and deleted the path directly, and made it again (abfss://~~@~~....

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @jin park​ Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your...

2 More Replies
Dean_Lovelace
by New Contributor III
  • 2960 Views
  • 3 replies
  • 4 kudos

What is the PySpark equivalent of FSCK REPAIR TABLE?

I am using the Delta format and occasionally get the following error: "xx.parquet referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table `DELETE` statement". FS...

Latest Reply
shan_chandra
Databricks Employee
  • 4 kudos

## Delta check when a file was added
%scala
(oldest-version-available to newest-version-available).map { version =>
  var df = spark.read.json(f"<delta-table-location>/_delta_log/$version%020d.json")
    .where("add is not null")
    .select("add.path")
  var ...

2 More Replies
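To answer the title question directly: FSCK REPAIR TABLE has no dedicated DeltaTable method, but you can run the SQL from PySpark via spark.sql. A minimal sketch with a placeholder table name:

```python
# Preview which files are missing from storage, then remove their
# dangling entries from the Delta transaction log.
spark.sql("FSCK REPAIR TABLE my_schema.my_table DRY RUN").show()
spark.sql("FSCK REPAIR TABLE my_schema.my_table")
```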
Dean_Lovelace
by New Contributor III
  • 4990 Views
  • 3 replies
  • 0 kudos

Delta Table Optimize Error

I have started getting an error message when running the following optimize command: deltaTable.optimize().executeCompaction(). The error: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Number of records changed after Optimi...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Dean Lovelace​: The error message suggests that the number of records in the Delta table changed after the optimize() command was run. The optimize() command is used to improve the performance of Delta tables by removing small files and compacting l...

2 More Replies
haraldh
by New Contributor II
  • 1834 Views
  • 1 replies
  • 2 kudos

Databricks JDBC driver connection pooling support

When using Camel JDBC with the Databricks JDBC driver I get an error: Caused by: java.sql.SQLFeatureNotSupportedException: [Databricks][JDBC](10220) Driver does not support this optional feature. Is there any way to work around this limitation?

Latest Reply
swethaNandan
Databricks Employee
  • 2 kudos

Tools like SDI can connect to a generic JDBC source such as Databricks SQL Warehouse via the SDI Camel JDBC adapter. Can you see if these will help you: https://help.sap.com/docs/HANA_SMART_DATA_INTEGRATION/7952ef28a6914997abc01745fef1b607/1247c9518...

System1999
by New Contributor III
  • 6051 Views
  • 7 replies
  • 0 kudos

My 'Data' menu item shows 'No Options' for Databases. How can I fix this?

Hi, I'm new to Databricks and I've signed up for the Community Edition. First, I've noticed that I cannot return to a previously created cluster, as I get a message telling me that restarting a cluster is not available to me. OK, inconvenient, but I...

Latest Reply
System1999
New Contributor III
  • 0 kudos

Hi @Suteja Kanuri​, I get the error message under Data before I've created a cluster. Then I still get it after I've created a cluster and a notebook (having attached the notebook to the cluster). Thanks.

6 More Replies
Student185
by New Contributor III
  • 9773 Views
  • 7 replies
  • 5 kudos

Resolved! Is the long-term free version for students still available now?

Dear sir/madam, I've tried lots of methods in order to access the long-term Databricks free version - the Community Edition for students. I also followed the instructions - Introduction to Databricks - in Coursera step by step: https://www.coursera.org/l...

Latest Reply
shreeves
New Contributor II
  • 5 kudos

Look for the "Community Edition" in small print below the button

6 More Replies
Anonymous
by Not applicable
  • 666 Views
  • 1 replies
  • 2 kudos

www.databricks.com

Dear Community - @Youssef Mrini​ will answer all your questions on April 19, 2023 from 9:00am to 10:00am GMT during the Databricks EMEA Office Hours. Make sure to join this amazing 'Ask Me Anything' session by Databricks - https://www.databricks.com/r...

Latest Reply
youssefmrini
Databricks Employee
  • 2 kudos

It was a successful Office Hours session. Make sure to join the next one.

youssefmrini
by Databricks Employee
  • 1612 Views
  • 1 replies
  • 0 kudos
Latest Reply
youssefmrini
Databricks Employee
  • 0 kudos

Make sure to watch the following video: https://www.youtube.com/watch?v=DkzwFTC7WWs. This section lists the requirements for Databricks Connect. Only Databricks Runtime 13.0 ML and Databricks Runtime 13.0 are supported. Only clusters that are compatible w...

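A minimal sketch of the Databricks Connect (v13) entry point those requirements refer to; the host, token, and cluster id below are placeholders.

```python
from databricks.connect import DatabricksSession

# Build a SparkSession that runs against a remote Databricks cluster.
spark = DatabricksSession.builder.remote(
    host="https://abcd.azuredatabricks.net",
    token="myToken",
    cluster_id="1234-567890-abcde123",
).getOrCreate()

print(spark.range(5).count())  # executes on the remote cluster
```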
Hubert-Dudek
by Esteemed Contributor III
  • 1392 Views
  • 2 replies
  • 8 kudos

Databricks has recently introduced a new SQL function allowing easy integration of LLM (Language Model) models with Databricks. This exciting new feat...

Databricks has recently introduced a new SQL function allowing easy integration of LLM (Language Model) models with Databricks. This exciting new feature simplifies calling LLM models, making them more accessible and user-friendly. To try it out, che...

Latest Reply
Vartika
Databricks Employee
  • 8 kudos

Hi @Hubert Dudek​, I wanted to take a moment to express our gratitude for sharing your valuable insights and information with us. Thank you for taking the time to share your thoughts with us. We truly appreciate your contribution. You are awesome! Cheer...

1 More Replies
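The truncated post does not name the function; the announcement from around that time was ai_generate_text(), since superseded by ai_query(). A hedged sketch in the current ai_query() form, with placeholder endpoint, table, and column names:

```python
# Call a model serving endpoint from SQL; 'my-llm-endpoint',
# product_reviews, and review_text are placeholders.
spark.sql("""
    SELECT ai_query(
        'my-llm-endpoint',
        CONCAT('Summarize this review: ', review_text)
    ) AS summary
    FROM product_reviews
""").show()
```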
JLSy
by New Contributor III
  • 14952 Views
  • 5 replies
  • 6 kudos

cannot convert Parquet type INT64 to Photon type string

I am receiving an error similar to the one in this post: https://community.databricks.com/s/question/0D58Y00009d8h4tSAA/cannot-convert-parquet-type-int64-to-photon-type-double. However, instead of type double, the error message states that the type can...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

@John Laurence Sy​: It sounds like you are encountering a schema conversion error when trying to read in a Parquet file that contains an INT64 column that cannot be converted to a string type. This error can occur when the Parquet file has a schema t...

4 More Replies
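One common workaround, sketched here under the assumption that the mismatch comes from declaring the column as a string in the reader schema: read the column with its on-disk INT64 type and cast afterwards in Spark. The column name and path are placeholders.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import LongType, StructField, StructType

# Read the column as LongType (Parquet INT64), then cast to string in
# Spark rather than asking the Parquet/Photon reader to convert it.
schema = StructType([StructField("id", LongType(), True)])
df = spark.read.schema(schema).parquet("/mnt/data/events")  # placeholder path
df = df.withColumn("id", F.col("id").cast("string"))
```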
