Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Hi all, I need your help with an issue I am facing. Currently we are using Databricks as a platform to build pipelines and execute our Talend ETL SQL converted into the Spark SQL framework, as we were facing issues in loading the history data int...
Below are the steps we have implemented to log in through SSO.
1. We have set up SSO and are able to log in to Databricks using IdP (SiemensID Authentication).
2. After successful authentication, we have done the configuration of redirecting the user to da...
Hello @Kaniz Fatma @Debayan Mukherjee, thanks for the response. We have raised this issue internally with the Databricks team and shared the details with them. I will post the solution once we find a breakthrough to resolve it.
py4j.security.Py4JSecurityException: Method public org.apache.spark.sql.streaming.DataStreamReader org.apache.spark.sql.SQLContext.readStream() is not whitelisted on class class org.apache.spark.sql.SQLContext

I already disabled the ACL for the cluster using "...
Hi @Ravi Teja, just a friendly follow-up. Do you still need help? If you do, please share more details, such as the DBR version and whether it is a standard or High Concurrency cluster.
Hi team, last year I acquired the SQL Analyst Associate badge, and it is due for renewal this January 2023. However, when I checked Databricks Academy, I couldn't find the course. Has it been retired or removed? If it still exists, can someone help me with the course d...
I have a streaming pipeline that ingests JSON files from a data lake using Auto Loader. These files are dumped there periodically. Mostly the files contain duplicate data, but there are occasional changes. I am trying to process these files into a dat...
For clarity, here is the final code that avoids duplicates, using @Suteja Kanuri's suggestion:

import dlt

@dlt.table
def currStudents_dedup():
    df = spark.readStream.format("delta").table("live.currStudents_ingest")
    return (
        df.drop...
Whenever using the displayHTML method, or any Python library that requires rendering HTML, we get the following error in the results: Uncaught SyntaxError: Invalid or unexpected token. We cannot reproduce this error reliably, and resizing the HTML window...
Hi, if you could share the whole error stack, it would help us understand the issue a little more clearly. Also, please tag @Debayan in your next response, which will notify me. Thank you!
I have the following error in Databricks when I want to unzip files: FileNotFoundError: [Errno 2] No such file or directory. But the file is there; I have already tried several ways and nothing works. I have tried modifying the path, placing /dbfs/mnt/dbfs/mnt/d...
I am going to keep this generic across all cloud provider storage options, as it is relevant across the board (GCS, S3, and Blob Storage). Nothing is mentioned in the docs as far as I can see. Is there a use case against enabling object versioning in cloud ...
Hi @Matt User, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
Hi team, good evening. Today I had a problem while taking the exam. My exam was at 11:30, but because of an audio problem it was rescheduled to 12:45. I then faced another problem: the questions would sometimes appear and sometimes not, and because of this I was not able to ta...
Hi @S Meghala, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
Hello, concerning Auto Loader (based on https://docs.databricks.com/ingestion/auto-loader/schema.html): so far, what I understand is that when it detects a schema update, the stream fails and I have to rerun it to make it work, which is fine. But once I rerun it, ...
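For reference, this fail-then-restart behavior is controlled by the cloudFiles schema options. A minimal configuration sketch (not runnable outside Databricks; the source and schema-location paths are hypothetical placeholders):

```python
# Sketch of an Auto Loader read with schema evolution enabled.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    # Where Auto Loader persists the inferred schema between runs.
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/schema")  # hypothetical path
    # "addNewColumns" (the default): the stream stops when new columns
    # appear, and picks them up automatically on the next start.
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load("/mnt/landing/json")  # hypothetical path
)
```

With a persistent schemaLocation, the restart after a schema change resumes with the updated schema rather than re-inferring from scratch.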
Hi @Lucien Arrio, hope all is well! Just wanted to check in to see whether you were able to resolve your issue, and whether you would be happy to share the solution or mark an answer as best. Otherwise, please let us know if you need more help. We'd love to hear from you. Thank...
Dear Community members,We want to extend our sincere gratitude for attending the Community event - March series on March 31st 2023. Your presence made the event a huge success, and we appreciate the time you took to join us. We were thrilled to hear ...
@Suteja Kanuri Hi Suteja. Great initiative. Please plan a common timezone between India and UK/EUR/US so that we can also attend. BTW is there any recorded session that we can go through?
What is the best practice for accelerating queries which look like the following?

win = Window.partitionBy('key1', 'key2').orderBy('timestamp')
df.select('timestamp', (F.col('col1') - F.lag('col1').over(win)).alias('col1_diff'))

I have tried to use OP...
Hi @Hanan Shteingart, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answ...
Hi @Machireddy Nikitha, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best an...
I am running a Jupyter notebook on a cluster with the following configuration: 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12); worker type: i3.xlarge, 30.5 GB memory, 4 cores; min 2 and max 8 workers.

cursor = conn.cursor()
cursor.execute(
    """
    ...
Hi, could you please confirm the usage of your cluster while running this job? You can monitor its performance, with different metrics, here: https://docs.databricks.com/clusters/clusters-manage.html#monitor-performance. Also, please tag @Debayan with...