Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

guangyi
by Contributor III
  • 959 Views
  • 1 reply
  • 0 kudos

How to create a single CSV file with a specified file name using Spark in Databricks?

I know how to use Spark in Databricks to create a CSV, but it always has lots of side effects. For example, here is my code: file_path = "dbfs:/mnt/target_folder/file.csv" df.write.mode("overwrite").csv(file_path, header=True) Then what I got is a folder ...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @guangyi, to disable the _committed_xxx, _started_xxx and _SUCCESS files you must set the Spark options below: spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile","false") spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "f...
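Putting the thread together, here is a minimal sketch of getting a single, exactly-named CSV. It assumes the truncated option value is "false" and uses a hypothetical scratch folder; coalesce(1) plus a rename is the standard workaround, not an official single-file API:

```python
# Suppress the _SUCCESS / _started_* / _committed_* marker files
# (second value assumed to be "false", completing the truncated reply).
spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false")
spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")

tmp_dir = "dbfs:/mnt/target_folder/_tmp_csv"     # hypothetical scratch folder
final_path = "dbfs:/mnt/target_folder/file.csv"

# coalesce(1) forces a single part file; Spark still writes it inside a folder.
df.coalesce(1).write.mode("overwrite").csv(tmp_dir, header=True)

# Move the lone part file to the exact name, then drop the scratch folder.
part = [f.path for f in dbutils.fs.ls(tmp_dir) if f.name.startswith("part-")][0]
dbutils.fs.mv(part, final_path)
dbutils.fs.rm(tmp_dir, recurse=True)
```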

TylerTamasaucka
by New Contributor II
  • 28533 Views
  • 5 replies
  • 2 kudos

org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'

I am trying to create a JAR for an Azure Databricks job, but some code that works when using the notebook interface does not work when calling the library through a job. The weird part is that the job will complete the first run successfully but on an...

Latest Reply
skaja
New Contributor II
  • 2 kudos

I am facing a similar issue when trying to use the from_utc_timestamp function. I am able to call the function from a Databricks notebook, but when I use the same function inside my Java JAR and run it as a job in Databricks, it gives the below error. Analys...

4 More Replies
User16826987838
by Contributor
  • 3471 Views
  • 2 replies
  • 0 kudos
Latest Reply
VivekChandran
New Contributor II
  • 0 kudos

Yes! A cluster's owner/creator can be changed with the REST API: POST /api/2.1/clusters/change-owner. Request body sample: { "cluster_id": "string", "owner_username": "string" }. Ref: Clusters API | Change cluster owner. Hope this helps!
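As a sketch of calling that endpoint from Python (the workspace URL, token, and IDs below are placeholders, and the caller needs admin rights):

```python
import requests

host = "https://<workspace-host>"   # placeholder workspace URL
token = "<admin-pat>"               # placeholder admin token

resp = requests.post(
    f"{host}/api/2.1/clusters/change-owner",
    headers={"Authorization": f"Bearer {token}"},
    json={"cluster_id": "0123-456789-abcdef", "owner_username": "new.owner@example.com"},
)
resp.raise_for_status()  # raises if the call failed
```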

1 More Replies
RamenBhar
by New Contributor II
  • 1045 Views
  • 4 replies
  • 0 kudos

How to solve a UDF performance issue with a Databricks SQL function?

Hi, I am dealing with a situation where I need to secure data at rest on storage (Azure Data Lake), hence saving the data as encrypted text into the Delta table. While serving, I want to create a dynamic view which will be created from the Delta table a...

Latest Reply
pavlosskev
New Contributor III
  • 0 kudos

I don't clearly understand your full problem, but I do know the following regarding UDFs: 1. PySpark UDFs are extremely slow, because Spark needs to deserialize the Java object (DataFrame), transform it with the Python UDF, then serialize it back. This h...
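To illustrate that serialization cost, here is a small sketch contrasting a row-at-a-time Python UDF with a vectorized pandas UDF (the upper-casing logic is a stand-in for the poster's decryption function):

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf, udf
from pyspark.sql.types import StringType

# Row-at-a-time Python UDF: every value crosses the JVM/Python boundary one by one.
@udf(StringType())
def decrypt_slow(value):
    return value.upper() if value is not None else None  # stand-in for real decryption

# Vectorized pandas UDF: values move in Arrow batches, cutting serialization overhead.
@pandas_udf(StringType())
def decrypt_fast(values: pd.Series) -> pd.Series:
    return values.str.upper()  # stand-in for real decryption
```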

3 More Replies
JonLaRose
by New Contributor III
  • 560 Views
  • 2 replies
  • 0 kudos

Unity Catalog external tables

What are the consistency guarantees that Databricks supplies for multiple writers, given that the written table is an external table? Are they different from the consistency guarantees given for managed tables? Thanks!

Latest Reply
JonLaRose
New Contributor III
  • 0 kudos

Thank you @Ajay-Pandey, that is helpful. One thing that I'm not sure about is how Databricks can use the same ACID mechanism that external tools use with external tables. For example, if an external Spark cluster writes Delta logs with a LogSt...
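For background on the LogStore being referenced: an external (non-Databricks) Spark cluster writing Delta to S3 typically opts into multi-writer safety via the DynamoDB-backed LogStore from the delta-storage package. A sketch of that session config, with the table and region names as placeholders:

```python
from pyspark.sql import SparkSession

# Sketch of an OSS Spark session set up for multi-writer Delta on S3,
# following the Delta Lake multi-cluster setup docs; names are placeholders.
spark = (
    SparkSession.builder
    .config("spark.delta.logStore.s3a.impl", "io.delta.storage.S3DynamoDBLogStore")
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.tableName", "delta_log")
    .config("spark.io.delta.storage.S3DynamoDBLogStore.ddb.region", "us-east-1")
    .getOrCreate()
)
```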

1 More Replies
ashraf1395
by Valued Contributor
  • 1137 Views
  • 1 reply
  • 0 kudos

Resolved! Authentication Issue while connecting to Databricks using Looker Studio

So previously I created source connections from Looker with Databricks using my personal access token. I followed this Databricks doc: https://docs.databricks.com/en/partners/bi/looker-studio.html But from 10 July, I think basic authentication has bee...

Latest Reply
menotron
Valued Contributor
  • 0 kudos

Hi, you would still connect using OAuth tokens. It is just that Databricks recommends using personal access tokens belonging to service principals instead of workspace users. To create tokens for service principals, see Manage tokens for a service pri...
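If it helps, a workspace admin can mint a token for a service principal through the Token Management API; a sketch with placeholder values (endpoint: POST /api/2.0/token-management/on-behalf-of/tokens):

```python
import requests

host = "https://<workspace-host>"   # placeholder
admin_token = "<admin-pat>"         # placeholder admin credential

# Mint a PAT on behalf of a service principal, then use it in Looker Studio.
resp = requests.post(
    f"{host}/api/2.0/token-management/on-behalf-of/tokens",
    headers={"Authorization": f"Bearer {admin_token}"},
    json={
        "application_id": "<service-principal-application-id>",
        "lifetime_seconds": 3600,
        "comment": "Looker Studio connection",
    },
)
resp.raise_for_status()
print(resp.json()["token_value"])   # the token itself; store it securely
```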

jiteshraut20
by New Contributor III
  • 1811 Views
  • 8 replies
  • 1 kudos

Resolved! Question: Issue with Overwatch Deployment on Databricks (on AWS) - Missing Tables in Gold Schema

Hi all, I'm working on setting up Overwatch in our Databricks workspace to monitor resources, and I've encountered an issue during the Overwatch deployment. I am able to deploy Overwatch, but the validation for the `Gold_jobRunCostPotentialFact` module fa...

Latest Reply
SriramMohanty
Databricks Employee
  • 1 kudos

Hi @jiteshraut20, 1) storage_prefix: it is updated in the documents; please refer to the config. 2) If the system table is in use, the recommended Databricks runtime version is 13.3 LTS. For other cases, 11.3 LTS should work seamlessly. Please see the docu...

7 More Replies
Data_Engineer3
by Contributor III
  • 2611 Views
  • 2 replies
  • 6 kudos

Getting error popup in Databricks

When I migrated to the new Databricks workspace, I started getting an error popup message continuously, and the indentation setting I changed keeps reverting to another value with every new login.

Latest Reply
Sivagurunathann
New Contributor II
  • 6 kudos

Hi, I am facing this issue too: "session expired" pop-ups appear frequently, every 3 minutes, once I start working in Databricks.

1 More Replies
ggsmith
by New Contributor III
  • 601 Views
  • 1 reply
  • 0 kudos

Resolved! Question: Decrypt many files with UDF

I have around 20 PGP files in a folder in my volume that I need to decrypt. I have a decryption function that accepts a file name and writes the decrypted file to a new folder in the same volume. I had thought I could create a Spark DataFrame with th...

Latest Reply
Brahmareddy
Honored Contributor
  • 0 kudos

Spark's error happens because the worker nodes can't access your local files. Instead of using Spark to decrypt, try doing it outside of Spark using Python's multiprocessing or a simple batch script for parallel processing. Another option is to move ...
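A minimal sketch of the multiprocessing suggestion, assuming a hypothetical decrypt_file(path) helper like the poster's and Unity Catalog volume paths:

```python
import os
from multiprocessing import Pool

SRC = "/Volumes/catalog/schema/vol/encrypted"   # hypothetical volume folders
DST = "/Volumes/catalog/schema/vol/decrypted"

def decrypt_file(path: str) -> str:
    # Stand-in for the poster's PGP decryption routine: read `path`,
    # decrypt, and write the plaintext into the output folder.
    out = os.path.join(DST, os.path.basename(path).removesuffix(".pgp"))
    ...  # actual decryption goes here
    return out

if __name__ == "__main__":
    files = [os.path.join(SRC, f) for f in os.listdir(SRC) if f.endswith(".pgp")]
    # ~20 files, so a small pool on the driver node is plenty.
    with Pool(processes=4) as pool:
        print(pool.map(decrypt_file, files))
```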

ramdasp1
by New Contributor
  • 662 Views
  • 3 replies
  • 2 kudos

Delta Table Properties

Hi, when I look at the properties of a Delta table I see these two properties that are set to a value of 1. I went through the manual for these properties, and this is what the manual says: delta.minReaderVersion: The minimum required protocol reader ver...

Latest Reply
Brahmareddy
Honored Contributor
  • 2 kudos

Hi Ramdas1, let me explain in simple terms with an example. A Delta table is like a special book for data. delta.minReaderVersion is the minimum version of the "reader" you need to open and read the book, while delta.minWriterVersion is the minimum...
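To make that concrete, here is how one might inspect or raise those properties on a table (the three-part name is a placeholder; raising a protocol version is one-way, so older clients lose access):

```python
# Inspect the current protocol versions of a Delta table (name is a placeholder).
spark.sql("SHOW TBLPROPERTIES main.default.my_table").show(truncate=False)

# Opt into a newer protocol; older readers/writers can no longer use the table.
spark.sql("""
    ALTER TABLE main.default.my_table
    SET TBLPROPERTIES ('delta.minReaderVersion' = '2', 'delta.minWriterVersion' = '5')
""")
```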

2 More Replies
Megan05
by New Contributor III
  • 3538 Views
  • 5 replies
  • 4 kudos

Resolved! Out of Memory/Connection Lost When Writing to External SQL Server from Databricks Using JDBC Connection

I am working on writing a large amount of data from Databricks to an external SQL Server using a JDBC connection. I keep getting timeout errors/connection lost, but digging deeper it appears to be a memory problem. I am wondering what cluster configura...
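For what it's worth, the usual levers here are bounding write parallelism and batch size on the JDBC sink; a sketch with placeholder connection details:

```python
# Sketch: push data to SQL Server in bounded batches so neither side over-buffers.
(
    df.repartition(8)   # caps concurrent connections hitting the server
      .write.format("jdbc")
      .option("url", "jdbc:sqlserver://<host>:1433;databaseName=<db>")  # placeholder
      .option("dbtable", "dbo.target_table")                            # placeholder
      .option("user", "<user>")
      .option("password", "<password>")
      .option("batchsize", 10000)   # rows per JDBC batch insert
      .mode("append")
      .save()
)
```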

Latest Reply
hotrabattecom
New Contributor II
  • 4 kudos

Thanks for the answer. I am also running into this problem.

4 More Replies
yvishal519
by Contributor
  • 4602 Views
  • 3 replies
  • 0 kudos

Resolved! Implementing Full Load Strategy with Delta Live Tables and Unity Catalog

Hello Databricks Community, I am seeking guidance on handling full load scenarios with Delta Live Tables (DLT) and Unity Catalog. Here's the situation I'm dealing with: We have a data folder in Azure Data Lake Storage (ADLS) where we use Auto Loader to...

Latest Reply
yvishal519
Contributor
  • 0 kudos

To efficiently manage full data loads, we can leverage a regex pattern to dynamically identify the latest data folders within our bronze layer. These folders typically contain the most recent data updates for our tables. By using a Python script, we ...
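A sketch of that folder-matching idea; the date-suffix layout, path, and pattern are assumptions for illustration:

```python
import re

# Assume bronze folders laid out like .../bronze/orders/load_date=20240815/
base = "dbfs:/mnt/bronze/orders"              # hypothetical path
pattern = re.compile(r"load_date=(\d{8})/?$")

dated = [
    (m.group(1), f.path)
    for f in dbutils.fs.ls(base)
    if (m := pattern.search(f.path))
]
# yyyymmdd sorts lexicographically in date order, so max() picks the latest folder.
latest = max(dated)[1] if dated else None
print(latest)
```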

2 More Replies
novytskyi
by New Contributor
  • 283 Views
  • 0 replies
  • 0 kudos

Timeout for dbutils.jobs.taskValues.set(key, value)

I have a job that calls a notebook with the dbutils.jobs.taskValues.set(key, value) method and assigns around 20 parameters. When I run it, it works. But when I try to run 2 or more copies of the job with different parameters, it fails with an error on differen...
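For context on the call in question, a minimal set/get pair looks like the sketch below (task and key names are placeholders; get() must reference the producing task's name):

```python
# In the producing notebook task:
dbutils.jobs.taskValues.set(key="row_count", value=42)

# In a downstream task of the same job run ("ingest" is the producing task's name):
count = dbutils.jobs.taskValues.get(taskKey="ingest", key="row_count", default=0)
```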

