Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

SparkFiles - strange behavior on Azure databricks (runtime 10)

Hubert-Dudek
Esteemed Contributor III

When you use:

from pyspark import SparkFiles
spark.sparkContext.addFile(url)

it adds the file to the non-DBFS path /local_disk0/, but when you then want to read the file:

spark.read.json(SparkFiles.get("file_name"))

it tries to read it from /dbfs/local_disk0/. I also tried file:// and many other creative approaches, and nothing works.

Of course it works after using %sh cp to copy the file from /local_disk0/ to /dbfs/local_disk0/.
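The copy workaround above can also be done from Python instead of a %sh cell. A minimal sketch, assuming the file already sits on the driver's local disk; the helper name `copy_to_dbfs` and the target directory are illustrative, not from the original post:

```python
import os
import shutil

def copy_to_dbfs(local_path, dbfs_dir="/dbfs/local_disk0"):
    """Copy a driver-local file into a DBFS-backed (FUSE-mounted) directory
    and return the destination path."""
    os.makedirs(dbfs_dir, exist_ok=True)
    dest = os.path.join(dbfs_dir, os.path.basename(local_path))
    shutil.copy(local_path, dest)
    return dest

# usage on Databricks (hypothetical; assumes addFile already downloaded the file):
# copy_to_dbfs(SparkFiles.get("file_name"))
# df = spark.read.json(SparkFiles.get("file_name"))  # the DBFS path now exists
```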

It seems to be a bug: as if addFile was switched to DBFS on Azure Databricks while SparkFiles was not (in vanilla Spark, addFile distributes the file to the workers and SparkFiles.get reads it back from them).

I also couldn't find any setting to manually specify the root directory for SparkFiles.

23 REPLIES

weldermartins
Honored Contributor

Hi @Kaniz Fatma, Ticket Number: #00125834.

It's been over a month since the ticket was opened, but still no response.

I tested it just now with Apache Spark 3.2.0 on the Azure platform, and it behaves the same way, failing with the message "File not found". But on community.cloud.databricks the path is found and it returns the expected result.

weldermartins
Honored Contributor

from pyspark import SparkFiles

municipios = "https://servicodados.ibge.gov.br/api/v1/localidades/municipios"
spark.sparkContext.addFile(municipios)

# note: "OVERRIDE" is not a standard JSON parse mode
# (valid values are PERMISSIVE, DROPMALFORMED, and FAILFAST)
municipiosDF = spark.read.option("multiLine", True).option("mode", "OVERRIDE").json("file://" + SparkFiles.get("municipios"))

I did not understand.

@Kaniz Fatma, could you please change the code above as you instructed?

Regards,

Welder Martins

weldermartins
Honored Contributor

Hi @Kaniz Fatma (Databricks), it ran without errors. The problem is that SparkFiles doesn't work on the Azure platform. I'm extracting data from the API through other functionality; for now I'm using the urllib module as a stopgap. RDD will be deprecated as of Apache Spark version 3.0.
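The urllib stopgap mentioned above might look something like the sketch below: fetch the API payload directly on the driver instead of going through addFile/SparkFiles. The function name `fetch_json` is illustrative, not from the original post:

```python
import json
import urllib.request

def fetch_json(url):
    """Download a JSON document and parse it into Python objects."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# usage on Databricks (hypothetical):
# municipios = fetch_json("https://servicodados.ibge.gov.br/api/v1/localidades/municipios")
# df = spark.createDataFrame(municipios)
```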

Thanks.

weldermartins
Honored Contributor

@Kaniz Fatma hi, do you have access to the tickets that were opened with Databricks? The ticket was opened in December 2021 and so far there has been no comment on a timeline. Thanks.

User16764241763
Honored Contributor

@Hubert Dudek

Have you tried with file:/// ?

I recall that starting with Spark 3.2, it honors the native Hadoop file system when no file-access protocol (scheme) is specified.
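A plain-Python illustration of that resolution rule (this mimics the behavior; it is not the Spark API): without a scheme, the bare path is handed to the cluster's default Hadoop filesystem, which on Databricks is DBFS-backed, while a file:/// scheme pins the read to the driver's local disk:

```python
def resolve(path, default_scheme="dbfs"):
    """Mimic filesystem-scheme resolution: return the path unchanged if it is
    already scheme-qualified, else qualify it with the default scheme."""
    if "://" in path or path.startswith("file:/"):
        return path  # already qualified, left untouched
    return f"{default_scheme}:{path}"

print(resolve("/local_disk0/file.json"))         # dbfs:/local_disk0/file.json
print(resolve("file:///local_disk0/file.json"))  # file:///local_disk0/file.json
```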

Hubert-Dudek
Esteemed Contributor III

Hi, that was a few months ago. I need to check it again with the new Databricks Runtime.

Hubert-Dudek
Esteemed Contributor III

I confirm that, as @Arvind Ravish said, adding file:/// solves the problem.


Hey,

But won't this allocated address change? It ought to work the way it does on community.cloud.databricks. But thanks for the feedback.

Hubert-Dudek
Esteemed Contributor III

Polished the syntax a bit:
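The screenshot of the polished cell is not preserved; a hedged reconstruction, reusing the IBGE URL from earlier in the thread. SparkFiles.get() returns an absolute driver-local path, so prefixing "file://" yields a file:/// URI; the sample path below is illustrative:

```python
# illustrative value for what SparkFiles.get("municipios") might return
local_path = "/local_disk0/spark-1234/userFiles-5678/municipios"
uri = "file://" + local_path  # absolute path, so this becomes file:///...
print(uri)

# Full cell on Databricks (requires a cluster; not runnable here):
# from pyspark import SparkFiles
# municipios = "https://servicodados.ibge.gov.br/api/v1/localidades/municipios"
# spark.sparkContext.addFile(municipios)
# municipiosDF = (spark.read
#     .option("multiLine", True)
#     .json("file://" + SparkFiles.get("municipios")))
```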
