Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

I am trying to read a CSV file in Databricks and I am getting an error: FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/world_bank.csv'

Venky
New Contributor III

I am trying to read a CSV file in Databricks, but I keep getting this error: FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/world_bank.csv'

(screenshot of the notebook code and the error attached)


Hubert-Dudek
Esteemed Contributor III

I see that you are using a databricks-course-cluster, which probably has some limited functionality. I am not sure where DBFS is mounted there. When you use dbutils, it displays paths on the DBFS mount (the DBFS file system).

Please use Spark code instead of pandas so it is executed properly:

df = spark.read.csv('dbfs:/FileStore/tables/world_bank.csv')
display(df)

Alexis
New Contributor III

Oops, I didn't see the other answers. Anyway, here is how to use the %fs magic to do the same thing as dbutils.fs.ls().

Before creating the Spark DataFrame, check whether the file actually exists at the mentioned path.

You can use the %fs magic like this:

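The original screenshot is not reproduced here; a minimal sketch of the same check, assuming the file was uploaded to /FileStore/tables, could be:

%fs ls /FileStore/tables/

or, equivalently, from Python with dbutils:

# list the files under /FileStore/tables to confirm the CSV is really there
# and to check the exact spelling of its name
display(dbutils.fs.ls("/FileStore/tables/"))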

klllmmm
New Contributor II

Please help, I have the same problem.

(screenshot attached)

-werners-
Esteemed Contributor III

I see you use pandas to read from DBFS. But pandas will only read from local files; see this topic as well. It is about databricks-connect, but the same principles apply.

So what you should do is first read the file using spark.read.csv and then convert the Spark DataFrame to a pandas DataFrame.
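For illustration, a minimal sketch of that approach (the path and options are assumptions based on the original question):

# read the CSV with Spark first...
spark_df = (spark.read
    .option("header", "true")       # assumes the first row is a header
    .option("inferSchema", "true")
    .csv("dbfs:/FileStore/tables/world_bank.csv"))

# ...then convert to a pandas DataFrame for pandas-based code
pandas_df = spark_df.toPandas()
pandas_df.head()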

Eagle78
New Contributor III

I had the same issue: geopandas in Databricks notebooks does not open shapefiles from an Azure Storage mount.
I managed to copy the shapefile to the driver's local disk using

 

dbutils.fs.cp(shapefile_path, f"file:{local_shapefile_copy_dest_path}") 

The 'file:' prefix proved to be crucial here.

and then: 

gdf = gpd.read_file(local_shapefile_copy_dest_path)  # read the local copy made above
display(gdf)

I then copy the results back to the DBFS mount using:

dbutils.fs.cp(f"file:{geoparquet_path}", f"{raw_path}{geoparquet_file_basename}.parquet")

 


Eagle78
New Contributor III

I convert to parquet using:

gdf.to_parquet(f"/dbfs{raw_path}/{file_name}.parquet") 

 

Alexis
New Contributor III

Hi

you can try:

my_df = (spark.read.format("csv")
    .option("inferSchema", "true")   # infer the column types from your data
    .option("sep", ",")              # if your file uses "," as the separator
    .option("header", "true")        # if your file has the header in the first row
    .load("/FileStore/tables/CREDIT_1.CSV"))

display(my_df)

From the above you can see that my_df is a Spark DataFrame, and from there you can continue with your own code.
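As a quick follow-up sketch (assuming the load above succeeded), you can verify the inferred types and, if the rest of your code is pandas-based, convert explicitly:

my_df.printSchema()        # check the types that inferSchema picked up
pdf = my_df.toPandas()     # optional: continue with pandas from here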
