Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

I am trying to read a CSV file using Databricks, and I am getting an error like: FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/world_bank.csv'

Venky
New Contributor III

I am trying to read a CSV file using Databricks, and I am getting an error like: FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/world_bank.csv'


18 REPLIES

-werners-
Esteemed Contributor III

I see you use pandas to read from DBFS.

But pandas will only read from local files; see this topic as well. It is about databricks-connect, but the same principles apply.

So what you should do is first read the file using spark.read.csv and then convert the Spark df to a pandas df.
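In a notebook that could look roughly like this; a minimal sketch using the path from the original question:

# Read the CSV with Spark first (Spark understands DBFS paths)
spark_df = spark.read.csv(
    "dbfs:/FileStore/tables/world_bank.csv",
    header=True,
    inferSchema=True,
)

# Then convert to pandas; this collects all rows to the driver,
# so it is only suitable for data that fits in driver memory
pandas_df = spark_df.toPandas()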

Eagle78
New Contributor III

I had the same issue: geopandas in Databricks notebooks does not open shapefiles from an Azure Storage mount.
I managed to copy the shapefile to the driver's local disk using

dbutils.fs.cp(shapefile_path, f"file:{local_shapefile_copy_dest_path}")

The 'file:' prefix proved to be crucial here.

And then:

gdf = gpd.read_file(local_shapefile_copy_dest_path)  # read the local copy made above
display(gdf)

I copy the results back to the DBFS mount using

dbutils.fs.cp(f"file:{geoparquet_path}", f"{raw_path}{geoparquet_file_basename}.parquet")

 


Eagle78
New Contributor III

I convert to parquet using

gdf.to_parquet(f"/dbfs{raw_path}/{file_name}.parquet") 
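As a quick sanity check (reusing the same raw_path and file_name from the snippet above), the parquet written through the /dbfs FUSE path can be read back with Spark; note that the geometry column comes back as raw WKB bytes rather than a geometry type:

verify_df = spark.read.parquet(f"{raw_path}/{file_name}.parquet")
display(verify_df)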

 

Alexis
New Contributor III

Hi

you can try:

my_df = (spark.read.format("csv")
    .option("inferSchema", "true")   # infer the column types from your data
    .option("sep", ",")              # if your file uses "," as the separator
    .option("header", "true")        # if your file has the header in the first row
    .load("/FileStore/tables/CREDIT_1.CSV"))

display(my_df)

From the above you can see that my_df is a Spark DataFrame, and from there you can continue with your code.
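Equivalently, the same options can be passed as keyword arguments in a single call:

my_df = spark.read.csv("/FileStore/tables/CREDIT_1.CSV", inferSchema=True, sep=",", header=True)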
